My recommendation is still going to be to get a monitoring tool.
If that is not an option, you will need a linked server, SSIS, or something similar in place so your central server can query all of the other servers and gather the information. What will make that fun is deciding what sort of limits you put in place: is it a generic limit (for example, send an alert when less than 10% of the log is free), OR is it configured per server?
Personally, I like the configured-per-server approach, as having 10% free on a 4 GB tlog is a lot different than 10% free on a 1 TB tlog. And it is likely that your databases have different tlog size configurations.
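A per-server (or per-database) threshold could live in a small config table on the central server. This is just a sketch; the table and column names are mine, not anything built in:

```sql
-- Hypothetical threshold table on the central server.
-- One row per server/database pair; the default matches the generic 10% rule.
CREATE TABLE dbo.LogSpaceThreshold
(
    ServerName     sysname       NOT NULL,
    DatabaseName   sysname       NOT NULL,
    MinPercentFree decimal(5, 2) NOT NULL
        CONSTRAINT DF_LogSpaceThreshold_MinPercentFree DEFAULT (10.00),
    CONSTRAINT PK_LogSpaceThreshold PRIMARY KEY (ServerName, DatabaseName)
);
```

The alerting job then joins the collected numbers against this table instead of hard-coding a single percentage.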
The other fun part is that your central server will need a scheduled job to handle the data retrieval and alerting. So you will also need to determine how frequently it runs AND make sure the runs don't overlap. You also need to decide whether you care about historical data, as that will change the design as well.
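If you do want history, the collection job can write each run into a table like the hypothetical one below (names are illustrative). Keeping history also gives you trending for free, which helps with capacity planning:

```sql
-- Hypothetical history table; one row per database per collection run.
-- Truncate or archive old rows on whatever retention schedule you pick.
CREATE TABLE dbo.LogSpaceHistory
(
    CollectedAt  datetime2(0)   NOT NULL
        CONSTRAINT DF_LogSpaceHistory_CollectedAt DEFAULT (sysutcdatetime()),
    ServerName   sysname        NOT NULL,
    DatabaseName sysname        NOT NULL,
    LogSizeMB    decimal(18, 2) NOT NULL,
    UsedPercent  decimal(5, 2)  NOT NULL
);
```

If you don't care about history, the job can instead overwrite a single "latest snapshot" table and alert off of that.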
My opinion: you are reinventing the wheel and would benefit from a DB monitoring tool. But if that decision is out of your control, I would decide whether you want to use linked servers OR SSIS or some other method. The pain-in-the-butt part is that any time you create a new database, you will likely need to update the process. With the linked server approach, you can query [LinkedServer].[master].[sys].[databases] to get the list of databases, then use some fun dynamic SQL to query all of the log sizes (using the DMV sys.dm_db_log_space_usage).
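A minimal sketch of that dynamic SQL, assuming a linked server named [LinkedServer] already exists. Since sys.dm_db_log_space_usage reports on the current database, the generated batch switches context per database and runs remotely via EXEC ... AT:

```sql
-- Build one batch that reports log usage for every online database
-- on the remote instance, then execute it there.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'USE ' + QUOTENAME(name) + N';
SELECT DB_NAME() AS DatabaseName,
       total_log_size_in_bytes / 1048576.0 AS LogSizeMB,
       used_log_space_in_percent AS UsedPercent
FROM sys.dm_db_log_space_usage;
'
FROM [LinkedServer].[master].[sys].[databases]
WHERE state_desc = N'ONLINE';

EXEC (@sql) AT [LinkedServer];
```

Each database comes back as its own result set with the same shape, so on the central server you can capture them all in one go with INSERT ... EXEC into your collection table.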
The problem that MAY come up is that this DMV isn't available in all versions of SQL Server (2008 R2, for example, doesn't have it). But if all of your SQL instances are 2016 or newer, you should be good to go.
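If you're not sure what you have, a quick way to check each instance is to parse the major version out of SERVERPROPERTY('ProductVersion'), which works on old versions too:

```sql
-- Major version 11 = SQL Server 2012 (where the DMV appeared),
-- 13 = 2016, and so on.
SELECT CAST(
    PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128)), 4)
    AS int) AS MajorVersion;
```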