Not a bad solution. I've done something similar, but I let each server query itself and store the results in a table. Then I have a master server that queries all of these servers (using DTS with a transform data task) to roll up all the information.
A couple of things I found on jobs:
1. I don't like to report on individual steps. I just want to know what failed; I can dig in if it's important. Also, for some jobs step-level reporting gets cumbersome, like log dumps. We dump every 15 minutes, so having that fail all night really munges up the report. Instead, I only report a single failed instance per job.
2. I store the last time I ran the report. Why? If the report job fails today and I spend a day getting it fixed, I want the report to return everything I may have missed, so I'll need more than a day's worth of data. This handles weekends as well.
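Both points above boil down to the same filtering pass over the job history: keep only failures newer than the stored last-report timestamp, and collapse repeated failures of the same job into one entry. Here's a minimal sketch of that logic in Python; the `history` rows, the `failed_jobs_since` helper, and the sample timestamps are all hypothetical, standing in for whatever the rollup table on the master server actually holds.

```python
from datetime import datetime

# Hypothetical rolled-up job-history rows: (job_name, run_time, succeeded).
history = [
    ("LogDump", datetime(2024, 1, 6, 1, 0),  False),
    ("LogDump", datetime(2024, 1, 6, 1, 15), False),  # same job failing all night
    ("LogDump", datetime(2024, 1, 6, 1, 30), False),
    ("Backup",  datetime(2024, 1, 6, 2, 0),  True),
    ("Reindex", datetime(2024, 1, 5, 23, 0), False),
]

def failed_jobs_since(history, last_report_run):
    """Return one entry per failed job since the watermark,
    keeping only the most recent failure for each job."""
    latest = {}
    for job, run_time, ok in history:
        if ok or run_time <= last_report_run:
            continue
        if job not in latest or run_time > latest[job]:
            latest[job] = run_time
    return latest

# Watermark stored from the previous successful report run. If the
# report itself was broken for a day (or a weekend), the window
# automatically widens to cover everything missed in between.
last_report_run = datetime(2024, 1, 5, 8, 0)

for job, when in failed_jobs_since(history, last_report_run).items():
    print(f"{job} last failed at {when}")
```

The three overnight LogDump failures show up as a single line, and after the report runs you'd write the current time back as the new watermark.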