My thoughts exactly... Uh... why not use the article and publication validation that ships with the product and has been there for multiple versions?
sp_article_validation lets you validate by rowcount alone or by rowcount plus checksum
sp_publication_validation just runs sp_article_validation for all articles in the publication
Both account for filtering. The reason they both have a checksum option is that it is possible for the row counts to match while the data is still out of sync.
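As a quick sketch of how you'd call them (publication and article names here are placeholders; check Books Online for the full parameter lists):

```sql
-- Run at the publisher, in the publication database.
-- @rowcount_only: 1 = rowcount check only, 2 = rowcount plus binary checksum.
EXEC sp_article_validation
    @publication   = N'MyPublication',  -- placeholder name
    @article       = N'MyArticle',      -- placeholder name
    @rowcount_only = 2;                 -- rowcount + checksum

-- Or validate every article in the publication in one call.
EXEC sp_publication_validation
    @publication   = N'MyPublication',
    @rowcount_only = 2;
```

The distribution agent writes the validation results to the agent history, so you can watch them in Replication Monitor.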
If you are running replication on SQL Server 2005 or higher, tracer tokens tell you precisely what your throughput is. How fast a single article is sending data is completely irrelevant, because the replication engine doesn't transmit on an article-by-article basis. It transmits in batches for the entire publication, so the throughput of one article is affected by every other article in the publication. As long as you are posting tracer tokens at regular intervals, then when things fall behind, you can use the tracer tokens to determine the exact time of day as of which your subscriber was caught up to the publisher.
Additionally, the replication monitor will show you how many rows are still pending at any given time for each publication. It will also use the most recent throughput numbers (either from a tracer token or a batch that was just replicated) to compute "how long until I catch up".
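Posting and reading a tracer token is just two procedure calls (again, the publication name is a placeholder):

```sql
-- Run at the publisher, in the publication database.
DECLARE @token_id int;  -- illustrative variable name

-- Drop a tracer token into the transaction log for this publication.
EXEC sp_posttracertoken
    @publication     = N'MyPublication',  -- placeholder name
    @tracer_token_id = @token_id OUTPUT;

-- Later: see when the token reached the distributor and each subscriber,
-- including distributor, subscriber, and overall latency.
EXEC sp_helptracertokenhistory
    @publication = N'MyPublication',
    @tracer_id   = @token_id;
```

Schedule the first call from a job at a regular interval and you get a continuous latency history for the publication.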
There is a special set of validation procedures for merge replication, but tracer tokens are only valid for transactional replication.
While it's a nice academic exercise, I wouldn't use anything except the built-in validation procedures plus tracer tokens.
President - Champion Valley Software, Inc.