This definitely has a practical application.
Quite possibly this is no longer relevant with SQL 2k5, but I used to do this extensively in a SQL 2k environment with immediate transactional replication. Large deletes caused me two problems: the blocking one mentioned earlier, and the fact that a large delete could fill up my transactional replication queue and make more important updates wait behind it.
I'm not working on that system anymore, so I don't know whether SQL 2k5's transactional replication is any different. Another way to solve the problem would have been to simply not replicate deletes (run the deletes separately on the primary and the subscribers), but that would have required a bunch of special handling in the software we wrote to set up the replication.
A final note - I approached this with SET ROWCOUNT to limit the number of rows each DELETE touched, and actually polled the database every minute to delete a small batch of rows until the table was cleaned up. As horrible as that sounds, it kept my transaction logs from being overwhelmed with deletes and falling behind on replicating more important things.
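The batching pattern itself is portable beyond SET ROWCOUNT: delete a capped batch per transaction, commit, repeat until nothing is left. A minimal sketch in Python against an in-memory SQLite database (the `audit_log` table, its columns, and the batch size are made up for illustration; in my original setup each batch ran from a scheduled job polling every minute, not a tight loop):

```python
import sqlite3

BATCH_SIZE = 500  # rows per DELETE; small batches keep locks and log growth short

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO audit_log (status) VALUES (?)", [("old",)] * 2000)
conn.commit()

deleted = 1
while deleted:
    # Delete one small batch per transaction instead of all rows at once.
    # (SQLite lacks DELETE ... LIMIT by default, so select the batch by key.)
    cur = conn.execute(
        "DELETE FROM audit_log WHERE id IN "
        "(SELECT id FROM audit_log WHERE status = ? LIMIT ?)",
        ("old", BATCH_SIZE))
    conn.commit()
    deleted = cur.rowcount  # 0 once the last batch is gone
```

Each commit releases locks and lets the replication/log machinery drain before the next batch, which is exactly what the every-minute polling bought me.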