Hmm - there are some nasty potential problems with doing what you describe, and I would not recommend it in production:
1) what if the clustered index being rebuilt is very large? How do you cope with the resulting potential backlog of transactions on the principal, and the probable large REDO queue on the mirror? What about the transaction log growth on the principal from having to cope with the fully-logged index rebuild? What about the knock-on effect on log shipping, transactional replication, etc?
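As a rough way to gauge that first concern, you can watch the mirroring state, the mirror's REDO queue, and log space usage before attempting anything like this (a sketch only; the thresholds you'd act on depend entirely on your environment):

```sql
-- Mirroring state and role for every mirrored database
SELECT DB_NAME(database_id) AS [database],
       mirroring_state_desc,
       mirroring_role_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;

-- Size of the REDO queue (run on the mirror; value is in KB)
SELECT instance_name AS [database],
       cntr_value   AS redo_queue_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name = 'Redo Queue KB';

-- Transaction log size and percent used on the principal
DBCC SQLPERF (LOGSPACE);
```

A large REDO queue that keeps growing during the rebuild is exactly the backlog problem described above.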
2) what if the I/O subsystem on the (new) mirror is damaged and the rebuild cannot be replayed? What do you suggest as the way forward if the mirror stops with a failure during replay of one of the log records from the index rebuild?
And apart from that, you don't go into details of how to make sure the problem won't happen again (i.e. root cause analysis of the original failure).
Depending on database size and network bandwidth, my recommendation may be to break the mirroring partnership, do root-cause analysis to make sure the I/O subsystem on the old principal is sound, and then re-initialize the partnership.
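That break-and-reinitialize path can be sketched in T-SQL along these lines (database name, file paths, and endpoint addresses are illustrative placeholders, and this assumes the standard backup/restore method of initializing a mirror):

```sql
-- 1) On the principal: break the mirroring partnership
ALTER DATABASE SalesDB SET PARTNER OFF;

-- 2) After root-cause analysis of the I/O subsystem, re-seed the
--    mirror from fresh backups taken on the principal
BACKUP DATABASE SalesDB TO DISK = N'X:\Backups\SalesDB.bak';
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB.trn';

-- 3) On the mirror: restore both WITH NORECOVERY so the database
--    can accept further log records
RESTORE DATABASE SalesDB FROM DISK = N'X:\Backups\SalesDB.bak'
    WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'X:\Backups\SalesDB.trn'
    WITH NORECOVERY;

-- 4) Re-establish the partnership: mirror first, then principal
-- On the mirror:
ALTER DATABASE SalesDB SET PARTNER = N'TCP://PrincipalServer:5022';
-- On the principal:
ALTER DATABASE SalesDB SET PARTNER = N'TCP://MirrorServer:5022';
```

For a VLDB over a slow link, the backup copy in step 2 is usually the dominant cost, which is why database size and network bandwidth drive the decision.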
It's a neat idea that you're proposing, but you need to think through all the consequences for VLDBs, and the potential for further failures, before recommending it to others.
Check out SQLskills online training!
SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005