I read your initial article a couple of times to make sure I understood the chain of events and the timing of the issues. I ran into a similar situation within the last year and wanted to compare notes after the holiday. I'm uncertain whether your event was preventable, but we've determined ours was.
First, though, can you confirm your environment specs? We're most likely not comparing apples to apples, as we're currently running Windows 2k8 and SQL 2k5, with a plan to skip 2k8 and move straight to 2012. In one case far in my past, a simultaneous outage of Exchange and SQL was caused by a single system issue, because the business had been stingy about funding a separate server for each: a mislocated tempdb filled a drive and took the whole system down. More recently, the issue was caused by an unscheduled and uncommunicated security change from the company's separate AD group. In your case, it sounds more like specific OS registry corruption on the Exchange server, and a separate but coincidental total failure of the RAID 5 configuration on the SQL server.
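For anyone wanting to check for the same tempdb exposure, the file locations are easy to confirm; this is just a sketch against the standard catalog views (SQL 2005 and later, so it should apply to your environment too):

```sql
-- Show where tempdb's data and log files actually live, and their current size.
-- size is reported in 8 KB pages; converted to MB here.
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```

If the physical_name paths point at the OS or application drive rather than a dedicated volume, that's the same exposure that bit us.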
Setting the mail server issue aside for a moment: was there any monitoring for drive failure within your SQL RAID configuration? Also, does your SQL instance use the Exchange server as the SMTP destination for any server alerts sent through Database Mail? If so, perhaps it wasn't such a coincidence after all.
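If you want to rule that out quickly, the SMTP server each Database Mail account relays through is recorded in msdb; a rough check (again assuming the standard Database Mail tables on 2005+):

```sql
-- List each Database Mail account and the SMTP server it relays through.
-- If the Exchange box shows up here, the SQL alerts had no path out during the outage.
SELECT a.name       AS account_name,
       s.servername AS smtp_server,
       s.port
FROM msdb.dbo.sysmail_account a
JOIN msdb.dbo.sysmail_server  s
  ON s.account_id = a.account_id;
```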
I look forward to your response. Hopefully an update to the current recovery standards will help minimize recovery times if there's another event.