I was fortunate during DR testing due to how we planned the fail-over. As in most cases, you have the controlled method, and then there is the "be happy you have anything" method. We used Log Shipping for this with a 15-minute schedule for the file copy process. Depending on what happened and when, we expected to lose no more than the last 30 minutes from any given database (factoring in the potential size of the files to copy, network saturation, etc.). The databases related to SSRS were backed up, copied, and restored to the DR server(s) daily. I'm the only DBA here, so I documented every step of the fail-over, to the point that all anyone would need to do is copy and paste the commands if I were gone or otherwise unavailable. All the hardware and software in DR was an exact twin of production as well, so there wouldn't be resource shortages.
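For anyone curious what that copy-and-paste runbook boils down to, the final step of a log shipping fail-over is applying whatever log backups are still in the copy folder and then recovering the secondary. A minimal sketch (the database name and path here are placeholders, not from my actual runbook):

```sql
-- Apply any remaining copied log backup(s), keeping the
-- database in the restoring state until the last one.
RESTORE LOG SalesDB
    FROM DISK = N'\\dr-share\logship\SalesDB_last.trn'
    WITH NORECOVERY;

-- Bring the secondary online as the new production database.
RESTORE DATABASE SalesDB WITH RECOVERY;
```

Once it's recovered, no further log backups from the old primary can be applied, which is why disabling the copy/restore jobs first is part of the documented steps.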
Log Shipping went a long way toward avoiding the problems described in the article, since the database was already there. It also helps that we have only two homegrown databases; the others are behind commercial apps. The two homegrown DBs are meat and potatoes in that there are no memory-optimized tables or other "exotic" objects, just a few triggers nobody ever remembers exist. Log shipping also proved valuable when someone made an oops: I didn't have to restore anything to grab one row someone messed up, as long as they told me before the next log restore.
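That single-row rescue works because a log shipping secondary restored WITH STANDBY is readable between restore jobs. A sketch of what I mean, assuming a linked server back to production and illustrative table/column names:

```sql
-- On the standby (readable) secondary: the bad UPDATE hasn't been
-- restored here yet, so the original row is still intact. Copy it
-- back to production via a linked server before the next scheduled
-- log restore applies the mistake. PRODSRV, SalesDB, and Orders are
-- hypothetical names for illustration.
INSERT INTO PRODSRV.SalesDB.dbo.Orders_Recovered (OrderID, CustomerID, Amount)
SELECT OrderID, CustomerID, Amount
FROM   SalesDB.dbo.Orders
WHERE  OrderID = 12345;
```

The window is only as long as the restore schedule, which is why "tell me in time" matters.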
Then we virtualized everything. Log Shipping gave way to SAN replication, with fail-over controlled via SRM. We tested that for real when we moved data centers. We first moved our DR equipment to the production Equinix facility and prepared it to be the main site, then failed over to it from our Tampa production site. It all worked as planned. One piece of advice I'd give concerning a DR site: don't prefix system names with "DR". Now all the systems, vCenter, etc. have DR at the front of their names, and "Yes, I'm sure that is production" is something I've had to say more than once.