This editorial was originally published on Feb 5, 2007 and is being re-run as Steve is on vacation.
I have to be honest with you. It wasn't me. I didn't give this advice on DR to anyone. It comes from a blog entry posted last year before Tech Ed. I made a note of it at the time, but it wasn't until recently that I got around to actually writing about it.
I'm kind of surprised to see some of this advice being given to people. A few of these items aren't too bad on their own, but would you run any of them against your production database?
"Just run REPAIR_ALLOW_DATA_LOSS and you'll be fine..."
Just rebuild your transaction log using these steps..."
"Just restore your database and carry on..."
"Run CHECKALLOC, then CHECKDB, then CHECKTABLE on all your tables, then..."
"Just flick the power switch on and off a few times on one of the drives..."
Actually, I've done #4, and #3 is something I've had to do before as well. I can appreciate the caution in the article about not finding the root cause, but I've had more problems than I'd like to think about where we couldn't find a root cause in a reasonable time and decided to move on. And we never had the issue again. In the interest of getting the business going, there was a time when I explained everything, guesstimated the data loss, and we decided to just restore, lose the data, and have people re-enter it as quickly as possible.
As for the other items, I don't think I'd run REPAIR_ALLOW_DATA_LOSS without someone from CSS on the phone. And I don't think I've ever even heard of anyone "rebuilding" a transaction log. That sounds like one of those urban myths where someone heard that someone said that they had a way to rebuild a log.
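For context on why #1 is such dangerous advice to hand out casually: the repair option can only run with the database in single-user mode, and it makes the database consistent by deallocating whatever pages it can't fix, which is exactly where the data loss comes from. A minimal sketch of the sequence, assuming a hypothetical database named SuspectDB, and only ever against a restored copy, never the live system:

```sql
-- WARNING: REPAIR_ALLOW_DATA_LOSS can deallocate damaged pages to
-- restore consistency, silently losing the rows on them.
-- SuspectDB is a placeholder name; run this against a copy only.
ALTER DATABASE SuspectDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- First, see how bad the corruption actually is
DBCC CHECKDB ('SuspectDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- The "last resort" repair the quote recommends so breezily
DBCC CHECKDB ('SuspectDB', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE SuspectDB SET MULTI_USER;
```

Even when it succeeds, you've traded corruption for an unknown amount of missing data, which is why having CSS on the phone (and a good backup in hand) matters.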
Flicking the power on your drives? I'm not sure what I'd even say to someone who suggested it.