Had this been an actual emergency….

  • Comments posted to this topic are about the item Had this been an actual emergency….

  • So true. Since I'm at a facility on the eastern shore, we're also prone to hurricanes. We shift all systems over to our backup facility for a week twice a year - before the start of hurricane season and after it ends. Actually running off of your backup systems lets you see how they will respond, and you get to test and refine your procedures.

    Another thing to note is that an orderly transition to your backup facility during a test is far different from one during an emergency. Following our normal rollover procedures takes hours. However, earlier this year we had a hardware failure that took our primary server down, and we were up and running at our backup facility (for just that server) in about 15 minutes.
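
    The post doesn't say which SQL Server feature handles that failover, so purely as an illustrative sketch - assuming a 2008-era database mirroring setup, with "SalesDB" as a placeholder database name rather than anything mentioned in the thread - the difference between an orderly role swap and an emergency failover looks something like this:

        -- Check the current mirroring role and state of each mirrored database
        -- (rows with a NULL mirroring_guid are not mirrored)
        SELECT DB_NAME(database_id) AS database_name,
               mirroring_role_desc,
               mirroring_state_desc
        FROM   sys.database_mirroring
        WHERE  mirroring_guid IS NOT NULL;

        -- Planned, orderly rollover: run on the current principal while both
        -- partners are up; the mirror must be synchronized, so no data is lost.
        -- 'SalesDB' is a hypothetical database name.
        ALTER DATABASE SalesDB SET PARTNER FAILOVER;

        -- Emergency failover: run on the mirror when the principal is down.
        -- This forces the mirror into service immediately and can lose any
        -- transactions that had not yet reached it.
        ALTER DATABASE SalesDB SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;

    The planned path is slower because it waits for synchronization and is usually wrapped in the shop's full rollover checklist; the forced path is what makes a 15-minute recovery of a single server plausible.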

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • When I was on the East Coast, we made contingency plans, but our company could actually survive for a day or two without the systems, so it was harder to convince them to do serious DR testing.

    When I got to Denver, we had a separate contract with a DR facility and we tested once a year. The tests didn't always go well, and I think some of that was our lax attitude. We thought "we'll make a note of that and fix it in the docs," and sometimes we did, sometimes we didn't.

    I'm a little torn on the pressure. On one hand, you could make these drills unannounced, but there would have to be a penalty (maybe cut 0.5% from your raise for the year) or a bonus ($1k if it succeeds in xx hours) for people to take them seriously. On the other hand, the likelihood of a disaster is low, even in areas that could get one.

    New Orleans went a long time without getting a hurricane; it could easily have gone another 5 years. In Denver, we had a once-in-a-century snowstorm about 6 years ago - 5 feet in 3 days - and a number of roofs collapsed. How likely is it that we'll get one again? How much money is it worth for any company to maintain a second site? It can be expensive.

    I think you ought to try to split out your operations across sites if it makes sense and you have a second site. If you don't, can you set up in a remote facility? Denver is a great place, and a few Florida companies put their servers here because we have a low probability of a disaster. Rodney, you're welcome to do the same; I'll buy you a beer if you come out.

    I think walking people through the process is "good enough" for most businesses. If you really can't get by for a day or two without service, spend more and do more prep.

    One last note: I'd schedule one person just to document things - not to do work, but just to note what works and what doesn't.

  • Some years back, I was contracted as an "Interim IT Manager" at a company in Memphis, TN, during the transition from local ownership and local IT processing to regional IT processing under new management (the company that acquired the local one). One of the benefits touted for the centralized approach was that the larger company that now owned the facility would be better prepared to handle any and all contingencies . . . including having a Disaster Recovery Plan.

    After about 3 months of preparing the DR Plan, I got a call in the middle of the night announcing that a "Disaster" had struck and the plan was being implemented (as a drill, of course). I dutifully notified the local staff and the night shift operator. I then explained to the other 2 operators that I wanted them to come in 45 minutes before the end of the shift they were relieving so that we could brief them on the situation at that point and have a smoother transition. (That last bit was not an official part of the DR Plan, but one that I had tried to get put into it . . . without any luck, since I was "just a contractor." 😉)

    The plan involved dismounting all the disc platters from the Atlanta facility and flying the entire crew to a Florida location where there was an "identical" facility. So, after getting to the local office, I called in, and they were just leaving for the airport. A little over 2 hours later, I got a call announcing that they had landed and were en route to the emergency facility. About an hour after that, I got a call regarding which IP address we should be connecting to for any communications with the replacement mainframe. About an hour after that, I got another call . . . stand down, the drill is over; the drill has been aborted.

    The post mortem, late the next day, revealed that the disc platters had been dismounted, as per the instructions in the plan, and, in fact, every instruction in the plan had been diligently and meticulously followed. Unfortunately, there was no instruction in the plan for putting the disc platters on the airplane, so when they got to the Florida facility and it came to the instructions for mounting the discs . . . there weren't any discs to mount.

    That was my first experience with a DR Plan, and I learned an important lesson from it, as expressed by the IT manager to whom I reported at the time:

    Before conducting the Disaster Drill, walk through the process on site with an office designated as your "transportation," and make sure that everything is accounted for.

    Ralph D. Wilson II
    Development DBA

    "Give me 6 hours to chop down a tree and I will spend the first 4 sharpening the ax."
    A. Lincoln
