• Sergiy - It isn't the job of programmers/developers to reinvent the infrastructure wheel: a number of infrastructure topologies already exist to handle datacentre-destruction scenarios, including database mirroring, log shipping and clustering.
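
    Log shipping, for instance, is at heart nothing more than scheduled transaction log backups restored on a standby server. A minimal T-SQL sketch (the database name and file share below are invented for illustration):

    ```sql
    -- Primary server: back up the transaction log on a schedule (e.g. every 15 minutes)
    BACKUP LOG SalesDB
        TO DISK = N'\\dr-share\logship\SalesDB_0915.trn';

    -- Standby server: restore each log backup in sequence, staying ready for the next one
    RESTORE LOG SalesDB
        FROM DISK = N'\\dr-share\logship\SalesDB_0915.trn'
        WITH NORECOVERY;
    ```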

    In assessing any disaster recovery (DR) strategy there are trade-offs to be made: cost, the service level for how quickly you must be able to fail over (recovery time objective), how much data it is tolerable to lose (recovery point objective) and, if there is zero tolerance for data loss, how much of a performance hit you are willing to take by opting for a synchronous data replication topology (be that at SAN level, synchronous mirroring or otherwise). Remember also that a DR plan should only be invoked when the primary site environment has been totally destroyed, so a well-run operational department will probably only ever execute it as part of a DR training exercise.
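
    To make that synchronous trade-off concrete, here is a rough sketch using SQL Server database mirroring (server and database names are invented). In high-safety mode every commit waits for the log to harden on the mirror, which is what buys zero data loss at the cost of transaction latency:

    ```sql
    -- On the mirror server first: point the restored (NORECOVERY) copy at the principal
    ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.corp.local:5022';

    -- On the principal: complete the partnership by pointing at the mirror
    ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror-dr.corp.local:5022';

    -- High-safety (synchronous) mode: no data loss on failover, but commits wait for the mirror
    ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;

    -- High-performance (asynchronous) mode trades possible data loss for lower latency
    -- ALTER DATABASE SalesDB SET PARTNER SAFETY OFF;

    -- Manual failover, e.g. during a DR training exercise (run on the principal)
    -- ALTER DATABASE SalesDB SET PARTNER FAILOVER;
    ```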

    Again, as smart as programmers think they are, the people who design these products are far more familiar with the issues surrounding DR (including the need for standardisation of DR plans across systems) than programmers are, and they see DR in the context of enterprise operational procedures, not as custom processes for individual applications.

    In short - if you went to any well-run DBA team and told them your system needed a different DR process from the hundred or so other systems they support, they'd tell you where to go.