RE: Current best practice wrt very large databases (100GB) + wrt Backup / Restore / Archive ?

  • If you're looking at offsite protection, check out mirroring instead of log shipping. It's much more resilient; there's a rough sketch of the setup below.
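
    For reference, a minimal sketch of that setup, assuming a hypothetical SalesDB, placeholder server names, and port 5022. The mirror copy has to be restored WITH NORECOVERY before you start the session, and the database needs to be in the FULL recovery model:

        -- Run on both servers: create an endpoint for mirroring traffic
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- On the mirror server (after restoring SalesDB WITH NORECOVERY):
        ALTER DATABASE SalesDB SET PARTNER = 'TCP://PrincipalServer.yourdomain.com:5022';

        -- On the principal server:
        ALTER DATABASE SalesDB SET PARTNER = 'TCP://MirrorServer.yourdomain.com:5022';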

    In general, you're doing things right. There are two issues when it comes to backups, RTO and RPO, or Recovery Time Objective and Recovery Point Objective. Taking RPO first, this is basically how much data loss the company can live with. The answer is always ZERO, but that costs a fortune. You're already largely covering this with a full backup and log backups, so except for setting up mirroring or upgrading to SQL Server 2012 and getting synchronous availability groups in place with offsite storage, you can't do much more. RTO is the hard part.
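
    As a point of reference, a minimal sketch of that full-plus-log routine, assuming a hypothetical SalesDB and local backup paths (your database name, paths, and schedule will differ, and the log backup interval is effectively your RPO):

        -- Nightly (or per your schedule) full backup
        BACKUP DATABASE SalesDB
            TO DISK = 'G:\Backups\SalesDB_Full.bak'
            WITH CHECKSUM, INIT;

        -- Log backups on a short interval, e.g. every 15 minutes
        BACKUP LOG SalesDB
            TO DISK = 'G:\Backups\SalesDB_Log_201207011500.trn'
            WITH CHECKSUM, INIT;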

    For an entire server going away, you have two options. First, the one you're already exploring: a second server, somewhere else, with your data completely, or nearly completely, up to date and ready to go. Done. Second, recovery to a new server. With the second option, there's no magic bullet. It just takes time. So there are two things you can do to mitigate this. First, practice the restore process as often as you reasonably can so that you know how to get it done when the time comes. Second, prioritize the restores so you get the most important stuff online first and the rest second. Additional measures like the differential backups you already have can help, but they're going to add overhead as well. You just have to deal with it.
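
    When you practice those restores, the sequence you're rehearsing is roughly this (again a sketch against the hypothetical SalesDB and placeholder paths; skip the differential step if you don't take them):

        -- Restore the most recent full backup, leaving the database ready to accept more restores
        RESTORE DATABASE SalesDB
            FROM DISK = 'G:\Backups\SalesDB_Full.bak'
            WITH NORECOVERY, REPLACE;

        -- Apply the most recent differential, if any
        RESTORE DATABASE SalesDB
            FROM DISK = 'G:\Backups\SalesDB_Diff.bak'
            WITH NORECOVERY;

        -- Apply every log backup since that differential, in order
        RESTORE LOG SalesDB
            FROM DISK = 'G:\Backups\SalesDB_Log_201207011500.trn'
            WITH NORECOVERY;

        -- Bring the database online once the last log is applied
        RESTORE DATABASE SalesDB WITH RECOVERY;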

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning