Backing up a production SQL Database

  • Our third-party backup tool stopped working, and I found out more or less by accident: weekly DB maintenance caused the log file to autogrow to 270 GB.

    So I am going to start doing backups differently from the inherited setup 🙂

    I need to keep backups accessible for four days: a daily full backup, differential backups every four hours, and T-log backups every fifteen minutes. The full backup takes roughly 20 minutes including verify.

    What I am going to do is the following:

    1. Create five backup devices (Device0.bak through Device4.bak)

    2. Create full backup maintenance plan

    2.a. first step deletes Device4.bak, then renames Device3->4, 2->3, 1->2, 0->1

    2.b. second step performs a full backup of the database to Device0

    2.c. enable a schedule for 8 PM daily

    3. Create differential backup maintenance plan with one step

    3.a. perform a differential backup into Device0

    3.b. enable scheduling every four hours, from 12 AM through 6 PM

    4. Create T-log backup maintenance plan with one step

    4.a. perform a T-log backup into Device0

    4.b. enable scheduling every fifteen minutes, between 8:30 PM and 7:45 PM (to allow the 20-minute full backup to finish)
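
    In T-SQL terms, steps 1, 2.b, 3.a and 4.a would boil down to roughly the following (the database name and file path are placeholders, not our real ones):

      -- Step 1: register the logical backup devices (repeat for Device1..Device4)
      EXEC sp_addumpdevice 'disk', 'Device0', N'D:\Backup\Device0.bak';

      -- Step 2.b: full backup into Device0, overwriting whatever is in the file
      BACKUP DATABASE MyProdDB TO Device0 WITH INIT, CHECKSUM;
      RESTORE VERIFYONLY FROM Device0;

      -- Steps 3.a / 4.a: diff and T-log backups appended to the same device
      BACKUP DATABASE MyProdDB TO Device0 WITH DIFFERENTIAL, NOINIT, CHECKSUM;
      BACKUP LOG MyProdDB TO Device0 WITH NOINIT, CHECKSUM;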

    Is this a sound plan? Any points to consider?

  • why not just create unique backup file names by using either the date and time of the backup or the current LSN number??

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • I'm with Perry. One of the things that causes problems with DR and restores is complexity: you struggle with it under pressure, or someone else misunderstands it.

    I'd set up backups with unique names, using the date/time in the name so that I can easily find what I need in the file system. I'll still double-check before restoring, but finding those files quickly, and people understanding what they are, are both important.

    I don't like backup devices, personally, and prefer just files, always INIT, always a new file for a new backup.
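
    Something along these lines, for example (the path and database name are only illustrative):

      -- Build a timestamped file name, then back up WITH INIT so each file holds exactly one backup
      DECLARE @stamp char(15) = CONVERT(char(8), GETDATE(), 112) + '_'
                              + REPLACE(CONVERT(char(8), GETDATE(), 108), ':', '');
      DECLARE @file nvarchar(260) = N'D:\Backup\MyProdDB_FULL_' + @stamp + N'.bak';

      BACKUP DATABASE MyProdDB TO DISK = @file WITH INIT, CHECKSUM;
      RESTORE VERIFYONLY FROM DISK = @file;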

  • Hm, I thought keeping all the related backups in a single file would make things simpler.

    If I need to restore a backup from today, I restore from Device0; yesterday's comes from Device1, and so on. Each device points at a single .bak file.

    If I create multiple backup files, can the restore function then piece together which ones to restore to get to a single point in time, or will I need to find and restore the files by hand (SQL script)?

  • rrn 62873 (11/11/2014)


    Hm, I thought keeping all the related backups in a single file would make things simpler.

    Simpler to lose all your backups in one go, yes.

    Back up production DBs to unique files, preferably with the time stamp as part of the file name.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks for the answers. Losing all my backup files in one go was not what I was aiming for. I suppose that has to do with possible corruption in the file rendering all the backups inside it unusable; point taken.

    I will create backups with individual file names instead, put them in a directory, and look for a script to delete backup files older than my retention threshold.
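
    Something like the undocumented xp_delete_file procedure that maintenance plan cleanup tasks call might do the job (the path, extensions and four-day cutoff below are just examples of what I have in mind):

      -- xp_delete_file arguments: 0 = backup files, folder, extension, cutoff date, 1 = include first-level subfolders
      DECLARE @cutoff datetime = DATEADD(DAY, -4, GETDATE());
      EXECUTE master.dbo.xp_delete_file 0, N'D:\Backup', N'bak', @cutoff, 1;
      EXECUTE master.dbo.xp_delete_file 0, N'D:\Backup', N'trn', @cutoff, 1;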

    Thanks again, I am trying to learn.

  • I suppose that has to do with possible corruption in the file rendering all the backups inside it unusable; point taken.

    No, I was thinking more about someone accidentally deleting a file, or moving it instead of copying it. A corrupt file header would do the job too, though.

    Why are you rolling your own solution?

    Maintenance plans work well enough for backups, or there's Ola Hallengren's maintenance package (http://ola.hallengren.com/)

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Maintenance plans can also handle trimming older files. I like Ola's solution for backups; it works well.

  • GilaMonster (11/11/2014)

    Why are you rolling your own solution?

    Maintenance plans work well enough for backups, or there's Ola Hallengren's maintenance package (http://ola.hallengren.com/)

    This task was dumped on my desk because I recently got a SQL Server MCSA for BI purposes (which is actually a separate set of databases). A nightly SQL Agent job on the production OLTP database failed due to a huge log file, which I think was the result of another job performing index and statistics maintenance while the T-log backups were failing (they go through DPM, I discovered). I have never done backups in a production environment before, and I like to know what I am doing and what the consequences are. I am not at all dead set on rolling my own solution and am now investigating Ola Hallengren's solution, but yesterday was a matter of just getting the backups running again.

    I did notice a rather nasty problem with using a single .bak file for full, diff and T-log backups: a T-log backup at the same time as a full or diff backup would fail because the file is in use. This is by no means a lasting solution, but temporarily we can live with a single T-log backup being skipped in favour of a diff or full backup.

    My current idea is to use Ola's solution for backups via maintenance plans. But would I not be required to create three maintenance plans (one each for full, diff and T-log backups) and three schedules?

    I am very grateful for the replies.

  • Well, three jobs. You don't need to use maint plans for Ola's stuff.

    Why is that a problem? Every single prod SQL instance I set up maintenance on gets 4 jobs automatically: full backup, log backup, index maintenance, consistency checks.
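
    Each backup job step is just a call to the DatabaseBackup procedure with a different @BackupType, something like this (the directory and retention values are examples only; see the documentation on Ola's site for the full parameter list):

      -- Full backup job step; use @BackupType = 'DIFF' and 'LOG' for the other two jobs
      EXECUTE dbo.DatabaseBackup
          @Databases   = 'USER_DATABASES',
          @Directory   = N'D:\Backup',
          @BackupType  = 'FULL',
          @Verify      = 'Y',
          @CheckSum    = 'Y',
          @CleanupTime = 96;   -- hours to keep old backup files (four days)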

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (11/12/2014)


    Every single prod SQL instance I set up maintenance on gets 4 jobs automatically: full backup, log backup, index maintenance, consistency checks.

    I would think (or rather hope) that this scenario is a common one. Those are the common maintenance tasks that have to be done on every SQL Server I've ever seen.

    rrn, I think you should take the advice: skip the maintenance plans and go with jobs that run procedures. Ola's solution is already well-tested.

  • I also use Ola's solution as the basis for backups, though with a few slight tweaks for a vendor app environment. I also very much support no more than one backup per file, with date/timestamps in the filenames. Note that I personally choose to take differential backups of many databases, and I use a '.dif' extension so I and my automatic restore procedure can easily tell by filename what's what.*

    Note that the #1 thing you should do is test your restores!

    Also test PITR (point-in-time restore) to make sure that it works, and that you're comfortable with it, before you get a high-stress issue where you need to do it for real.

    *Why do I care about telling by filename when msdb keeps track? Because if the server catches on fire, you can't get to msdb, but you can either get to your backup files or you're going to need to get to your resume.
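
    A point-in-time restore from separate files runs roughly along these lines (the file names, database name and target time are made-up examples; repeat the log restore for every log backup up to the time you want):

      RESTORE DATABASE MyProdDB
          FROM DISK = N'D:\Backup\MyProdDB_FULL_20141111_200000.bak'
          WITH NORECOVERY, REPLACE;

      RESTORE DATABASE MyProdDB
          FROM DISK = N'D:\Backup\MyProdDB_DIFF_20141112_120000.dif'
          WITH NORECOVERY;

      RESTORE LOG MyProdDB
          FROM DISK = N'D:\Backup\MyProdDB_LOG_20141112_121500.trn'
          WITH STOPAT = '2014-11-12T12:07:00', RECOVERY;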

  • I don't think I've seen a link to Ola's maintenance solution on this thread, so https://ola.hallengren.com/.

    His processes are first-rate and regularly maintained to cope with new features and problems.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • We have Ola's solution here as well: http://www.sqlservercentral.com/scripts/Backup/62380/

    You might also look at the MidnightDBA reindex solution, which was built to address some situations that proved problematic in other ways: http://www.sqlservercentral.com/scripts/Backup/62380/

    I've also used the SQLFool reindexing before: http://sqlfool.com/2011/06/index-defrag-script-v4-1/

  • Thank you for all the replies, tips and hints. They are very useful.

    Using Ola's solution, I have now set up a solution where the production database gets maintained (DBCC and index rebuild/defrag) every Saturday, differential backups run every four hours, and transaction log backups run every fifteen minutes. Backups are written to separate files (which is the default in Ola's scripts).

    I have tested restores (both latest and PITR), and the backup files are mirrored across two physical sites as well as written to tape that is transported to a different location altogether.

    I hope I am good to go on the backups now. Again, thank you to everyone who helped.
