What can I do to speed up my production daily full-backup?

  • Current backup plan is a full backup every night, starting at 2am: system databases first, then the analytics database, then the full production user database. After the job completes, log backups run every 5 minutes. I want to decrease both the size and the duration of the full user db backup. In the past 2 months, I've cleaned up the user db and dropped it from roughly 500 GB to 430 GB. The size of the full backup for the user db has dropped from 112 GB to 92 GB. The data transfer rate for the system databases averages 74 MB/sec, analytics averages 63 MB/sec, and the main production database averages 18 MB/sec. I thought deleting that much data would help decrease how long the full backup takes. Instead, it's gotten a solid hour longer over the past 2 months. What can I do to increase the data transfer rate of my main production db? Or is there something else I should focus on to decrease how long my full backup takes?
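    To see exactly where the time is going, the nightly throughput can be pulled straight from msdb. This is a sketch using standard `msdb.dbo.backupset` columns; the 30-day window is an arbitrary choice:

```sql
-- Per-database full-backup duration and throughput for the last 30 days.
SELECT  database_name,
        backup_start_date,
        DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_sec,
        backup_size / 1048576.0                                 AS backup_mb,
        backup_size / 1048576.0
          / NULLIF(DATEDIFF(SECOND, backup_start_date, backup_finish_date), 0)
                                                                AS mb_per_sec
FROM    msdb.dbo.backupset
WHERE   type = 'D'   -- 'D' = full database backup
  AND   backup_start_date >= DATEADD(DAY, -30, GETDATE())
ORDER BY backup_start_date DESC;
```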

  • You can increase the MAXTRANSFERSIZE and BUFFERCOUNT parameters.

    But test before applying to production. I set maxtransfersize = 4194304 and buffercount = 400, and it reduced backup time by 30-40%.

    You can also consider changing the daily backup to a differential, leaving the full backup for the weekend only. And if your database is partitioned with dedicated filegroups, also try a partial backup.
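    A minimal sketch of both suggestions; the database name and paths are hypothetical, and the buffer values should be tested first since BUFFERCOUNT x MAXTRANSFERSIZE is memory taken from the server (400 x 4 MB = 1.6 GB here):

```sql
-- Larger transfer size and more buffers (test before production).
BACKUP DATABASE [Production]
TO DISK = N'X:\Backups\Production_full.bak'
WITH MAXTRANSFERSIZE = 4194304,   -- 4 MB per transfer
     BUFFERCOUNT     = 400,       -- 400 x 4 MB = ~1.6 GB of buffer memory
     STATS = 10;

-- Weeknight alternative: a differential captures only the extents
-- changed since the last full backup.
BACKUP DATABASE [Production]
TO DISK = N'X:\Backups\Production_diff.bak'
WITH DIFFERENTIAL, STATS = 10;
```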

  • Is this to a local drive?

    If it is, I would check with my hardware person to see what can be done to increase the transfer rate. If it's not, can it be moved back to a local disk and then copied off to a secondary location?

    Is instant file initialization turned on if this is a local disk? That may not help much.

    Is data compression on?

    How many backup files are created? One? Try adding more backup files. Start with 4 and see if that affects the speed.

    Have you tried changing the buffercount property of the backup? Caution here: this can cause more issues than it solves.

    http://blogs.msdn.com/b/sqlserverfaq/archive/2010/05/06/incorrect-buffercount-data-transfer-option-can-lead-to-oom-condition.aspx

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • You can back up your larger databases to multiple files. That WILL speed up the backup. I have used that in the past.
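    Striping one backup across several files looks like this (paths are hypothetical); SQL Server writes to all of them in parallel, which often helps when a single target file is the bottleneck:

```sql
-- One backup striped across four files, ideally on separate spindles.
BACKUP DATABASE [Production]
TO  DISK = N'X:\Backups\Production_1of4.bak',
    DISK = N'X:\Backups\Production_2of4.bak',
    DISK = N'Y:\Backups\Production_3of4.bak',
    DISK = N'Y:\Backups\Production_4of4.bak'
WITH STATS = 10;
```

    Note that a restore must then reference all four stripes in the FROM clause.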

  • What kind of disks are you backing up to? I've seen SAN drives run backups of the same database faster than local drives. When we switched to a new SAN, there was a 59% decrease in backup time. The speed of the disks can make a very considerable difference.

  • Have you checked the box to compress the backup? There is a lot of air in many databases.
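    The same option in T-SQL, plus a query to see what ratio recent compressed backups actually achieved (database name and path are hypothetical):

```sql
-- Compressed full backup.
BACKUP DATABASE [Production]
TO DISK = N'X:\Backups\Production_full.bak'
WITH COMPRESSION, STATS = 10;

-- Ratio achieved on the last few compressed backups:
SELECT TOP (10)
       database_name,
       backup_start_date,
       backup_size / compressed_backup_size AS compression_ratio
FROM   msdb.dbo.backupset
WHERE  compressed_backup_size > 0
ORDER BY backup_start_date DESC;
```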

  • johnwalker10 (10/28/2015)


    Have you checked the box to compress the backup? There is lots of air in many databases.

    Empty space doesn't get backed up, only the allocation of empty space. For example, your backup wouldn't contain 1,400 empty pages, but it would contain an entry saying that there are 1,400 empty pages at the end of the file.
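    A quick way to see how much of each data file is allocated versus actually used (the unused portion is recorded in the backup, not copied) is a sketch like this, run in the database in question:

```sql
-- File size vs. space actually used, in MB (size is in 8 KB pages).
SELECT  name                                       AS file_name,
        size * 8 / 1024                            AS file_size_mb,
        FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS space_used_mb
FROM    sys.database_files
WHERE   type_desc = 'ROWS';
```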

  • When you do your laundry, do you wash the clean clothes or just the dirty ones?

    The same holds true in databases. Typically, there are a couple of tables that qualify as "WORM" tables (audit tables, invoice tables where the invoices have been closed, etc) where after the rows are written they're either guaranteed to never be updated as soon as they're written (audit tables) or shortly afterwards (invoice tables).

    That's "clean laundry" that should live in a separate "online archive" database where the data is divvied up by month (for example), either using Partitioned Views or Partitioned Tables, with synonyms pointing to them from another database (typically, no FKs required, either) so they continue to be "online". I've got a major table that contains call recordings divvied up like that (almost 400 GB) and I have one filegroup and file per month. Once a month goes by, I do a final backup on the month and set the filegroup to READONLY. As a bit of a sidebar, the auditors LOVE that.

    My backups dropped from 6-10 hours to 6-10 minutes because I'm only backing up the current month of data in that large audit table anymore. It also saves me a huge amount of time on things like defrag and index maintenance not to mention making "Piece-Meal" restores both possible and fairly easy to do.

    Done right, it's not a trivial task. It takes careful planning to avoid storing a whole lot of blank disk space in read-only files. You DO have to make a plan and it DOES have a lot of caveats, but it's worth every minute of research and planning.
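    A minimal sketch of the month-end step described above; the database, filegroup, and path names are all hypothetical:

```sql
-- Month-end: final backup of last month's filegroup, then seal it
-- so it never needs to appear in a nightly backup again.
BACKUP DATABASE [Archive]
FILEGROUP = N'FG_2015_09'
TO DISK = N'X:\Backups\Archive_FG_2015_09_final.bak'
WITH STATS = 10;

ALTER DATABASE [Archive]
MODIFY FILEGROUP [FG_2015_09] READ_ONLY;

-- Nightly: a partial backup covers PRIMARY plus all read-write
-- filegroups, i.e. only the current month's data.
BACKUP DATABASE [Archive]
READ_WRITE_FILEGROUPS
TO DISK = N'X:\Backups\Archive_partial.bak'
WITH STATS = 10;
```

    This is also what makes the piecemeal restores mentioned above practical: the read-only filegroups restore from their one final backup, and only the read-write portion needs the nightly chain.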

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Some audit tables are WORN (Write Once - Read Never).

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (10/29/2015)


    Some audit tables are WORN (Write Once - Read Never).

    Heh... appropriate, since we just got done talking about clean laundry. 😀

    --Jeff Moden



  • csheldon, what Jeff's talking about is some really cool stuff. He's done a SQL Saturday presentation on it before and it was great. I don't know where you are in the world, but if it's on the schedule for one near you, it would be worth attending. Be forewarned that it isn't simple or short, but you sure will learn a lot.

  • Jayanth_Kurup (10/29/2015)


    http://blogs.msdn.com/b/sqlcat/archive/2008/03/02/backup-more-than-1gb-per-second-using-sql2008-backup-compression.aspx

    Interesting read

    What's "interesting" is that it would appear (I may have misread it) that they're putting the backups on the same system as the database, and that would not be a good thing, I don't care how fast it is.

    --Jeff Moden



  • Although they haven't mentioned it, I think the backups are happening to a different SAN than the one hosting the database files (providing one level of redundancy). I would think they move the backup files offsite using SAN replication etc. after the backup completes successfully (providing a second level of redundancy).

    Jayanth Kurup

  • Jeff Moden (10/29/2015)


    I've got a major table that contains call recordings divvied up like that (almost 400 GB) and I have one filegroup and file per month. Once a month goes by, I do a final backup on the month and set the filegroup to READONLY. As a bit of a sidebar, the auditors LOVE that.

    ...

    Done right, it's not a trivial task. It takes careful planning to avoid storing a whole lot of blank disk space in read-only files. You DO have to make a plan and it DOES have a lot of caveats, but it's worth every minute of research and planning.

    Any documentation on that Jeff? We don't really (I think?!!) have big enough tables to warrant that approach, but I would certainly appreciate seeing a "worked example" of some sort 🙂
