SQL backup

  • patrickmcginnis59 10839 (10/20/2016)


    Brandie Tarvin (10/20/2016)


    Jeff Moden (10/19/2016)


    Steve Jones - SSC Editor (10/19/2016)


    Jeff Moden (10/18/2016)


    Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 terabytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    Wow, that's impressive.

    I agree. I'm totally amazed myself. I don't know the particulars of how they did it, but it has to do with some "Nimble" (brand name) hardware they bought. They're using it both for DR and backups. My hat's off to the folks in NetOps where I work. All I did was tweak the buffer settings on my backups (sketched after this post) and they did the rest. This used to take close to 6 hours... more if the system was under load.

    I'm betting they're using de-dup technology for your backups. That stuff changed the way we do our backups and also made them much faster.

    From what I've read, compressed backups don't play well with deduping technology.

    https://www.brentozar.com/archive/2009/11/why-dedupe-is-a-bad-idea-for-sql-server-backups/

    Thanks, Patrick. I'm thinking I have some alternative testing to do.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
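
    As a concrete illustration of the "buffer settings" tweak mentioned above, here is a minimal sketch of a compressed backup to a NAS share. The database name, UNC path, and specific values are hypothetical placeholders; BUFFERCOUNT and MAXTRANSFERSIZE need tuning against your own hardware.

    -- Minimal sketch: compressed backup straight to a NAS share with tuned buffers.
    -- Database name, path, and the specific values are hypothetical placeholders.
    BACKUP DATABASE BigDb
    TO DISK = N'\\nas01\sqlbackup\BigDb_Full.bak'
    WITH COMPRESSION,
         BUFFERCOUNT = 64,          -- more I/O buffers than the default
         MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer, the maximum allowed
         CHECKSUM,                  -- validate page checksums while reading
         STATS = 10;                -- report progress every 10 percent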

  • Or do a smaller backup of a known size using the same infrastructure, as the time taken should be fairly linear (twice as big, twice as long).

    Time of day can be a big issue on some infrastructures when running large backups overnight (or, conversely, with large volumes during the day). One way to gauge your current throughput from backup history is sketched below.

    Tim

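    A minimal sketch of that estimation against the standard msdb backup history tables; the 200 MB/sec plugged into the extrapolation at the end is a made-up observed rate:

    -- Throughput of recent full backups, from msdb history.
    SELECT TOP (10)
           bs.database_name,
           bs.backup_size / 1048576.0 AS size_mb,
           DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS seconds,
           (bs.backup_size / 1048576.0)
             / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0) AS mb_per_sec
    FROM   msdb.dbo.backupset AS bs
    WHERE  bs.type = 'D'   -- full database backups only
    ORDER BY bs.backup_finish_date DESC;

    -- Linear extrapolation: minutes to back up 600 GB at an observed rate.
    SELECT (600.0 * 1024) / 200 / 60 AS est_minutes;   -- 200 = hypothetical MB/sec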

  • Going back to Krasavita's original question: how long will it take to back up 600 GB?

    As seen in the replies, the answer depends on many things. To a large extent, the more money you can throw at the solution, the faster it will be, provided you have people who know how to spend that money wisely (rather than people who just spend money...).

    One thing I did not see explicitly mentioned is the number of backup files. This can also reduce run time because SQL Server backs up a portion of the database to each file in parallel. It is useful if your storage subsystem can handle the IO load, but it can actually slow things down if you saturate your storage system with write requests (see the striped-backup sketch after this post).

    The starting point for your solution should be the duration and performance impact allowed by your backup window. If your backup window allows you to back up 600 GB in 4 hours and the backup completes in 3 hours, then you have no problem. If it completes in 5 hours, then you need to look at how you can improve performance. Likewise, if your backup runs without impacting the performance SLA of your OLTP system, you have no problem; if running a backup effectively stops your OLTP system, you need to look at how you can improve performance.

    Even if you have no problems today, you should experiment to see how big your database can get, or how busy your OLTP system can be, before you start to hit problems. You can then do some capacity planning to predict when this will happen with live data, and get a budget organised to sort out the issues before they become problems.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
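
    As a minimal sketch of the striping Ed describes, with hypothetical paths (keep in mind that a restore needs every one of the stripe files):

    -- Stripe one backup across several files; SQL Server writes them in parallel.
    BACKUP DATABASE BigDb
    TO DISK = N'\\nas01\sqlbackup\BigDb_1of4.bak',
       DISK = N'\\nas01\sqlbackup\BigDb_2of4.bak',
       DISK = N'\\nas02\sqlbackup\BigDb_3of4.bak',
       DISK = N'\\nas02\sqlbackup\BigDb_4of4.bak'
    WITH COMPRESSION, STATS = 10;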

  • EdVassie (10/25/2016)


    One thing I did not see explicitly mentioned is the number of backup files.

    I don't know if this is true of all dedup technology, but the stuff we use at Allstate automatically breaks backups into 4 files if the database is over a certain size in GB.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • EdVassie (10/25/2016)


    One thing I did not see explicitly mentioned is the number of backup files. This can also reduce run time because SQL Server backs up a portion of the database to each file in parallel. It is useful if your storage subsystem can handle the IO load, but it can actually slow things down if you saturate your storage system with write requests.

    I'm glad that you mentioned the possible slowdown. It used to work great when disks were smaller and you had better control over which spindle(s) you were aiming each file at. Those days are long gone (I miss the smaller disk sizes but not their relatively poor performance back then) and I've seen many folks try multi-file backups only to be grossly disappointed by the head thrashing that occurs even on fast SANs.

    --Jeff Moden

  • Brandie Tarvin (10/25/2016)


    EdVassie (10/25/2016)


    One thing I did not see explicitly mentioned is the number of backup files.

    I don't know if this is true of all dedup technology, but the stuff we use at Allstate automatically breaks backups into 4 files if the database is over a certain size in GB.

    You should try it at least once without the breakup. You might be surprised.

    --Jeff Moden

  • Jeff Moden (10/26/2016)


    Brandie Tarvin (10/25/2016)


    EdVassie (10/25/2016)


    One thing I did not see explicitly mentioned is the number of backup files.

    I don't know if this is true of all dedup technology, but the stuff we use at Allstate automatically breaks backups into 4 files if the database is over a certain size in GB.

    You should try it at least once without the breakup. You might be surprised.

    I wish I could. Some other team up in corporate implemented the technology and we didn't have any input or control over it. I don't even have information on the tools they were using to do the dedup. Which makes me very sad as I would have loved to play with it and learn about it.

    Brandie Tarvin, MCITP Database Administrator

  • Local backup of a 750 GB database (using LiteSpeed) takes about 30 minutes.

  • barsuk (10/28/2016)


    Local backup of a 750 GB database (using LiteSpeed) takes about 30 minutes.

    If, by "local backup", you mean putting the backup on the same box as the MDF/LDF files, that's pretty normal. It's also one of the most dangerous things that you can do with backups because if you lose the box, you lose the data and the most recent backups.

    Backups have to go some place besides the same SAN (or whatever) that the data lives on. That's why, despite all the warnings about sometimes much slower backup performance, especially where NAS (Network Attached Storage) is concerned (although we did well), it's still the best thing to do. Once the file is off the box, it's also worth proving it's readable, as sketched below.
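
    A minimal verification sketch, with a hypothetical path; WITH CHECKSUM only works if the backup itself was taken WITH CHECKSUM:

    -- Confirm the off-box copy is readable without actually restoring it.
    RESTORE VERIFYONLY
    FROM DISK = N'\\nas01\sqlbackup\BigDb_Full.bak'
    WITH CHECKSUM;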

    --Jeff Moden

  • In that particular case, after the local backup completes, we copy the backup file to the network share as the next step (a sketch of that pattern follows). Besides that, we also run a network backup...
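
    A minimal sketch of that two-step pattern. Paths are hypothetical, and the copy could just as easily be a SQL Agent CmdExec step or a PowerShell job instead of xp_cmdshell:

    -- Step 1: compressed local backup (fast, but on the same box as the data).
    BACKUP DATABASE BigDb
    TO DISK = N'D:\Backup\BigDb_Full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

    -- Step 2: copy the finished file to the network share.
    -- Requires xp_cmdshell to be enabled on the instance.
    EXEC master.sys.xp_cmdshell
         'robocopy D:\Backup \\nas01\sqlbackup BigDb_Full.bak /NP';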

  • In order to say the backup was "complete", you would also have to include the time it takes to move the file from the local disk to the final resting place for the backup files. How long does that take (just curious)?

    --Jeff Moden

  • The copy part took about 15 minutes earlier today.

    The network backup takes about 22-24 minutes for the 750 GB database using LiteSpeed Backup.

  • barsuk (10/30/2016)


    The copy part took about 15 minutes earlier today.

    The network backup takes about 22-24 minutes for the 750 GB database using LiteSpeed Backup.

    That's damned good!

    --Jeff Moden

  • :-P, Thanks

  • 750 GB database backed up in 30 minutes? Is that 750 GB of compressed backup data on disk? Database size is pretty much irrelevant in any discussion like this, since we don't know how full the database is or whether it holds incompressible binary data. I'm having trouble believing you guys are getting 430 MB/sec sustained disk writes (the arithmetic is sketched below). Maybe you're all working for deep-pocket corporations with large SSD RAIDs.
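
    For the rate claim above, assuming the full 750 GB actually hits the disk (i.e. no compression):

    -- 750 GB in 30 minutes, expressed as sustained MB/sec.
    SELECT 750.0 * 1024 / (30 * 60) AS mb_per_sec;   -- ~426.67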
