SQL backup

  • Do you have any idea how long a SQL backup might take for 600 GB?

    Thank you

  • It really depends on your CPU and I/O subsystem, as well as the load on your system. It also depends on whether you use multiple files and disks.

    Ultimately, I could take a guess, but you are better off just running a backup. This doesn't impact users and can be done online.
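For reference, an online full backup is a one-liner. The database name and path below are placeholders for your environment; COMPRESSION and CHECKSUM are optional but commonly recommended:

```sql
-- Hypothetical database name and destination path; adjust to your environment.
-- STATS = 5 prints a progress message every 5 percent, handy for timing a first run.
BACKUP DATABASE MyBigDb
TO DISK = N'D:\Backups\MyBigDb_Full.bak'
WITH COMPRESSION,   -- usually shrinks the file and shortens the run
     CHECKSUM,      -- verify page checksums while reading
     STATS = 5;
```

The STATS output gives you per-percent timing, so even one run tells you the sustained throughput of your own system.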

  • Krasavita (10/18/2016)


    Do you have any idea how long a SQL backup might take for 600 GB?

    Thank you

    Finger-in-the-air guess: on an 8-16 core VM with 10Gb SAN storage, 3-4 hours max, but as always, it depends.

    😎

    Curious: why are you asking? Do you have any numbers for any backup on the same system? Are you using compression? How much disk space do you have? Are the backups to tape or disk? Are the backups striped across different LUNs?
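On the striping question: a backup can be striped across several files on different LUNs, which often helps when a single target is the bottleneck. A sketch, with placeholder paths:

```sql
-- Striped backup: SQL Server writes to all files in parallel.
-- All stripes are required for a restore, so keep them together.
BACKUP DATABASE MyBigDb
TO DISK = N'E:\Backups\MyBigDb_1.bak',
   DISK = N'F:\Backups\MyBigDb_2.bak',
   DISK = N'G:\Backups\MyBigDb_3.bak'
WITH COMPRESSION, STATS = 10;
```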

  • On most of my customers' host boxes I'm used to seeing 2 GB to 4 GB per minute.

    There are always tricks such as Differential backups or log backups if one has a short backup window.
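The differential and log backups mentioned above look like this; database name and paths are placeholders, and log backups require the FULL or BULK_LOGGED recovery model:

```sql
-- Differential: only extents changed since the last full backup.
BACKUP DATABASE MyBigDb
TO DISK = N'D:\Backups\MyBigDb_Diff.bak'
WITH DIFFERENTIAL, COMPRESSION, STATS = 10;

-- Transaction log backup.
BACKUP LOG MyBigDb
TO DISK = N'D:\Backups\MyBigDb_Log.trn'
WITH COMPRESSION;
```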

  • Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 TeraBytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (10/18/2016)


    Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 TeraBytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    Redundant controllers / SSD array? We get around 0.5 TeraBytes per hour and I'm sure we're controller-limited.

    Me: "You've got data/log/tempdb on one drive array with one controller - you know best practice is to have at least three separate storage units with their own controller, right? Here's some best practice articles..."

    Them: "Yes, we know, but we're using an array of superfast SSDs. Splitting into two or three with a controller each won't make any difference; if we want it to work faster we'll simply add more SSDs to the array."

    Me: "You're putting everything through one controller. From time to time it's receiving an instruction before the last instruction has completed and your iostalls are consequently very high. Think about logging data changes"

    Them: "Compared to what? Prove it"

    I've been through this exercise and the improvement was dramatic (for very little outlay too), but it was with an array of rusty drives, not SSDs. If anyone knows of published comparative tables of iostalls or case studies of moving from single-array storage to best-practice storage, then please share.
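For anyone wanting to measure the iostalls discussed above, SQL Server tracks cumulative stall time per database file; something along these lines shows where waits pile up (standard DMVs, no vendor tooling assumed):

```sql
-- Cumulative I/O stall time per database file since instance start.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON  mf.database_id = vfs.database_id
  AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_write_ms DESC;
```

Snapshot the numbers before and after a change and you have the comparison the "prove it" crowd is asking for.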

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • It also depends on if you're doing a FULL, DIFFERENTIAL, or TRANSACTION LOG backup. Diffs will always be faster than FULLs unless things change really fast and really often, which could make the data backed up in a Diff the same as data backed up by a FULL.

    I guess the real question is, what are you trying to do with this information? Is there a problem you're trying to solve or a plan you're trying to make?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Jeff Moden (10/18/2016)


    Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 TeraBytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    Wow, that's impressive.

  • Krasavita (10/18/2016)


    Do you have any idea how long it might take to sql backup for 600 gb.

    Thank you

    As stated, it depends on your hardware and configuration.

    As a rough guide, I'm backing up a 50 GB (data and log file) database which produces a 20 GB uncompressed or 2 GB compressed backup; this completes in 4.5 minutes.

    I'm running this on a VMware virtual machine with bog standard virtual hard disks.

    Worst case, 1 hour tops; best scenario could be 20-30 mins if you're lightning fast like Jeff "Lightning bolt" Moden 😀
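The rule-of-thumb arithmetic behind these guesses is just size divided by throughput. Using the 2-4 GB/min figure quoted earlier in the thread (all numbers are rough assumptions, not a promise about any particular system):

```python
def estimate_backup_minutes(size_gb: float, throughput_gb_per_min: float) -> float:
    """Rough estimate: backup time = data size / sustained throughput."""
    return size_gb / throughput_gb_per_min

# 600 GB at the 2-4 GB/min range mentioned above:
print(estimate_backup_minutes(600, 4))  # fast end: 150 minutes
print(estimate_backup_minutes(600, 2))  # slow end: 300 minutes
```

Compression, striping, and faster storage move the throughput number; the arithmetic stays the same.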

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" πŸ˜‰

  • We need facts to be able to give you good advice. Things such as:

    Are you backing up to disk or tape?

    Are you backing up on the same server?

    If you aren't backing up on the same server, is the backup going through a firewall or across a network?

    What kind of hardware are you using?

    Honestly, the best way to find out how long it will take is to do the backup.

    -SQLBill

  • Steve Jones - SSC Editor (10/19/2016)


    Jeff Moden (10/18/2016)


    Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 TeraBytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    Wow, that's impressive.

    I agree. I'm totally amazed myself. I don't know the particulars of how they did it but it has to do with some "Nimble" (brand name) hardware they bought. They're using it both for DR and backups. My hat's off to the folks in NetOps where I work. All I did was tweak the buffer settings on my backups and they did the rest. This used to take close to 6 hours... more if the system was under load.
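The buffer tweaks referred to above are presumably the BUFFERCOUNT and MAXTRANSFERSIZE backup options; the right values are workload-specific, so the numbers below are only illustrative:

```sql
-- Illustrative values only; benchmark on your own hardware.
BACKUP DATABASE MyBigDb
TO DISK = N'\\nas\Backups\MyBigDb_Full.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,            -- number of I/O buffers used by the backup
     MAXTRANSFERSIZE = 4194304,   -- 4 MB per transfer unit (the maximum)
     STATS = 10;
```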

    --Jeff Moden



  • Jeff Moden (10/19/2016)


    Steve Jones - SSC Editor (10/19/2016)


    Jeff Moden (10/18/2016)


    Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 TeraBytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    Wow, that's impressive.

    I agree. I'm totally amazed myself. I don't know the particulars of how they did it but it has to do with some "Nimble" (brand name) hardware they bought. They're using it both for DR and backups. My hat's off to the folks in NetOps where I work. All I did was tweak the buffers settings on my backups and they did the rest. This used to take close to 6 hours... more if the system was under load.

    I'm betting they have de-dup technology they're using for your backups. That stuff changed the way we do our backups and also made them much faster.

    Brandie Tarvin, MCITP Database Administrator

  • Brandie Tarvin (10/20/2016)


    Jeff Moden (10/19/2016)


    Steve Jones - SSC Editor (10/19/2016)


    Jeff Moden (10/18/2016)


    Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 TeraBytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    Wow, that's impressive.

    I agree. I'm totally amazed myself. I don't know the particulars of how they did it but it has to do with some "Nimble" (brand name) hardware they bought. They're using it both for DR and backups. My hat's off to the folks in NetOps where I work. All I did was tweak the buffers settings on my backups and they did the rest. This used to take close to 6 hours... more if the system was under load.

    I'm betting they have de-dup technology they're using for your backups. That stuff changed the way we do our backups and also made them much faster.

    That, indeed, was one of the things that they mentioned. I'm not sure how that works because the file sizes on the backup drives haven't decreased perceptibly and it wouldn't help much for transmission times, but it does appear to work just fine and I've verified that fact. In fact, I verify it every night by doing a full restore of my two fairly large "money maker" databases right after backups.

    The technology behind all of this is just amazing.

    --Jeff Moden



  • Backing up 2.3 Terabytes in 64 minutes

    is 2.15 TB/hr

    is 36.8 GB/min

    is 628 MB/sec

    A very good SSD can sustain sequential writes at 200 MB/sec, so maybe with a striped backup to several SSDs you could sustain 628 MB/sec, but I'd like to see it.

    It would be much more believable if compression brought the file size down to 1/5, or 460 GB, since the CPUs likely wouldn't be a bottleneck.
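The arithmetic above checks out; spelled out with 1024-based units (which the figures quoted appear to use):

```python
size_gb = 2.3 * 1024      # 2.3 TB expressed in GB
minutes = 64              # 1 hour 4 minutes

gb_per_min = size_gb / minutes        # throughput per minute
mb_per_sec = gb_per_min * 1024 / 60   # throughput per second

print(round(gb_per_min, 1))  # -> 36.8
print(round(mb_per_sec))     # -> 628
```

And note that with compression the backup writes the compressed stream, so the write target only needs to sustain a fraction of that rate.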

  • Brandie Tarvin (10/20/2016)


    Jeff Moden (10/19/2016)


    Steve Jones - SSC Editor (10/19/2016)


    Jeff Moden (10/18/2016)


    Using compressed backups and having a really good team that put together some remarkable hardware, I'm backing up 2.3 TeraBytes to Network Attached Storage (NAS) in 1 hour and 4 minutes. YMMV.

    Wow, that's impressive.

    I agree. I'm totally amazed myself. I don't know the particulars of how they did it but it has to do with some "Nimble" (brand name) hardware they bought. They're using it both for DR and backups. My hat's off to the folks in NetOps where I work. All I did was tweak the buffers settings on my backups and they did the rest. This used to take close to 6 hours... more if the system was under load.

    I'm betting they have de-dup technology they're using for your backups. That stuff changed the way we do our backups and also made them much faster.

    From what I've read, compressed backups don't play well with deduping technology.

    https://www.brentozar.com/archive/2009/11/why-dedupe-is-a-bad-idea-for-sql-server-backups/
