Backup questions for 612GB database!

  • We have a database that is over 612GB and growing. Our backup is done to tape using Backup Exec 2012. This process starts at 10PM and runs until 6AM every day. We do an incremental backup every 15 minutes so that we can restore to any 15-minute window if necessary. The nightly backup is a full backup with transaction logs so that we release the space back to the log file.

    My question is this: what is the best way to perform this backup so that we can complete it in the least amount of time possible and yet make sure to keep the transaction logs clean as well? We were thinking of setting up a load-balancing type of configuration, but will this help with our backup jobs? One option is to convert the server into a virtual server (VMware) where more spindles are working with the database.

    Any suggestions on how I can get the backups done more efficiently? I should mention that during the backup process it's almost impossible to use the application that writes to the database, because we get timeout errors.

  • Try backups to disk, later copied to tape if/as necessary. 612GB isn't huge; backups of a DB that size shouldn't take hours.

    If you're having timeouts, examine the latencies and throughput of the data drive. Could be you're swamping it with the reads that the backup does. May need more of an IO subsystem than you have.
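
    If you'd rather check those latencies from inside SQL Server than from PerfMon, a query along these lines against sys.dm_io_virtual_file_stats (figures are cumulative since the last restart) should show per-file read and write latency:

    -- Rough per-file IO latency check (cumulative since SQL Server last started)
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id;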

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Converting your server to a virtual machine... is the best idea.

  • I'm using snapshots with VMware and it's working OK...

  • We started out on a VM but the performance was horrible. I think we have a better environment now, but I am still a bit scared to move it back. From a backup point of view I can see a VM being a better solution, but what about database access and latency? The consulting firm is telling us that we can run with minimal memory and increased I/O, but I have been burned by this before and am a bit afraid to go back.

  • A database on a virtual server requires good infrastructure, for example NetApp, etc.

  • I agree with Gail: you should be backing up to a separate disk or RAID group/set of spindles with SQL backup compression disabled, then use Backup Exec 2012 to do data deduplication and store it on tape. I'm assuming the tape backup is the performance bottleneck.

    You should first test the theoretical max backup performance by writing to the NUL device.

    Adjust the BUFFERCOUNT, MAXTRANSFERSIZE (the default is 1MB) and BLOCKSIZE.

    -- Try different MAXTRANSFERSIZE

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 524288

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 1048576

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 2097152

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 4194304

    --Try different BUFFERCOUNT with best MAXTRANSFERSIZE

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 10, MAXTRANSFERSIZE = 2097152

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 40, MAXTRANSFERSIZE = 2097152

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152

    --Try different BLOCKSIZE

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152, BLOCKSIZE = 1024

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152, BLOCKSIZE = 16384

    BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152, BLOCKSIZE = 65536

    Once you find the max possible speed, back up to multiple files on a RAID array or SSD, as in the sketch below.
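
    For example, if 2MB transfers and 100 buffers come out best in the NUL tests, the real backup might look like this (the drive letters and four-way split are just placeholders; compression is left off so Backup Exec's deduplication still works):

    -- Striped backup to multiple files, reusing the best settings found above
    BACKUP DATABASE [dbname]
    TO DISK = 'E:\Backup\dbname_1.bak',
       DISK = 'E:\Backup\dbname_2.bak',
       DISK = 'F:\Backup\dbname_3.bak',
       DISK = 'F:\Backup\dbname_4.bak'
    WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152;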

    Good example here:

    http://henkvandervalk.com/how-to-increase-sql-database-full-backup-speed-using-compression-and-solid-state-disks

    I'm backing up a 250GB database in around 12 minutes. Max backup speed was 340MB/sec with RAID 5 disks using the NUL device, and 280MB/sec with 8 backup files.

  • support-555807 (4/30/2014)


    We have a database that is over 612GB and growing. Our backup is done to tape using Backup Exec 2012. This process starts at 10PM and runs until 6AM every day.

    The backup direct to tape will be the bottleneck, since you're constantly waiting for the tape device, which is inherently slower than a hard disk drive. As Gail has advised, back up to disk first, then let Backup Exec pick up the SQL backup files to tape.

    Backup compression, whether native or third-party, will help. There'll be a CPU hit, but you will be writing less data to disk since the data is compressed.

    How big is the final backup file for the database?
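
    If you don't know offhand, msdb keeps a record; a sketch like this should list recent full backup sizes (compressed_backup_size needs SQL Server 2008 or later, and 'dbname' is a placeholder):

    -- Recent full backup sizes for the database, from the msdb history tables
    SELECT TOP (10) database_name,
           backup_finish_date,
           backup_size / 1048576.0 AS backup_size_mb,
           compressed_backup_size / 1048576.0 AS compressed_mb
    FROM msdb.dbo.backupset
    WHERE database_name = 'dbname'
      AND type = 'D'   -- D = full database backup
    ORDER BY backup_finish_date DESC;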

    support-555807 (4/30/2014)


    One option is to convert the server into a virtual server (VMware) where more spindles are working with the database.

    This is not a VMware-specific option. You can get more spindles purely by sweet-talking your storage admin into providing the appropriate disk devices.

    Virtualising a server can carry anywhere from a 10-15% performance overhead. Virtualisation is a consolidation exercise; you can get good performance if it's done correctly, but even then there is the hypervisor overhead.

    support-555807 (4/30/2014)


    We started out on a VM but the performance was horrible.

    My point exactly!

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • M-Hol (4/30/2014)


    Converting your server to a virtual machine... is the best idea.

    I disagree. There's nothing magical about VMs that makes performance problems disappear; all too often it's the opposite.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • support-555807 (4/30/2014)


    We do an incremental backup every 15 minutes so that we can restore to any 15-minute window if necessary. The nightly backup is a full backup with transaction logs so that we release the space back to the log file.

    My question is this: what is the best way to perform this backup so that we can complete it in the least amount of time possible and yet make sure to keep the transaction logs clean as well?

    One thing in your question jumped out at me: the part about keeping the transaction logs clean...

    If you're looking to keep the transaction log size down, I'd start with Gail's suggestion and switch the incrementals to log backups. This will keep your log file size down, allow point-in-time recovery, and TLog backups tend to be quick.
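
    For the 15-minute jobs, that's just a plain log backup; a minimal sketch (the path and file name are placeholders - in practice you'd generate a unique name per run):

    -- Run every 15 minutes; each log backup also truncates the inactive log
    BACKUP LOG [dbname]
    TO DISK = 'E:\Backup\dbname_20140430_1015.trn'
    WITH STATS = 10;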

  • jasona.work (5/1/2014)


    support-555807 (4/30/2014)


    We do an incremental backup every 15 minutes so that we can restore to any 15-minute window if necessary. The nightly backup is a full backup with transaction logs so that we release the space back to the log file.

    My question is this: what is the best way to perform this backup so that we can complete it in the least amount of time possible and yet make sure to keep the transaction logs clean as well?

    One thing in your question jumped out at me: the part about keeping the transaction logs clean...

    If you're looking to keep the transaction log size down, I'd start with Gail's suggestion and switch the incrementals to log backups. This will keep your log file size down, allow point-in-time recovery, and TLog backups tend to be quick.

    I'd agree with this. With a database of this size I'd consider switching to a weekly full backup and daily differential as well.
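
    In case it helps, that pattern is just two commands (names and paths are placeholders):

    -- Weekly full backup (e.g. Saturday night)
    BACKUP DATABASE [dbname] TO DISK = 'E:\Backup\dbname_full.bak' WITH STATS = 10;

    -- Daily differential: only extents changed since the last full backup
    BACKUP DATABASE [dbname] TO DISK = 'E:\Backup\dbname_diff.bak' WITH DIFFERENTIAL, STATS = 10;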

  • Here is a thought you could try if you haven't already: instead of backing up to one file, back up to four files. I have seen backup times for large DBs run quicker when backing up to multiple files instead of one backup file.

  • I'm using snapshots with VMware and it's working OK...

    Wooooooh... vCenter snaps aren't backups. No way. They're great as a fallback before a change, but not backups. If you leave a snap in place, the VMware disk delta will grow and grow, and you may not be able to commit it back to disk (I have seen some over 2 years old; those did work, though :-)). Best practice is to delete the snapshot ASAP after you've confirmed the system is OK.

    Even array-level snaps, like NetApp's, aren't (in my view) sufficient unless you are taking them off the array too (NetApp SnapVault, for example).

    Veeam gives some good VM backup to another location (NAS, ESX server, other datastore etc.) for a reasonable cost (I am not affiliated with Veeam ;-))

    OP - you probably do need to look at, at the very least, staging that backup to disk before tape, ideally without compression, so it gets off and done within your backup window to a system that can feed it to tape / replicate offsite / compress / dedup at its leisure.

  • glynn.seymour (5/2/2014)


    Wooooooh... vCenter snaps aren't backups. No way.

    Permanent snapshots can indeed be problematic.

    +1. They're always misused; the right approach is to snapshot the disks, back them up, then remove the snapshot.

    This is exactly how the old VMware Consolidated Backup and the newer VMware Data Recovery utilities work.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Here is my own input on the matter:

    Since the database is still growing, you will need to come up with a decent backup strategy, e.g.:

    FULL BACKUPS on WEEKENDS (Saturday at 10:00PM) instead of nightly.

    Back up to disk and archive to tape, with a retention policy that doesn't conflict with your RPO, RTO and BC (business continuity) requirements.

    Take daily differentials (that way, you can do a RESTORE with STOPAT if you need point-in-time recovery).

    or

    Take log backups every 15 or 30 minutes (each log backup truncates the inactive portion of the log, so the space inside the log file can be reused).

    Make sure your weekend full backups use compression, and split the backup file into smaller chunks to minimise the impact on disk size and network bandwidth; this is usually faster, though more resource intensive (hence running it on the weekend, when business is slower). Keep in mind that you will need all members of the backup set to do a restore, so test your split backups to confirm they are all consistent and recoverable, using RESTORE HEADERONLY or RESTORE VERIFYONLY.
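
    To make the point-in-time piece concrete, a restore under this strategy would run roughly as below (names, paths, the two-file stripe and the STOPAT time are all placeholders; note that the striped full backup needs every member file):

    -- Check the striped backup set is readable before relying on it
    RESTORE VERIFYONLY
    FROM DISK = 'E:\Backup\dbname_full_1.bak', DISK = 'E:\Backup\dbname_full_2.bak';

    -- Full, then the latest differential, then logs up to the target time
    RESTORE DATABASE [dbname]
    FROM DISK = 'E:\Backup\dbname_full_1.bak', DISK = 'E:\Backup\dbname_full_2.bak'
    WITH NORECOVERY;

    RESTORE DATABASE [dbname] FROM DISK = 'E:\Backup\dbname_diff.bak' WITH NORECOVERY;

    RESTORE LOG [dbname] FROM DISK = 'E:\Backup\dbname_log.trn'
    WITH STOPAT = '2014-04-30 14:45:00', RECOVERY;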

    Hope this helps you plan for not just this database but all other databases in your environment.

    Consider Ola Hallengren's maintenance solution (the backup piece) for help on the script to use.
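
    For reference, a call to the backup piece of that solution looks roughly like this (the parameter values are only examples; check the documentation that ships with the scripts):

    -- Ola Hallengren's DatabaseBackup procedure: compressed, checksummed, four-file full backup
    EXECUTE dbo.DatabaseBackup
        @Databases = 'USER_DATABASES',
        @Directory = 'E:\Backup',
        @BackupType = 'FULL',
        @Compress = 'Y',
        @NumberOfFiles = 4,
        @CheckSum = 'Y';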

    GOOD LUCK....:-):-):-):-)
