
Backup questions for 612GB database!
Posted Wednesday, April 30, 2014 1:46 PM
Grasshopper


Group: General Forum Members
Last Login: Tuesday, June 10, 2014 11:18 AM
Points: 23, Visits: 106
We have a database that is over 612GB and growing. Our backup is done to tape using Backup Exec 2012. This process starts at 10 PM and runs until 6 AM every day. We do an incremental backup every 15 minutes so that we can restore to any 15-minute window if necessary. The nightly backup is a full backup with transaction logs so that we release the space back to the log file.
My question is this: what is the best way to perform this backup so that we can complete it in the least amount of time possible and yet keep the transaction logs clean as well? We were thinking of setting up a load-balancing type of configuration, but will this help with our backup jobs? One option is to convert the server into a virtual server (VMware) where more spindles are working with the database.
Any suggestions on how I can get the backups done more efficiently? I should mention that during the backup process it's almost impossible to use the application that writes to the database because we get timeout errors.
Post #1566591
Posted Wednesday, April 30, 2014 2:03 PM


SSC-Forever


Group: General Forum Members
Last Login: Yesterday @ 1:14 PM
Points: 40,615, Visits: 37,081
Try backups to disk, later copied to tape if/as necessary. 612GB isn't huge; backups of a database that size shouldn't take hours.
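
As a rough sketch (the path is just a placeholder, and COMPRESSION needs Enterprise edition on SQL 2008, or Standard/Enterprise from 2008 R2 onwards):

-- Native full backup to disk; Backup Exec can sweep the .bak file to tape on its own schedule
BACKUP DATABASE [dbname]
TO DISK = 'D:\Backup\dbname_full.bak'
WITH COMPRESSION, STATS = 10;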

If you're having timeouts, examine the latencies and throughput of the data drive. It could be that you're swamping it with the reads the backup does, and you may need more of an IO subsystem than you currently have.
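
If you want numbers rather than a feeling, something like this against sys.dm_io_virtual_file_stats gives the average read/write latency per file (a sketch only; the figures are cumulative since the last SQL Server restart):

-- Average IO latency per database file, worst readers first
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;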



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1566600
Posted Wednesday, April 30, 2014 3:02 PM
Grasshopper


Group: General Forum Members
Last Login: Thursday, May 22, 2014 12:56 PM
Points: 10, Visits: 27
Converting your server to a virtual machine is the best idea.
Post #1566610
Posted Wednesday, April 30, 2014 3:03 PM
Grasshopper


Group: General Forum Members
Last Login: Thursday, May 22, 2014 12:56 PM
Points: 10, Visits: 27
I'm using snapshots with VMware and it's working OK...
Post #1566612
Posted Wednesday, April 30, 2014 3:18 PM
Grasshopper


Group: General Forum Members
Last Login: Tuesday, June 10, 2014 11:18 AM
Points: 23, Visits: 106
We started out on a VM but the performance was horrible. I think we have a better environment now, but I am still a bit scared to move it back. From a backup point of view I can see a VM being a better solution, but what about database access and latency? The consulting firm is telling us that we can run with minimal memory and increased I/O, but I have been burned by this before and am a bit afraid to go back.
Post #1566616
Posted Wednesday, April 30, 2014 3:22 PM
Grasshopper


Group: General Forum Members
Last Login: Thursday, May 22, 2014 12:56 PM
Points: 10, Visits: 27
A database on a virtual server requires good infrastructure, for example NetApp storage or similar.
Post #1566618
Posted Wednesday, April 30, 2014 9:41 PM


Ten Centuries


Group: General Forum Members
Last Login: Thursday, December 18, 2014 11:42 PM
Points: 1,310, Visits: 1,786
I agree with Gail: back up to a separate disk or RAID group/set of spindles, disable SQL backup compression, and then let Backup Exec 2012 deduplicate the data and store it on tape. I'm assuming the tape backup is the performance bottleneck.

You should first test the theoretical maximum backup performance by writing to the NUL device, adjusting BUFFERCOUNT, MAXTRANSFERSIZE (default 1MB) and BLOCKSIZE.

-- Try different MAXTRANSFERSIZE
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 524288
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 1048576
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 2097152
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 25, MAXTRANSFERSIZE = 4194304

--Try different BUFFERCOUNT with best MAXTRANSFERSIZE
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 10, MAXTRANSFERSIZE = 2097152
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 40, MAXTRANSFERSIZE = 2097152
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152

--Try different BLOCKSIZE
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152, BLOCKSIZE = 1024
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152, BLOCKSIZE = 16384
BACKUP DATABASE [dbname] TO DISK = 'NUL' WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152, BLOCKSIZE = 65536


Once you find the maximum possible speed, back up to multiple files on a RAID set or SSD.
There's a good example here:
http://henkvandervalk.com/how-to-increase-sql-database-full-backup-speed-using-compression-and-solid-state-disks
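
Once you have your numbers, the striped backup itself looks something like this (paths are placeholders; spread the files across separate physical drives or LUNs, and leave compression off if you want Backup Exec's deduplication to work well):

-- Stripe the backup across four files on different drives
BACKUP DATABASE [dbname]
TO DISK = 'D:\Backup\dbname_1.bak',
   DISK = 'E:\Backup\dbname_2.bak',
   DISK = 'F:\Backup\dbname_3.bak',
   DISK = 'G:\Backup\dbname_4.bak'
WITH STATS = 10, BUFFERCOUNT = 100, MAXTRANSFERSIZE = 2097152;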

I'm backing up a 250GB database in around 12 minutes. The maximum backup speed was 340MB/sec to the NUL device with RAID 5 disks, and 280MB/sec with 8 backup files.





Post #1566634
Posted Thursday, May 1, 2014 7:22 AM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 6:20 AM
Points: 6,752, Visits: 14,400
support-555807 (4/30/2014)
We have a database that is over 612GB and growing. Our backup is done to tape using Backup Exec 2012. This process starts at 10 PM and runs until 6 AM every day.

Backing up directly to tape will be the bottleneck, since you're constantly waiting for the tape device, which is inherently slower than a hard disk drive. As Gail has advised, back up to disk first, then let Backup Exec pick up the SQL backup files and copy them to tape.
Backup compression, whether native or third-party, will help. There'll be a CPU hit, but you'll be writing less data to disk since the data is compressed.
How big is the final backup file for the database?
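
If you're not sure, msdb keeps the history; something like this shows the last few full backups with native and (on 2008+) compressed sizes:

-- Recent full backup sizes in GB for a given database
SELECT TOP (10)
       bs.database_name,
       bs.backup_finish_date,
       bs.backup_size / 1073741824.0 AS backup_size_gb,
       bs.compressed_backup_size / 1073741824.0 AS compressed_size_gb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D' -- D = full database backup
  AND bs.database_name = 'dbname'
ORDER BY bs.backup_finish_date DESC;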


support-555807 (4/30/2014)
One option is to convert the server into a virtual server (VMware) where more spindles are working with the database.

This is not a VMware-specific option. You can get more spindles purely by sweet-talking your storage admin into providing the appropriate disk devices.
Virtualising a server can carry anywhere from a 10% to 15% performance overhead. Virtualisation is a consolidation exercise; you can get good performance if it's done correctly, but even then there is the hypervisor overhead.


support-555807 (4/30/2014)
We started out on a VM but the performance was horrible.

My point exactly!


-----------------------------------------------------------------------------------------------------------

"Ya can't make an omelette without breaking just a few eggs"
Post #1566703
Posted Thursday, May 1, 2014 7:32 AM


SSC-Forever


Group: General Forum Members
Last Login: Yesterday @ 1:14 PM
Points: 40,615, Visits: 37,081
M-Hol (4/30/2014)
Converting your server to a virtual machine is the best idea.


I disagree. There's nothing magical about VMs that makes performance problems disappear; all too often it's the opposite.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1566706
Posted Thursday, May 1, 2014 7:40 AM


Right there with Babe


Group: General Forum Members
Last Login: 2 days ago @ 12:06 PM
Points: 752, Visits: 5,531
support-555807 (4/30/2014)
We do an incremental backup every 15 minutes so that we can restore to any 15-minute window if necessary. The nightly backup is a full backup with transaction logs so that we release the space back to the log file.
My question is this: what is the best way to perform this backup so that we can complete it in the least amount of time possible and yet keep the transaction logs clean as well?


One thing in your question jumped out at me: the bit about keeping the transaction logs clean...

If you're looking to keep the transaction log size down, I'd start with Gail's suggestion and switch the incrementals to log backups. This will keep your log file size down, allow point-in-time recovery, and transaction log backups tend to be quick. A sketch of what that looks like is below.
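
For example, something like this scheduled every 15 minutes in place of the incrementals (the path is a placeholder, and the database must be in the FULL or BULK_LOGGED recovery model):

-- Transaction log backup; each file covers the log since the previous log backup
BACKUP LOG [dbname]
TO DISK = 'D:\Backup\dbname_log.trn'
WITH STATS = 10;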
Post #1566714