Backup Time for 2 TB DB

  • I am using SQL Server 2008 R2. Before I start the backup, I want to know roughly how long a full backup of a 2 TB database will take (approximate backup time).

    72 GB of RAM, dual CPU (2 processors)

  • Is this with native backup? Where are you backing up to: a backup server or the same server?

    Thanks

  • Why can't you just run a backup and see how long it takes?

  • A lot of factors:

    1) Are you taking the backup to a local drive or over the network?

    2) How fast is the disk, and is it dedicated or not?

    3) Are other background processes running?

    4) How big is the RAM, and what about other resources like CPU, buses, etc.?

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

  • Michael Valentine Jones (1/31/2013)


    Why can't you just run a backup and see how long it takes?

    That is probably the best advice.

    While you are waiting for a backup to finish, you can estimate the time. For example, say your network/storage throughput is 50 MB/s sequential write with one thread and one queue, you have turned on backup compression, and the compression ratio is about 2x.

    So, 2,000,000 MB / 2 (compression ratio) / 50 MB/s = 20,000 seconds, or roughly 5.5 hours.

    You can:

    - parallelize the backup, e.g. over 10 separate NICs, and get it down to e.g. 30 minutes.

    - reorganize static archive tables/partitions into filegroups marked READ_ONLY. Back up those filegroups once and never again; just check regularly that the backup is still readable. The read-write remainder of the database still needs backups, but it will be much smaller than 2 TB.

    - use differential backups daily and full backups less often. A differential backup is much smaller and much faster than a full backup, but to restore you also need the last full backup file (a sketch follows below).
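
    A minimal T-SQL sketch of the striping/compression/verification ideas above. The database name [BigDb] and the file paths are placeholders, not anything from the original post; point the stripes at separate physical paths to actually gain throughput.

        -- Full backup striped across 4 files, with backup compression
        -- (compression needs Enterprise on SQL 2008, or Standard and up on 2008 R2).
        BACKUP DATABASE [BigDb]
        TO  DISK = N'G:\Backup\BigDb_1.bak',
            DISK = N'H:\Backup\BigDb_2.bak',
            DISK = N'I:\Backup\BigDb_3.bak',
            DISK = N'J:\Backup\BigDb_4.bak'
        WITH COMPRESSION, STATS = 10;

        -- Periodically confirm an old read-only filegroup backup is still readable.
        RESTORE VERIFYONLY
        FROM DISK = N'G:\Backup\BigDb_ReadOnlyFG.bak';

    Note that a striped backup needs all of its files present to restore.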

    _____________________________________________________
    Microsoft Certified Master: SQL Server 2008
    XDetails Addin - for SQL Developers
    blog.sqlxdetails.com - Transaction log myths
  • And backup is one of the operations that reports its percent complete if you look at the running request in sys.dm_exec_requests.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Use this code (as Grant mentioned, it reads from dm_exec_requests). It's pretty accurate and will give you a rough idea of when the backup will complete once you let it run for a while...

        USE master;

        -- Progress and estimated completion time for running backup/restore/DBCC requests
        SELECT
            session_id AS [spid],
            '[' + CAST(database_id AS VARCHAR(10)) + '] ' + DB_NAME(database_id) AS [DB],
            percent_complete AS [%],
            start_time AS [Started],
            status AS [Sts],
            command AS [Cmd],
            DATEADD(MS, estimated_completion_time, GETDATE()) AS [Est.Comp.Time],
            cpu_time AS [CPUTime]
        FROM sys.dm_exec_requests
        WHERE command LIKE '%BACKUP%'
           OR command LIKE '%RESTORE%'
           OR command LIKE '%DBCC%';

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • Also, Bhuvnesh and Vedran have made excellent points. To give you a rough idea: I have a database of nearly the same size. Using Idera's SQL Safe software (backup compression), it takes just over an hour to back up. Using Red Gate's Hyperbac software (without altering the config file), it used to take 5+ hours. Without any compression at all, it would take over 8 hours.

    Due to the length of time backups took in the past, I opted for a full backup every 2 days, with differentials every 6 hours and TLOG backups every 15 minutes. In our situation, this works for us...
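
    A minimal sketch of what that schedule looks like in plain T-SQL (third-party tools wrap the same idea). [BigDb] and the paths are placeholders, not the poster's actual setup; the differential relies on the most recent full backup, and log backups require the FULL or BULK_LOGGED recovery model.

        -- Every 6 hours: differential backup (changes since the last full backup)
        BACKUP DATABASE [BigDb]
        TO DISK = N'G:\Backup\BigDb_diff.bak'
        WITH DIFFERENTIAL, COMPRESSION, STATS = 10;

        -- Every 15 minutes: transaction log backup
        BACKUP LOG [BigDb]
        TO DISK = N'G:\Backup\BigDb_log.trn'
        WITH COMPRESSION, STATS = 10;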

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • As everyone has mentioned, it depends. In my environment, a backup of a roughly 109 GB database takes approximately one hour; by my estimate, around 1 TB will take nearly 22 hours,

    and 2 TB might take nearly 45 to 46 hours, provided there are no disruptions such as a network failure, a backup job failure, or a disk space issue.

    My suggestion: if you have device backups, it is good to split the backup into multiple files and run it that way, because in cases like this a single backup file might fail and then all the time and effort would be wasted.

    With a device backup, even if you run the split backup through maintenance plan jobs, it is easy to identify the backup job number.

    Thanks
    Naga.Rohitkumar

  • I used Litespeed on a 2005/2008 R2 SQL cluster. As many have mentioned, a lot of factors come into play, disk IO being a major one. Native backup would never have worked out for me; too slow... I backed up a 1.5 TB database with about 80% compression using Litespeed in just over an hour. It's faster with lighter compression. Litespeed also has a backup analysis tool that will simulate backing up 50% of your DB with various compression levels and give you time and compression rate info.

    The storage was an Equalogic fibre SAN: RAID 10, 4 data LUNs, 1 TLog LUN, and I think about 30 disks.

    Disk IO has always been the bottleneck for me, so a beefy server is great, but beefy storage is better, and by beefy storage I mean lots and lots of spindles...

    ---------------------------------------------------------------
    Mike Hahn - MCSomething someday:-)
    Right way to ask for help!!
    http://www.sqlservercentral.com/articles/Best+Practices/61537/
    I post so I can see my avatar :hehe:
    I want a personal webpage 😎
    I want to win the lotto 😀
    I want a gf like Tiffa :w00t: Oh wait I'm married!:-D

  • Here is one example of highly-optimized VLDB backup:

    http://sqlcat.com/sqlcat/b/whitepapers/archive/2009/08/13/a-technical-case-study-fast-and-reliable-backup-and-restore-of-a-vldb-over-the-network.aspx

    But, even without additional hardware costs, you may achieve significant improvements by using techniques already described (diff backup, compression...).

    You may also want to check your disk system's characteristics with sqlio and determine the optimal number of backup files to maximize total IO throughput (the gain can be significant, sometimes an order of magnitude):

    http://www.brentozar.com/archive/2008/11/storage-performance-testing-with-sqlio/
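
    Separately from sqlio, a quick T-SQL-only way to gauge how fast SQL Server can read the data files (and how well they compress) is a throw-away backup to the NUL device. This is just a sketch; [BigDb] is a placeholder, and all you want from the run is the MB/sec figure printed at the end, since nothing is actually written anywhere.

        -- Discards the backup stream, so elapsed time reflects read + compression speed only.
        -- COPY_ONLY keeps this test run from resetting the differential base.
        BACKUP DATABASE [BigDb]
        TO DISK = N'NUL'
        WITH COPY_ONLY, COMPRESSION, STATS = 10;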

    A single business-class disk reads and writes at least 100 MB/s sequentially (http://www.tomshardware.com/charts/enterprise-hard-drive-charts-2010/benchmarks,105.html). If your expensive super-duper 50-disk subsystem gives you less performance than one disk, it is probably misconfigured or you have a network bottleneck; contact your SAN administrators.

    _____________________________________________________
    Microsoft Certified Master: SQL Server 2008
    XDetails Addin - for SQL Developers
    blog.sqlxdetails.com - Transaction log myths
  • Thanks everyone

    Nita

  • Many backup products allow you to back up to multiple files using multiple threads. The backup gets striped across all the files and you need them all available for a restore, but this technique can significantly reduce total backup time.

    Another technique is to distribute your database over multiple Filegroups and use Filegroup backup. You can then run a full backup of Filegroup 1 on day 1 along with diff backups of the other Filegroups. On day 2 take a full backup of Filegroup 2 along with diff backups of the other Filegroups. Eventually you will have a full backup of all filegroups that you can use for a restore. I saw this in use in 2001 for a 25TB database, and at that size and the slow kit around then it was the only way they could get a daily backup to complete within 24 hours.
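
    A sketch of that rotation in T-SQL, assuming (hypothetically) two filegroups named FG1 and FG2 and placeholder paths; with more filegroups you simply rotate which one gets the full backup each day.

        -- Day 1: full backup of FG1, differential backups of the rest.
        -- Filegroup backups of read-write filegroups generally need the FULL
        -- or BULK_LOGGED recovery model.
        BACKUP DATABASE [BigDb] FILEGROUP = N'FG1'
        TO DISK = N'G:\Backup\BigDb_FG1_full.bak' WITH STATS = 10;

        BACKUP DATABASE [BigDb] FILEGROUP = N'FG2'
        TO DISK = N'G:\Backup\BigDb_FG2_diff.bak' WITH DIFFERENTIAL, STATS = 10;

        -- Day 2: swap - full backup of FG2, differential of FG1, and so on.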

    As other people have said, enabling compression during the backup process normally reduces the total time needed for the backup.

    However, you need to start your backup design with the classic SLA items: Time to Recovery (RTO) and Data Loss Time (RPO). If you do not have these as anchors in your design, you are likely to end up with something that either falls short of business needs or is over-engineered and expensive.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
