Backup and restore solution for 800TB of data

  • Our production database on SQL 2005 has grown to around 800 terabytes of data, and the transaction log is around 100TB. We have set the frequency interval of the log backup job to 10 seconds, but one execution of the log backup takes 50 seconds to complete, so we are not sure when the next backup job starts relative to the previous one. It's all messed up, and the systems have become so slow that they hang frequently. Can someone suggest a better backup and restore solution?
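
    A minimal sketch (the database name below is a placeholder, not from the thread) for checking from msdb history when each log backup actually started and how long it ran, which should show whether the 10-second schedule is overlapping the 50-second runs:

    SELECT TOP (50)
           bs.database_name,
           bs.backup_start_date,
           bs.backup_finish_date,
           DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
           bs.backup_size / 1048576.0 AS backup_size_mb   -- backup_size is in bytes
    FROM msdb.dbo.backupset AS bs
    WHERE bs.type = 'L'                       -- log backups only
      AND bs.database_name = 'YourDatabase'   -- placeholder name
    ORDER BY bs.backup_start_date DESC;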

  • You're doing a 100TB log backup in 50 seconds? That's simply amazing right there.

    But the question I have is, with an 800TB database (which is about 800 times the size of the largest database I've dealt with), how is it possible that you have a 100TB log? That means you're moving 1/8 of the data through the log at any one time. On a system of any scale, that's excessive.

    I'm excited to see what people have to say about this one.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • That would mean less than 10 minutes to move a PetaByte of information (rough arithmetic is sketched at the end of this post).

    I was amazed to recently see a TeraByte moved in 30 minutes.

    The older I get, the faster technology seems to move.

    Could you share an outline of the hardware to do this?

    Something does seem a bit odd - can you verify the size of the db?

    And can you tell us the size 24 hours ago? A week ago?

    Greg E
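
    A rough worked version of that arithmetic, using only the figures quoted in the original post (100TB of log backed up in 50 seconds); treat it as a sanity check, not a measurement:

    SELECT 100.0 / 50.0                        AS implied_tb_per_sec,    -- ~2 TB/s
           (1024.0 / (100.0 / 50.0)) / 60.0    AS minutes_per_petabyte;  -- ~8.5 minutes (1PB = 1024TB)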

  • Actually, over the past couple of years the DB has grown from a few tens of GBs to its current size. At the time of design, we kept the setting at 50 seconds for the worst-case scenario, and we can't afford to increase this time now. There are also a lot of applications running high-volume transactions against the database, so the tlog size is becoming unmanageable as well (a way to check actual log usage is sketched after this post). Please suggest different backup solutions.

    By the way, do you mean the TLog can't be 100TB for this database?
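
    A minimal sketch for checking how full that log actually is and what is holding up reuse ('YourDatabase' is a placeholder name, not from the thread); both commands are available in SQL Server 2005:

    DBCC SQLPERF (LOGSPACE);   -- log size and percent of log space used, per database

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'YourDatabase';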

  • I didn't say it "can't" be 100TB, I said it probably shouldn't be. You can have a tlog hundreds of times the size of your database, but that doesn't make it right.

    What I'm saying is, it's very abnormal for a transaction log to represent such a large percentage of the data in the database itself. It means you have up to 100 terabytes of transactions against only 800 terabytes of data. You can replace the TB with GB or MB or B, and it still sounds problematic. Based on your numbers, 1/8 of your total data is being transacted upon within 50 seconds. Again, highly unusual. But then, you're already well into the range of unusual.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • It can be 800TB, but that is huge. Are you sure you're not mistaking the size by a factor of 10?

    Also, I'm curious how you are backing up the log every 10 seconds. Are you using some external program? AFAIK, scheduled jobs can run at most once a minute, not with any finer granularity than that.

    Maybe you can post the output from

    select * from sys.database_files
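
    A slightly fuller version of that check, as a sketch only: sys.database_files reports size in 8KB pages, so the conversion below turns it into GB. Run it in the database in question.

    SELECT name,
           type_desc,
           size * 8.0 / 1024 / 1024 AS size_gb   -- size is in 8KB pages
    FROM sys.database_files;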

  • Are you really sure it's 800TB and not 800GB?

    SQL 2008 only supports 512TB databases.

    http://msdn.microsoft.com/en-us/library/ms143432.aspx

    So does SQL 2005

    http://msdn.microsoft.com/en-us/library/ms143432(SQL.90).aspx

    /* ----------------------------- */
    Daughter of Elysium, drunk with fire we enter, Heavenly One, your sanctuary!

  • Get a professional in. There are going to be very few people with experience in databases over a few TB, and an 800TB database has to be one of the biggest SQL Server databases around.

    I'd suggest approaching Microsoft and seeing if you can get the CAT (Customer Advisory Team) people involved. Or contact Paul Randal and see if he'd be available for some consulting.

    With a database this big, this active and as important as you indicate, you should be looking at getting the absolute top people in to do a professional and comprehensive job. Relying on free help for something like this is recklessness at best.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Must be using multiple t-log files, since the maximum size of a single t-log file is 2TB.
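
    A minimal sketch for checking how many log files the database actually has and how big they are ('YourDatabase' is a placeholder name; size is reported in 8KB pages):

    SELECT name,
           size * 8.0 / 1024 / 1024 AS size_gb
    FROM sys.master_files
    WHERE database_id = DB_ID('YourDatabase')
      AND type_desc = 'LOG';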

  • According to the maximum capacity specifications of SQL Server, how can the log file be 100TB (the max spec is 2TB per log file)?!

    ============================================================
    SELECT YOUR PROBLEM FROM SSC.com WHERE PROBLEM DESCRIPTION =
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

  • Are you sure it's TB, not GB? A log file cannot be bigger than 2TB.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • If this turns out to be gigs, I'm going to cry. I was terribly excited about an 800TB system.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey (4/29/2010):

    If this turns out to be gigs, I'm going to cry. I was terribly excited about an 800TB system.

    Me too. ;-)

  • My bet is on GBs, not TB.

    -------------------------------------------------------------
    "It takes 15 minutes to learn the game and a lifetime to master"
    "Share your knowledge. It's a way to achieve immortality."

  • Aside from the fact that the largest size of a DB is 500TB, assume you even had something that large.

    How many:

    Teamed network cards?

    HBA controllers?

    Drives? (The max MDF file size is 16TB.)

    What server could handle 25 1Gb network cards and 50 controllers?

    How much total SAN space does the enterprise have? Few places have 1PB of SAN space just for data, let alone backups.

    It could not be backed up over copper, and fibre would still be very time-consuming, even with the best router (see the rough arithmetic sketched after this post).

    You could never DR it: the amount of time to log-ship/mirror over multiple T3s would be very long, plus you would need that much space on the DR side.

    I'm sure it's 800GB, but it's interesting to think about what it would take if it really were 800TB. Given a max DB size of 500TB, though, he has to mean 800GB.

    /* ----------------------------- */
    Daughter of Elysium, drunk with fire we enter, Heavenly One, your sanctuary!
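
    Rough arithmetic only, and the 10Gb/s link speed below is an assumption (not a figure from the thread): streaming 800TB through a single ~1.25GB/s pipe would take on the order of a week, before any compression or striping across multiple backup targets.

    SELECT (800.0 * 1024 * 1024) / (1.25 * 1024)           AS seconds_at_10gbps,   -- ~655,360 seconds
           (800.0 * 1024 * 1024) / (1.25 * 1024) / 86400.0 AS days_at_10gbps;      -- ~7.6 days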
