SQLServerCentral is supported by Red Gate Software Ltd.
 

Backup and restore solution for 800TB of data
Posted Thursday, April 29, 2010 6:05 AM
Valued Member


Group: General Forum Members
Last Login: Tuesday, July 8, 2014 2:02 AM
Points: 70, Visits: 307
Our production database on SQL 2005 has grown to around 800 terabytes of data, and the transaction log is around 100 TB. We have set the frequency of the log backup job to every 10 seconds, but one execution of the log backup takes 50 seconds to complete, so we are not sure when the next backup job starts relative to the previous one. It's all messed up, and systems have become so slow that they hang frequently. Can someone suggest a better backup and restore solution?
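One way to see whether the log backups really are overlapping is to compare start and finish times in the msdb backup history. A minimal sketch, assuming the database is named YourDB (a placeholder):

```sql
-- List recent transaction log backups with their durations.
-- If one backup's start time falls before the previous backup's finish time, they overlap.
SELECT database_name,
       backup_start_date,
       backup_finish_date,
       DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_sec
FROM msdb.dbo.backupset
WHERE database_name = 'YourDB'   -- placeholder: use the actual database name
  AND type = 'L'                 -- 'L' = transaction log backup
ORDER BY backup_start_date DESC;
```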
Post #912735
Posted Thursday, April 29, 2010 6:48 AM


SSCoach


Group: General Forum Members
Last Login: Today @ 7:55 AM
Points: 15,662, Visits: 28,055
You're doing a 100TB log backup in 50 seconds? That's simply amazing right there.

But the question I have is: with an 800 TB database (which is about 800 times the size of the largest database I've dealt with), how is it possible that you have a 100 TB log? That means you're moving 1/8 of the data through the log at any one time. On any scale of system, that's excessive.

I'm excited to see what people have to say about this one.


----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #912769
Posted Thursday, April 29, 2010 7:07 AM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 7:08 AM
Points: 1,232, Visits: 6,643
That would mean less than 10 minutes to move a petabyte of information.
I was amazed recently to see a terabyte moved in 30 minutes.
The older I get, the faster technology seems to move.
Could you share an outline of the hardware that does this?

Something does seem a bit odd - can you verify the size of the db?
And can you tell us the size 24 hours ago? A week ago?
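The arithmetic behind that estimate, as a quick sanity check (assuming 1 PB = 1024 TB):

```sql
-- 100 TB in 50 seconds is 2 TB/s; at that rate a petabyte takes 512 s, about 8.5 minutes.
SELECT 100.0 / 50               AS tb_per_second,   -- 2.0
       1024 / (100.0 / 50)      AS seconds_per_pb,  -- 512
       1024 / (100.0 / 50) / 60 AS minutes_per_pb;  -- ~8.53
```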

Greg E
Post #912798
Posted Thursday, April 29, 2010 7:08 AM
Valued Member


Group: General Forum Members
Last Login: Tuesday, July 8, 2014 2:02 AM
Points: 70, Visits: 307
Actually, over the past couple of years the database has grown from a few tens of GBs to its current huge size. At design time we set the 50-second figure as the worst-case scenario, and we can't afford to increase it now. There are also a lot of applications running high-volume transactions against the database, so the t-log size is becoming unmanageable as well. Please suggest what different backup solutions are available.

By the way, do you mean the t-log can't be 100 TB for this database?
Post #912799
Posted Thursday, April 29, 2010 7:15 AM


SSCoach


Group: General Forum Members
Last Login: Today @ 7:55 AM
Points: 15,662, Visits: 28,055
I didn't say it "can't" be 100 TB, I said it probably shouldn't be. You can have a t-log hundreds of times the size of your database, but that doesn't make it right.

What I'm saying is that it's very abnormal for a transaction log to represent such a large percentage of the data in the database itself. That means you have up to 100 terabytes of transactions where the data is only 800 terabytes. You can replace TB with GB, MB, or bytes, and it still sounds problematic. It means 1/8 of your total data is being transacted upon, based on your issue, within 50 seconds. Again, highly unusual. But then, everything about this setup is in the range of unusual.


Post #912809
Posted Thursday, April 29, 2010 7:17 AM


SSC-Dedicated


Group: Administrators
Last Login: Today @ 8:00 PM
Points: 33,204, Visits: 15,354
It can be 800TB, but that is huge. Are you sure you're not mistaking the size by a factor of 10?

Also, I'm curious how you are backing up the log every 10 seconds. Are you using some external program? AFAIK, scheduled jobs can run at most once per minute; they can't be more granular than that.

Maybe you can post the output from

SELECT * FROM sys.database_files;
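A slightly narrower version may be easier to read, since the size column is reported in 8 KB pages; a sketch:

```sql
-- Convert the size column (8 KB pages) to GB for each data and log file.
SELECT name,
       type_desc,                        -- ROWS or LOG
       size * 8.0 / 1024 / 1024 AS size_gb
FROM sys.database_files
ORDER BY size DESC;
```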


Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #912811
Posted Thursday, April 29, 2010 7:26 AM
Ten Centuries


Group: General Forum Members
Last Login: Thursday, April 17, 2014 4:56 AM
Points: 1,002, Visits: 884
Are you really sure it's 800TB and not 800GB?

SQL 2008 only supports 512TB databases:
http://msdn.microsoft.com/en-us/library/ms143432.aspx

So does SQL 2005:
http://msdn.microsoft.com/en-us/library/ms143432(SQL.90).aspx


/* ----------------------------- */
Daughter of Elysium, we enter, drunk with fire, Heavenly One, your sanctuary! (Schiller, "Ode to Joy")

Post #912823
Posted Thursday, April 29, 2010 7:30 AM


SSC-Forever


Group: General Forum Members
Last Login: Today @ 10:05 AM
Points: 42,829, Visits: 35,961
Get a professional in. There are going to be very few people with experience in databases over a few TB, and an 800TB database has to be one of the biggest SQL Server databases around.

I'd suggest approaching Microsoft and seeing if you can get the CAT (Customer Advisory Team) people involved. Or contact Paul Randal and see if he'd be available for some consulting.

With a database this big, this active, and as important as you indicate, you should be looking at getting the absolute top people in to do a professional and comprehensive job. Relying on free help for something like this is recklessness at best.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #912826
Posted Thursday, April 29, 2010 7:32 AM


SSC-Insane


Group: General Forum Members
Last Login: Today @ 9:04 PM
Points: 23,302, Visits: 32,057
Must be using multiple t-log files, since the max size of a single t-log file is 2 TB.
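If the log really is spread across many 2 TB files, a server-wide listing along these lines would show them (sys.master_files covers every database on the instance):

```sql
-- One row per transaction log file on the instance; size is in 8 KB pages.
SELECT DB_NAME(database_id)     AS database_name,
       name                     AS logical_file_name,
       size * 8.0 / 1024 / 1024 AS size_gb
FROM sys.master_files
WHERE type_desc = 'LOG'
ORDER BY size DESC;
```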



Lynn Pettis

For better assistance in answering your questions, click here
For tips to get better help with Performance Problems, click here
For Running Totals and its variations, click here or when working with partitioned tables
For more about Tally Tables, click here
For more about Cross Tabs and Pivots, click here and here
Managing Transaction Logs

SQL Musings from the Desert Fountain Valley SQL (My Mirror Blog)
Post #912829
Posted Thursday, April 29, 2010 7:33 AM


Ten Centuries


Group: General Forum Members
Last Login: Wednesday, August 27, 2014 1:51 AM
Points: 1,262, Visits: 3,422
According to the maximum capacity specifications of SQL Server, how can the log file be 100 TB (the spec's maximum is 2 TB per log file)?

============================================================
SELECT YOUR PROBLEM FROM SSC.com WHERE PROBLEM DESCRIPTION =
http://www.sqlservercentral.com/articles/Best+Practices/61537/
Post #912830