Transaction Log Growth, do you need it?
Posted Wednesday, January 7, 2009 10:33 AM
Nice one...


Post #631663
Posted Thursday, July 16, 2009 3:48 PM


I'm trying to make a log file autogrow. I created a database with one table containing a couple of char(8000) fields and inserted many, many rows, but the log never gets to 100%. Using DBCC SQLPERF(LOGSPACE), the fullest I ever saw it was 87%, and then the percentage used decreases again, and so on.

I don't get it
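
For reference, this is roughly how I'm watching it while loading data (the database name here is just a placeholder):

-- Overall log usage for every database: look at the "Log Space Used (%)" column
DBCC SQLPERF(LOGSPACE);

-- Physical log file size and growth settings for one database (TestDB is a made-up name)
SELECT name,
       size * 8 / 1024 AS size_mb,   -- size is stored in 8 KB pages
       max_size,
       growth,
       is_percent_growth
FROM TestDB.sys.database_files
WHERE type_desc = 'LOG';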
Post #754525
Posted Friday, July 17, 2009 5:56 PM


What is your recovery model set at?

It sounds like it might be set to SIMPLE. If it is, try setting it to FULL and then attempt to continuously add data to the database.

If it is in SIMPLE then I'm guessing, from your description, that the log is clearing itself out before you manage to fill it up. In the simple recovery model the log is truncated whenever a checkpoint occurs, and checkpoints can be triggered by many different events, such as backing up the database.
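
To check which model you are in and switch it for the test, something along these lines should do (the database name and backup path are placeholders):

-- See the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'TestDB';

-- Switch to FULL so the log is only cleared by log backups
ALTER DATABASE TestDB SET RECOVERY FULL;

-- Take a full backup afterwards so the log chain actually starts
BACKUP DATABASE TestDB TO DISK = 'C:\Backups\TestDB.bak';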

Also, keep in mind that the log may get minimally written to depending on how data is being inserted. Example if you "bulk insert" then the log may only hold enough info to reproduce the bulk insert, as opposed to holding all the row data info. You may want to ensure, only if you are intent on filling up the log file as much as possible, to insert line by line...this is probably the most intensive action that the log keeps track of.
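
As a rough illustration, a row-by-row loop like this generates far more log than a single minimally logged load (the table name and row count are made up):

-- Hypothetical filler table with two char(8000) columns
CREATE TABLE dbo.FillerTable (Col1 char(8000), Col2 char(8000));

-- Each single-row insert runs in its own implicit transaction and is fully logged
DECLARE @i int;
SET @i = 0;
WHILE @i < 100000
BEGIN
    INSERT INTO dbo.FillerTable (Col1, Col2)
    VALUES (REPLICATE('x', 8000), REPLICATE('y', 8000));
    SET @i = @i + 1;
END

-- Watch the log fill up while the loop runs
DBCC SQLPERF(LOGSPACE);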

Keep in mind that the FULL recovery model will not truncate the transaction log until you back up the log file. In other words, if you back up the database (while using the FULL recovery model), the log file will not be automatically truncated the way it is in the SIMPLE recovery model. This should resolve your issue of not being able to fill up the log file for your testing. Do be sure to keep an eye on it, though, and back up the log file (or even shrink it, if needed) at the end of your testing; SQL Server will not do this for you. Even if you set the database back to the SIMPLE model after all your testing, you may still need to manually shrink the file down if you don't want to keep the final log file size you ended up with. And, as illustrated in my article, if the log file does become full and can no longer grow to allow new transactions, you will receive the error I mentioned.
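
When you're done testing, something along these lines will flush and shrink the log back down (the logical log file name, backup path, and target size are placeholders):

-- Back up the log so the inactive portion can be cleared
BACKUP LOG TestDB TO DISK = 'C:\Backups\TestDB_log.trn';

-- Shrink the physical log file back to a sensible size (target in MB)
USE TestDB;
DBCC SHRINKFILE (TestDB_log, 1024);

-- Optionally drop back to SIMPLE if that is your normal model
ALTER DATABASE TestDB SET RECOVERY SIMPLE;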

Those are just a few immediate ideas.

Hope this helps!
James


~ Without obstacles, you cannot progress ~
http://sqln.blogspot.com/
Post #755206
Posted Wednesday, March 17, 2010 12:17 PM
Great article. Here's a question that I've been looking for an answer to: what do you use as a realistic size for a transaction log? We take t-log backups every 2 hours. On any given in-house database we generally size the t-log at anywhere between 1/4 and 1/2 the size of the data files. On several out-of-house apps that rule won't apply; e.g. we have an 8 GB database that has a t-log of 192 GB.

At what point do you go back to the vendor, tell them to manage their transactions, and say that a nominal size is less than or equal to the data file size?

Just wondering.
Tom



Post #884942
Posted Wednesday, March 17, 2010 3:56 PM


Tom,

To start off, an improperly sized log file such as the one you are describing can hurt your performance to an unacceptable degree. You may find that resizing that TLog improves performance for that database tenfold or better!

In a discussion on one of my other articles I provided some concrete testing that you can reproduce to see (as well as show the vendor) how an improperly sized log file affects your system. Go to http://www.sqlservercentral.com/Forums/FindPost817363.aspx to see the testing and results, as well as the post I added immediately afterwards on how to set up the testing yourself.

Now on to some things to ponder...

I'm not sure why there would be a 192 GB TLog for an 8 GB database.

First question: Is this actually space that is actively used?

Second question: Are backups being performed?

Third question: Does the TLog come to you empty but already sized this large?

If yes to #1: check that there is a regular backup schedule for the TLog and adjust it appropriately, or adjust the recovery model if TLog backups aren't being used at all. Once TLogs are getting regularly backed up, or the model is set to SIMPLE, measure the TLog capacity before each backup, or on a regular basis, to determine how much space is actually required. I'd then resize it to that plus 10-20% for spikes.

If no to #2: start regularly making TLog backups, or set the recovery model to SIMPLE. If yes, run a script just before each TLog backup to determine the capacity of the TLog, and store the results to a table so you can build a historical record (a rough sketch follows below). After some time, review and adjust the size as in the scenario above.
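
A minimal sketch of capturing that history (the table name is made up; run the insert from a job just before each log backup):

-- One-time setup: history table for log usage snapshots
CREATE TABLE dbo.LogSpaceHistory (
    CaptureDate      datetime NOT NULL DEFAULT GETDATE(),
    DatabaseName     sysname  NOT NULL,
    LogSizeMB        float    NOT NULL,
    LogSpaceUsedPct  float    NOT NULL,
    Status           int      NOT NULL
);

-- DBCC SQLPERF(LOGSPACE) returns one row per database; capture it each time
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');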

If yes to #3: tell the vendor it is ridiculous for the log to arrive at that size! There is almost certainly no way a TLog could ever use that much space for running transactions. The vendor needs to seriously discuss this with their DBAs to make sure it doesn't keep happening. Requiring a client to have 200 GB of space available for a database that ultimately takes up 8 GB is unacceptable by any shop's definition. You don't buy a 10,000 square foot house and only live in the garage, do you? People do the opposite: they use the garage to store junk and vehicles and live in the house. A database works the same way.

Also show them the results of query timings if you can get some: one set of times using their 192 GB of log space and one using, say, 4 GB of TLog space. You'll blow them away with the difference, and with how silly they were for not spotting this gaping black hole of time!

I can't even imagine what the backup time is for that TLog; even at only 1% capacity used it has to be minutes instead of seconds, and it only gets worse as more space is used. You might also do a time comparison of how much productivity (and CPU resource) you are losing because of this serious misjudgement on their part. Even if they come back and say "we just want to ensure you don't run into TLog problems", you can counter with "that's great, but what about the I/O problems you are causing me when I back this up?" And if you ever run this under the SIMPLE model, it will hurt your I/O even further!

I hope this gets you started. Feel free to PM me and I can discuss some other ways to go about this.

Thanks,
James


~ Without obstacles, you cannot progress ~
http://sqln.blogspot.com/
Post #885087
Posted Monday, March 29, 2010 9:23 AM
Most of the issues that I see with transaction log file growth are due to the logging of index rebuilds. That is, under the FULL recovery model, whenever there is an index rebuild, those rebuild transactions are recorded. Depending on how many indexes are being rebuilt, this can be millions of transactions. Recording these transactions is completely unnecessary, IMO, as no one would ever restore to a point in the middle of an index rebuild. Also, one can simply rebuild an index at any time if necessary.

Our LDF file used to be 2 to 3 times the size of our database, which is 50 GB. Since I am not a fan of the endless shrink/grow cycle that I see people using (it causes disk fragmentation and doesn't really solve any disk space dilemmas), I have changed my index rebuild schedule and now also switch to the SIMPLE recovery model prior to index rebuilds, to prevent those rebuild transactions from being recorded in the LDF file. I then switch back to the FULL recovery model following the rebuild (with backups before and after the switch to keep the restore chain intact).
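
Roughly, the sequence looks like this (the database name, table name, and backup paths are just placeholders, not our real ones):

-- Log backup before the switch, then drop to SIMPLE so the rebuild isn't fully logged
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log_before.trn';
ALTER DATABASE MyDB SET RECOVERY SIMPLE;

-- Rebuild indexes (example for a single table)
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;

-- Switch back to FULL and take a full (or differential) backup to restart the log chain
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_after_rebuild.bak';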

Since doing this, our LDF never grows beyond 500 MB (yes, that's MB, not GB). I have not run into any issues since incorporating this method.

Does anyone see any potential issues with this strategy??
Post #891953
Posted Friday, April 9, 2010 4:00 AM
One thing that is not clear to me is setting/modifying the log file size. Say you have a database that grows on a daily basis: for the sake of storage space, what is the actual difference between enabling autogrowth and using a fixed size? Also, between the BULK_LOGGED and SIMPLE settings, which one would you use with respect to the log file?
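
For context, both options are set on the same log file; a rough sketch of the two, with made-up database, logical file name, and sizes:

-- Option 1: pre-size the log and keep a modest autogrowth increment as a safety net
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 4096MB, FILEGROWTH = 512MB);

-- Option 2: effectively a fixed size, by capping the file at its current size
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 4096MB, MAXSIZE = 4096MB);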
Post #900302
Posted Friday, April 9, 2010 8:59 AM
Thanks for the replies.
We take t-log backups every 2 hours, in full recovery mode. This vendor has been more difficult than most to deal with when it comes to their processing. We do index rebuilds nightly, but they have no real effect on the t-logs. That said, after shrinking the log and running their process we don't end up with a 192 GB log file, or anywhere near that; it has been steady at 8 GB for about the last month. So we'll keep our fingers crossed and see what happens next.

Thanks
Tom



Post #900625
Posted Friday, April 16, 2010 9:26 AM


Excellent article.

What do you recommend when it comes to maintenance tasks such as a regularly scheduled reorganization of tables?

Whenever the maintenance task runs, the t-logs fill very fast and exceed the size of the mdf's. I set up the task to help improve performance.

Thanks in advance for any replies.

Keith
Post #904971
Posted Friday, April 30, 2010 7:37 AM


Kevin,

Do you do this in your maintenance plan, or are you running a job outside of your maintenance plan? This sounds like the best option, and I would like to implement a similar practice. Thanks for your reply.
Post #913692