Transaction Log Growth, do you need it?

  • Comments posted to this topic are about the item Transaction Log Growth, do you need it?

    ~ Without obstacles, you cannot progress ~
    http://sqln.blogspot.com/

  • James,

    Well done! An excellent article...

    Mark

  • Multiple log files: I thought the recommendation was one log file per cpu, for performance?

  • Indianrock (1/5/2009)


    Multiple log files: I thought the recommendation was one log file per cpu, for performance?

    That's recommended for the data files of tempdb... (also keep them the same size)
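
    For anyone following along, here is a minimal sketch of what that looks like (one extra data file shown; the file name, path, and sizes below are placeholders, not a recommendation for any specific server):

        -- Sketch: add an equally sized tempdb data file (repeat per CPU core as desired).
        -- Logical name, path, and sizes are placeholders.
        ALTER DATABASE tempdb
        ADD FILE
        (
            NAME = tempdev2,
            FILENAME = N'T:\TempDB\tempdev2.ndf',
            SIZE = 1GB,
            FILEGROWTH = 256MB
        );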

    ** Don't mistake the ‘stupidity of the crowd’ for the ‘wisdom of the group’! **
  • Oh, of course. I made my comment before the coffee was ready, sorry.

  • Good article. One thing to add: if running SQL2K5 or later on Windows 2003 or later, enable instant file initialization. Doing this will minimize the impact of growing the log or data file, especially for larger growth settings.
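
    If you want to check whether instant file initialization is actually kicking in, one rough sketch (my own, not from the article; the database and logical file names are placeholders) is to enable trace flags 3004 and 3605, which write file-zeroing messages to the error log, and then grow a file. An instant-initialized data file will not produce a "Zeroing" entry, while a log file always will.

        -- Rough sketch: see whether file growth is being zeroed or instant-initialized.
        -- TF 3004 reports file zeroing; TF 3605 sends that output to the SQL Server error log.
        DBCC TRACEON (3004, 3605, -1);

        -- Grow a data file manually (database and logical file name are placeholders),
        -- then inspect the error log: a "Zeroing ..." message means IFI did not apply.
        ALTER DATABASE MyTestDb
            MODIFY FILE (NAME = MyTestDb_Data, SIZE = 600MB);

        DBCC TRACEOFF (3004, 3605, -1);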

    David

  • This article indicates Instant File Initialization is only for data files.

    http://www.microsoft.com/technet/prodtechnol/sql/2005/physdbstor.mspx

    Instant File Initialization

    SQL Server 2005 added support for the Windows instant file initialization feature. By default, when a file is created or grows larger, the pages in that file are initialized by writing zeros before the file gets used. This overwrites any existing data that remains on the disk. Instant initialization is only used for data files (not log files) and is enabled when the account running SQL Server has the Windows SE_MANAGE_VOLUME_NAME privilege, which is available only on Microsoft Windows XP, Windows Server 2003 or later versions. This occurs in five scenarios:

    During file creation

    CREATE DATABASE, including tempdb creation at server startup.

    RESTORE DATABASE

    During file modification

    ALTER DATABASE...MODIFY FILE.

    Modifications that result in autogrow activity.
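
    For reference, here is a minimal sketch of my own (not from the linked document) of the two "file modification" cases above; the database and logical file names are placeholders:

        -- Manual growth (instant-initialized when it is a data file and IFI is enabled):
        ALTER DATABASE MyDb
            MODIFY FILE (NAME = MyDb_Data, SIZE = 10GB);

        -- Fixed-increment autogrow instead of a percentage, so each autogrow event
        -- (and the zeroing that always happens for log files) stays predictable:
        ALTER DATABASE MyDb
            MODIFY FILE (NAME = MyDb_Log, FILEGROWTH = 512MB);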

  • Good job, James. A nice explanation that I'll have to remember to re-read periodically.

    I thought I heard last year that there's no great benefit to multiple log files, except space management. I wish I had a reference for that now.

  • This article indicates Instant File Initialization is only for data files.

    That was an oversight on my part; you are correct. Too much R&R over the holiday, I guess. :hehe:

    David

  • Steve, and everyone else...

    Thanks for the great compliments!

    Yes, as mentioned in my article, there is no performance benefit to multiple log files. It seems the only benefit is file management for disk space and other administrative tasks where you would want the log file kept at a smaller size (e.g., disk mirroring, or limited hard drive space).

    BOL: Adding and Deleting Data and Transaction Log Files (http://msdn.microsoft.com/en-us/library/ms191433.aspx), says "SQL Server uses a proportional fill strategy across all the files within each filegroup and writes an amount of data proportional to the free space in the file. This enables the new file to be used immediately. In this way, all files generally become full at about the same time. However, transaction log files cannot be part of a filegroup; they are separate from one another. As the transaction log grows, the first log file fills, then the second, and so on, by using a fill-and-go strategy instead of a proportional fill strategy. Therefore, when a log file is added, it cannot be used by the transaction log until the other files have been filled first."

    This is a solid indication that there is no benefit to performance when using multiple transaction log files.
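
    To make that BOL passage concrete, here is a hedged sketch (database name, file name, and path are all placeholders) of adding a second log file and then watching the fill order; the new file only starts taking log records once the first is full.

        -- Sketch: add a second log file purely for space management, not performance.
        ALTER DATABASE MyDb
        ADD LOG FILE
        (
            NAME = MyDb_Log2,
            FILENAME = N'E:\SQLLogs\MyDb_Log2.ldf',
            SIZE = 1GB,
            FILEGROWTH = 512MB
        );

        -- Log size and percent used per database; with two log files you will see the
        -- first one fill before SQL Server starts writing to the second (fill-and-go).
        DBCC SQLPERF (LOGSPACE);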

    Thanks,

    James

    ~ Without obstacles, you cannot progress ~
    http://sqln.blogspot.com/

  • Nice one...

  • I'm trying to make a log file autogrow. I created a DB with one table with a couple of char(8000) fields and inserted many, many rows, but the log never gets to 100%. Using DBCC SQLPERF(LOGSPACE), the fullest I saw it was 87%, and then the percentage used decreases again, and so on.

    I don't get it

    Alberto De Rossi
    Microsoft Certified Solutions Associate - SQL Server
    Microsoft MVP - Data Platform
    Power BI User Group Lima - Community Leader

  • What is your recovery model set to?

    It sounds like it might be set to SIMPLE. If it is, try setting it to FULL and then attempt to continuously add data to the database.

    If it is in SIMPLE, then I'm guessing, from your description, that the log is clearing itself out before you manage to fill it up. The log, in the simple recovery model, is truncated whenever you hit a checkpoint, and checkpoints can be triggered by many different events, such as backing up the database.

    Also, keep in mind that the log may be written to only minimally, depending on how the data is being inserted. For example, if you use a bulk insert, the log may only hold enough information to reproduce the bulk insert, as opposed to holding all of the row data. If you are intent on filling up the log file as much as possible, insert row by row; this is probably the most log-intensive way to load data.
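
    If it helps, here is a rough sketch of the kind of fully logged, row-by-row load I mean, against a throwaway test database (all object names below are made up), with DBCC SQLPERF(LOGSPACE) to watch the fill level:

        -- Rough sketch: fully logged, row-by-row inserts against a throwaway database.
        ALTER DATABASE LogFillTest SET RECOVERY FULL;

        -- One full backup starts the log chain so checkpoints stop truncating the log.
        -- Backing up to the NUL device is a test-only trick that discards the backup.
        BACKUP DATABASE LogFillTest TO DISK = N'NUL';

        USE LogFillTest;

        CREATE TABLE dbo.Filler (id INT IDENTITY(1,1) NOT NULL, pad CHAR(8000) NOT NULL);

        DECLARE @i INT;
        SET @i = 1;
        WHILE @i <= 100000
        BEGIN
            INSERT INTO dbo.Filler (pad) VALUES (REPLICATE('x', 8000));
            SET @i = @i + 1;
        END;

        DBCC SQLPERF (LOGSPACE);   -- watch "Log Space Used (%)" climb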

    Keep in mind that the FULL recovery model will not truncate the transaction log until you back up the log file. In other words, if you back up the database (while using the FULL recovery model), the log file will not be automatically truncated the way it is in the SIMPLE recovery model. This should resolve your issue of not being able to fill up the log file for your testing. Do be sure to keep an eye on it, though, and back up the log file (or even shrink it, if needed) at the end of your testing; SQL Server will not do this for you. Even if you set the database back to the SIMPLE model after all your testing, you may still need to manually shrink the file down if you don't want to keep the log at the size it reached during testing. As illustrated in my article, if the log file does become full and can no longer grow to allow new transactions, you will receive the error I mentioned.
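
    And for the cleanup afterwards, a minimal sketch (the backup path and logical file name are placeholders you would swap for your own):

        -- Sketch of the cleanup once the testing is done.
        BACKUP LOG LogFillTest TO DISK = N'E:\Backups\LogFillTest_log.trn';

        -- Shrink the log file back down to a reasonable size (target is in MB).
        USE LogFillTest;
        DBCC SHRINKFILE (LogFillTest_log, 1024);

        -- Optionally drop back to SIMPLE once point-in-time recovery is no longer needed.
        ALTER DATABASE LogFillTest SET RECOVERY SIMPLE;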

    Those are just a few immediate ideas.

    Hope this helps!

    James

    ~ Without obstacles, you cannot progress ~
    http://sqln.blogspot.com/

  • Great article. Here's a question that I've been looking for an answer to: what do you use as a realistic size for a transaction log? We take t-log backups every 2 hours. On any given in-house database we generally size the t-log to be anywhere between 1/4 and 1/2 the size of the data files. On several out-of-house apps that rule won't apply; e.g., we have an 8 GB DB that has a t-log of 192 GB.

    At what point do you go back to the vendor and tell them to manage their transactions, and that a nominal log size is less than or equal to the data file size?

    Just wondering.

    Tom

  • Tom,

    To start off, an improperly sized log file such as the one you are describing can actually hurt your performance to an unacceptable degree. You may find that resizing that TLog increases your performance for that database tenfold or better!

    In a discussion on one of my other articles I provided some concrete testing that you can reproduce to see (as well as show the vendor) how an improperly sized log file affects your system. Go to http://www.sqlservercentral.com/Forums/FindPost817363.aspx to see the testing and results, as well as the post I added immediately afterwards on how to set up the testing yourself.

    Now on to some things to ponder...

    I'm not sure why there would be a 192 GB TLog for an 8 GB database.

    First question: Is this actually space that is actively used?

    Second question: Are backups being performed?

    Third question: Does the TLog come to you empty and already sized this large?

    If yes to #1: check the regular backup schedule for the TLog and adjust it appropriately... or adjust the recovery model if TLog backups aren't being used at all. Once the TLog is being regularly backed up, or the model is set to SIMPLE, measure the TLog capacity before each backup (or on a regular basis) to determine how much space is actually required. I'd then resize it to that plus 10-20% for spikes.

    If no to #2: start making regular TLog backups, or set the recovery model to SIMPLE. If yes, run a script just before each TLog backup to determine the capacity of the TLog, and store the results to a table so you have a historical record (see the sketch after these points). After some time, review and adjust the size as in the scenario above.

    If yes to #3: tell the vendor it is ridiculous for the database to arrive sized like that! There's almost certainly no way the TLog could ever use that much space for running transactions. The vendor needs to seriously discuss this with their DBAs to ensure it doesn't keep happening. Requiring a client to have 200 GB of space available for a DB that ultimately takes up 8 GB is unacceptable by any shop's definition. You don't buy a 10,000-square-foot house and only live in the garage... do you?? People do the opposite: use the garage to store junk and vehicles and live in the house. A database is the same way.
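
    Here is the kind of capacity-logging script I mean for #2: a rough sketch (the table name and where you create it are yours to pick) that snapshots DBCC SQLPERF(LOGSPACE) into a history table just before each TLog backup.

        -- Sketch: snapshot DBCC SQLPERF(LOGSPACE) into a history table before each TLog backup.
        -- Table name and location (an admin/utility database) are placeholders.
        CREATE TABLE dbo.LogSpaceHistory
        (
            CapturedAt      DATETIME NOT NULL DEFAULT (GETDATE()),
            DatabaseName    SYSNAME  NOT NULL,
            LogSizeMB       FLOAT    NOT NULL,
            LogSpaceUsedPct FLOAT    NOT NULL,
            StatusFlag      INT      NOT NULL
        );

        -- Capture the current log usage for every database on the instance.
        INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, StatusFlag)
        EXEC ('DBCC SQLPERF (LOGSPACE)');

        -- After a few weeks, review the peaks and resize each log to its peak plus 10-20%.
        SELECT DatabaseName,
               MAX(LogSpaceUsedPct) AS PeakPctUsed,
               MAX(LogSizeMB)       AS LogSizeMB
        FROM dbo.LogSpaceHistory
        GROUP BY DatabaseName;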

    Also show them query timing results if you can get some: one set of times using their 192 GB of log space and one using, say, 4 GB of TLog space. You'll blow them away with the time difference, and with how silly they were for not spotting this gaping black hole of time!

    I can't even imagine what the backup time is for that TLog; even with only 1% of its capacity used it has to be minutes instead of seconds, and that much longer as more space gets used. You might also do a time comparison of how much productivity (and CPU resources) you are losing because of this serious misjudgment on their part. Even if they come back and say "we just want to ensure you don't run into TLog problems", you can counter with "that's great, but what about the IO problems you are causing me when I back this up??"... and if you ever use this with the SIMPLE model, that will hurt your IO even further!!

    I hope this gets you started. Feel free to PM me and I can discuss some other ways to go about this.

    Thanks,

    James

    ~ Without obstacles, you cannot progress ~
    http://sqln.blogspot.com/
