Transaction Log Growth, do you need it?

  • Most of the issues that I see on Transaction Log File growth are due to the recording of index rebuilds. Meaning, with the FULL recovery model, whenever there is an index rebuild, those rebuild transactions are recorded. Depending on how many indexes are being rebuilt, this can be millions of transactions. Recording these transactions is completely unnecessary IMO as no one would ever restore to a point in the middle of an index rebuild. Also, one can simply rebuild an index at any time if necessary.

    Our LDF file used to be 2 to 3 times the size of our DB which is 50GB. Since I am not a fan of the endless Shrink/Grow cycle that I see people using (causes disk fragmentation and doesn't really solve any disk space dilemmas), I have since changed my index rebuild schedule and now also switch to SIMPLE recovery mode prior to Index Rebuilds to prevent these rebuild transactions from being recorded in the LDF file. I then switch back to FULL recovery mode following the rebuild (with backups before and after the switch to keep the restore chain intact).

    Since doing this, our LDF never grows beyond 500mb (yes, that's MB not GB). I have not run into any issues since incorporating this method.

    Does anyone see any potential issues with this strategy?

  • One thing that is not clear to me is setting/modifying the log file size. Say you have a database that grows on a daily basis; for the sake of storage space, what is the actual difference between enabling autogrowth and using a fixed size? And between the BULK_LOGGED and SIMPLE recovery models, which one should you use with respect to the log file settings?

  • Thanks for the replies.

    We take t-log backups every 2 hours, in full recovery mode. This vendor has been more difficult than most to deal with when it comes to their processing. We do index rebuilds nightly, but they have no real effect on the t-logs. That is to say, after shrinking and running the process we don't have a 192 GB log file, or anywhere near that. It has been steady at 8 GB for about the last month. So we'll keep our fingers crossed and see what happens next.

    Thanks

    Tom

  • Excellent article.

    What do you recommend when it comes to maintenance tasks such as a regularly scheduled reorganization of tables?

    Whenever the maintenance task runs, the t-logs fill very fast and exceed the size of the MDFs. I set up the task to help improve performance.

    Thanks in advance for any replies.

    Keith

  • Kevin,

    Do you do this in your maintenance plan or are you running a job outside of your maintenance plan? This sounds like the best option and I would like to implement a similar practice. Thanks for your reply.

  • kwoznica (4/30/2010)


    Kevin,

    Do you do this in your maintenance plan or are you running a job outside of your maintenance plan? This sounds like the best option and I would like to implement a similar practice. Thanks for your reply.

    I removed the Rebuild Indexes portion from the SQL Maintenance plan and created a separate SQL Job (weekly) that handles Rebuilding Indexes. The job has multiple steps that include taking the before and after backups, changing the recovery mode from FULL to SIMPLE (and vice versa) and rebuilding the indexes (while in SIMPLE Mode).
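
    In rough outline, the job steps would look something like the sketch below (the database name, table name, and backup paths are placeholders for illustration, not the actual job):

        -- Step 1: full backup before the switch, so there is a known base to restore from
        BACKUP DATABASE MyDB TO DISK = N'X:\Backups\MyDB_PreRebuild.bak' WITH INIT;

        -- Step 2: switch to SIMPLE so the rebuild transactions do not accumulate in the LDF
        ALTER DATABASE MyDB SET RECOVERY SIMPLE;

        -- Step 3: rebuild the indexes (in practice one statement per table, or a loop over sys.indexes)
        ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;

        -- Step 4: switch back to FULL
        ALTER DATABASE MyDB SET RECOVERY FULL;

        -- Step 5: full backup after the switch to restart the log backup chain
        BACKUP DATABASE MyDB TO DISK = N'X:\Backups\MyDB_PostRebuild.bak' WITH INIT;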

  • One more question: how did you extract the SQL from the maintenance plan? When I click the Show T-SQL command I just see a message that states "generating T-SQL script, please wait".

  • This article helps to understand the basics of the T-log.

  • Hi James,

    It was a very nice article.

    I just have a small doubt. If we are taking periodic backups of the database, do we still need to allow the transaction logs to grow?

    Secondly, after taking the backup, should we clear the log files and allow the log files to record events only between backup schedules? This could save a lot of space on the DB server.

    If we do need to allow the log files to grow, to what extent should that be allowed, given that there is a limit to the space available on the server?

    I have to say I exclusively work with the simple recovery model myself. Over time I learned that for those databases it is best to have a fixed-size, but **very** large, log file. Before, I always incremented log files automatically by a fixed number of MB. The problem with that is that extremely large updates can fill up the disks unexpectedly. Recovery would then take hours as a consequence and generate massive amounts of I/O. This brings the server to a standstill, and other databases can also get into trouble if they need more data or log space themselves!

    By limiting the log size I can control how badly things can spin out of hand and thus put a limit on how obese update statements are allowed to be. The logs are still **many** times larger than normal use would need. But now, if something spins out of control due to a bug or a careless ad hoc statement, there is a limit, and recovery can happen fast without hindering other databases on the same server.

    Whether this solution works for you depends: really critical databases are maintained differently, using the full recovery model with many transaction log backups a day. But for most small businesses without a dedicated DBA, relying on daily automated backups, it makes more sense to use the simple model with a single large log file.
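
    For what it's worth, setting up such a fixed-size log can be done with something like this sketch (the database name, logical log file name, and size are placeholders):

        -- Use the simple recovery model and cap the log at a fixed, generous size with autogrowth
        -- disabled, so a runaway update fails with a full-log error instead of filling the disk
        ALTER DATABASE MyDB SET RECOVERY SIMPLE;

        ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_log, SIZE = 20GB, MAXSIZE = 20GB, FILEGROWTH = 0);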

  • Alberto De Rossi-378909 (7/16/2009)


    I'm trying to autogrow a log file. I created a DB with one table with a couple of char(8000) fields and inserted many, many rows, but the Registry never gets to 100%. Using DBCC SQLPERF(LOGSPACE), the fullest I saw it was 87%, and then the percentage used decreases, and so on. I don't get it.

    The default recovery model for a new database is FULL (though you can change this by modifying the model database). However, until you take a full database backup, the transaction log stays in auto-truncate mode - just as if the recovery model were SIMPLE. SQL Server does this because without a base for a restore there is no point keeping log records. Note also that you cannot backup the log before a full backup has been performed.

    When in auto-truncate mode, the inactive portion of the log is cleared whenever a CHECKPOINT is triggered and completed. One of the ways a CHECKPOINT can be triggered (in this scenario) is if the transaction log reaches 70% full. This explains why you never see the log fill up. The solution is to take a FULL database backup, and then run your tests.
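
    To see the effect, a quick test along these lines should do it (a sketch; the database name and backup paths are made up):

        -- Before the first full backup the log auto-truncates at checkpoints,
        -- so the "Log Space Used (%)" column keeps dropping back down
        DBCC SQLPERF(LOGSPACE);

        -- Take the first full backup; from here the FULL recovery model takes effect properly
        BACKUP DATABASE TestDB TO DISK = N'X:\Backups\TestDB_Full.bak' WITH INIT;

        -- Repeat the inserts and check again: the percentage now keeps climbing
        -- until a log backup is taken
        DBCC SQLPERF(LOGSPACE);
        BACKUP LOG TestDB TO DISK = N'X:\Backups\TestDB_Log.trn';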

    I'm not sure why you mention the registry in your post - transaction logging has nothing to do with the Windows registry.

  • Kevin Landry (3/29/2010)


    Most of the issues that I see on Transaction Log File growth are due to the recording of index rebuilds. Meaning, with the FULL recovery model, whenever there is an index rebuild, those rebuild transactions are recorded. Depending on how many indexes are being rebuilt, this can be millions of transactions. Recording these transactions is completely unnecessary IMO as no one would ever restore to a point in the middle of an index rebuild. Also, one can simply rebuild an index at any time if necessary.

    Our LDF file used to be 2 to 3 times the size of our DB which is 50GB. Since I am not a fan of the endless Shrink/Grow cycle that I see people using (causes disk fragmentation and doesn't really solve any disk space dilemmas), I have since changed my index rebuild schedule and now also switch to SIMPLE recovery mode prior to Index Rebuilds to prevent these rebuild transactions from being recorded in the LDF file. I then switch back to FULL recovery mode following the rebuild (with backups before and after the switch to keep the restore chain intact).

    Since doing this, our LDF never grows beyond 500mb (yes, that's MB not GB). I have not run into any issues since incorporating this method.

    Does anyone see any potential issues with this strategy?

    Most people choose to switch to the BULK_LOGGED recovery model in order to minimally log qualifying index maintenance operations. The log impact of an index rebuild under the BULK_LOGGED model is the same as under SIMPLE. There are full details on minimal logging and index operations in Books Online.

    It is best to back up the log just before switching to BULK_LOGGED and just after switching back to FULL recovery, since point-in-time restore is not currently supported between those points.

    Switching to SIMPLE and back is almost never necessary (and is potentially dangerous). You must take a full or differential database backup to re-establish a recovery sequence, and your database is at risk until that completes.
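
    For comparison, the BULK_LOGGED version of the rebuild window would look roughly like this (a sketch; the database name, table name, and paths are placeholders):

        -- Log backup just before the switch: point-in-time restore is available up to here
        BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_PreRebuild.trn';

        ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

        -- Qualifying rebuilds are minimally logged under BULK_LOGGED
        ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;

        ALTER DATABASE MyDB SET RECOVERY FULL;

        -- Log backup just after switching back: the log chain stays intact,
        -- but point-in-time restore is not possible inside the bulk-logged window
        BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_PostRebuild.trn';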

  • Paul White NZ (5/28/2010)


    Most people choose to switch to the BULK_LOGGED recovery model in order to minimally log qualifying index maintenance operations. The log impact of an index rebuild under the BULK_LOGGED model is the same as under SIMPLE. There are full details on minimal logging and index operations in Books Online.

    It is best to back up the log just before switching to BULK_LOGGED and just after switching back to FULL recovery, since point-in-time restore is not currently supported between those points.

    Switching to SIMPLE and back is almost never necessary (and is potentially dangerous). You must take a full or differential database backup to re-establish a recovery sequence, and your database is at risk until that completes.

    Hi Paul,

    I have also considered the Bulk-Logged option, but I do not see how switching to Simple is any more dangerous than using the Bulk-Logged option.

    Switching to ANY other recovery model (including Bulk) from FULL breaks the point-in-time recovery sequence. So using Bulk and using Simple are equally "dangerous" in this switching scenario, as both require backups before and after the switch.

  • There is one more option that you could add to your article for special circumstances. Although the simple recovery model is not generally recommended (as discussed), it can be useful for data warehouses. In our application, we separate the fact and summary tables into a separate database and maintain an archive of the incoming data files. The bulk imports are not an issue but the summarizations were putting an unnecessarily high volume of data in the transaction log.

  • James,

    There are a few things from the article I'd like to comment on directly:

    How about the times you need to grow your log? I did a recent search and had trouble finding a clear cut discussion or article about this.

    The definitive sources for transaction log information are the blogs by Paul Randal and Kimberly Tripp: http://www.sqlskills.com/BLOGS/PAUL/category/Transaction-Log.aspx and http://www.sqlskills.com/BLOGS/KIMBERLY/category/Transaction-Log.aspx

    Another reason is ability to recover your database; if the log is full you can't recover from a database failure. The Database Engine will mark the Database as RESOURCE PENDING.

    I think you mean RECOVERY_PENDING here.

    Now, while it is just possible that automatic recovery might not be able to run on a database with a completely full transaction log in some extreme circumstances, this is not usually the case since SQL Server conservatively reserves space in the log to allow for extra space needed during recovery.

    In any case, it is hard to see how you justify saying "you can't recover from a database failure". It is normally possible to arrange things so that recovery can continue. Even if the database is physically damaged, we have our backups, and it is most often possible to back up the tail of the log to achieve full recovery to the point of failure.
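
    For example, even if the data files are lost, the tail of the log can usually still be captured and then replayed, along these lines (a sketch; the database name and paths are placeholders):

        -- Back up the tail of the log even though the data files are damaged or missing
        BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_Tail.trn' WITH NO_TRUNCATE;

        -- Restore the last full backup and subsequent log backups, finishing with the tail,
        -- to recover right up to the point of failure
        RESTORE DATABASE MyDB FROM DISK = N'X:\Backups\MyDB_Full.bak' WITH NORECOVERY;
        RESTORE LOG MyDB FROM DISK = N'X:\Backups\MyDB_Log1.trn' WITH NORECOVERY;
        RESTORE LOG MyDB FROM DISK = N'X:\Backups\MyDB_Tail.trn' WITH RECOVERY;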

    Paul
