Transaction Log file size running away

  • I am currently working on a system where other people have set up database backups and transaction file backups.

    Now the problem is that the transaction log files are just running amok. I am seeing hundreds of gigabytes of space being used for transaction log files.

    Sooner or later I am going to have to come up with a strategy for this. There are several SQL Servers involved.

    1. The data warehouse has simple recovery mode, so I don't believe this will be an issue, as the database is backed up weekly before a load is performed.

    2. The transactional systems have full recovery mode and there is no attempt to shrink or truncate the file.

    I have read many an article. If you were to set up a SQL Server database in full recovery mode, how would you control the growth of the transaction log file? Ours never seem to get smaller; they just grow until the disk is full!

    All help gratefully received.

    Ells.

  • Hi Ells,

    You need to consider what is happening on the system before you build up the backup strategy.

    You say there is a data warehouse running in simple recovery mode; that should not have large/continuous log file growth, depending on what exactly is happening there. The log file may well grow when data is being transformed, even in simple mode, although this should happen far less than in full recovery, as the transactions are committed to the data file (almost) instantly.

    The databases in full recovery mode are probably the "problem" here. What you need to realise is that the log file has grown to a large size for a reason. Either the tran-log was not backed up for a long time at some point in the past and has since never been shrunk, or you have large amounts of data being changed/created between backups.

    The first step would be to review the current backup timing and see if the backups can be performed more often. You could then take a look at the data being created. What is a "normal" amount of data for a set period of time? Use this as a factor when resizing the tran-log.
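    As a starting point, for example, you can check how big each log currently is and how much of it is actually in use (both commands below are standard; no names to change):

    -- One row per database: log size in MB and the percentage of the log currently in use
    DBCC SQLPERF(LOGSPACE)

    -- In the database itself: logical file names and current sizes (size is reported in 8 KB pages)
    EXEC sp_helpfile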

    Please read this blog post from Kimberly Tripp: http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx She knows what she is talking about!! Her entire blog is chock-full of brilliant information.

    That should help you out for starters. If I can help any further let me know.

    Regards

    GermanDBA


  • Gail explained the basics very well at http://www.sqlservercentral.com/articles/64582/

    By the way, you need to take log backups to clear out the content of the log files!

    If you don't, they will keep growing until the disk is full.
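    The log backup itself is a one-liner, for example (database name and backup path are placeholders for your own):

    -- Back up the transaction log; schedule this regularly (e.g. every 15-30 minutes)
    BACKUP LOG YourDatabase
    TO DISK = N'X:\SQLBackups\YourDatabase_log.trn'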

    You can use SQL Server events to monitor file growth.

    File growth is also recorded in the default trace of SQL 2005.
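    For example, something along these lines reads those autogrow events back out of the default trace (EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow); it is only a sketch, adjust as needed:

    -- Locate the current default trace file and list the logged autogrow events
    DECLARE @TracePath NVARCHAR(260)
    SELECT @TracePath = path FROM sys.traces WHERE is_default = 1

    SELECT StartTime, DatabaseName, [FileName], Duration
    FROM sys.fn_trace_gettable(@TracePath, DEFAULT)
    WHERE EventClass IN (92, 93)   -- Data File Auto Grow, Log File Auto Grow
    ORDER BY StartTime DESC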

    Johan


  • Many Thanks,

    I have read Gail's post and very informative it is. I will look at the other blog. I have a feeling that there are no transaction log backups being taken. I am just trying to understand the link between log and database backups.

    My initial thought is that after a successful database backup there should be a log backup with truncate only. Now I am not sure if this will release space in the log file (make it smaller), or do I need to finish it off with a shrink?

    Many Thanks.

    Ells.

  • Hi Ells,

    In full recovery mode the changes that are made to a database are stored in the transaction log file(s) and are not committed to the database file until you run a transaction log backup or a full backup of the database. If you are not currently doing transaction log backups, the transaction log can and will fill up between the full backups.

    If your system can handle the data loss possibilities of simple recovery mode, you should take this option to reduce your log file woes. I would suggest that you read up on and understand the recovery modes and see if full recovery mode is needed or not, because at the moment your system is in full recovery mode but is being backed up in the same way as a system using simple recovery. This is causing overhead which may not be necessary for you.
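    If, after reading up, simple recovery does turn out to be acceptable, the switch itself is simple; a rough sketch (database name and path are placeholders):

    -- Switch the recovery model, then take a full backup as a clean starting point
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE
    BACKUP DATABASE YourDatabase
    TO DISK = N'X:\SQLBackups\YourDatabase_full.bak'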

    regards

    GermanDBA


  • Ells (1/5/2009)


    Many Thanks,

    I have read Gail's post and very informative it is. I will look at the other blog. I have a feeling that there are no transaction log backups being taken. I am just trying to understand the link between log and database backups.

    My initial thought is that after a successful database backup there should be a log backup with truncate only. Now I am not sure if this will release space in the log file (make it smaller), or do I need to finish it off with a shrink?

    Many Thanks.

    Ells.

    If you want to support point-in-time recoveries, you need to take log backups!

    Don't use "truncate only", because that will make your log unusable for point-in-time restores!

    SQL Server does not shrink the log file during log backups. It just releases log entries, and so creates available space within the file.

    So you'll have to shrink the log file yourself at a well-planned moment.

    (You may even have to repeat this script, because space can only be released from the end of the file.)

    THIS SCRIPT IS NOT INTENDED FOR SCHEDULED USAGE !! READ BOL and the URLs !!

    -- Shrink_TrxLog.SQL
    -- INF: How to Shrink the SQL Server 7.0 Transaction Log
    -- SQL7    http://support.microsoft.com/support/kb/articles/q256/6/50.asp?id=256650&SD
    -- SQL7    http://www.support.microsoft.com/kb/256650
    -- SQL2000 http://support.microsoft.com/kb/272318/en-us
    -- SQL2005 http://support.microsoft.com/kb/907511/en-us
    -- select db_name()
    -- select * from sysfiles
    -- THIS SCRIPT IS NOT INTENDED FOR SCHEDULED USAGE !! READ BOL and the urls !!

    SET NOCOUNT ON

    DECLARE @LogicalFileName sysname,
            @MaxMinutes INT,
            @NewSize INT

    -- *** MAKE SURE TO CHANGE THE NEXT 3 LINES WITH YOUR CRITERIA. ***
    SELECT @LogicalFileName = 'logicalname',  -- Use sp_helpfile to identify the logical file name that you want to shrink.
           @MaxMinutes = 10,                  -- Limit on time allowed to wrap log.
           @NewSize = 100                     -- in MB

    -- Setup / initialize
    DECLARE @OriginalSize int

    SELECT @OriginalSize = size  -- in 8K pages
    FROM sysfiles
    WHERE name = @LogicalFileName

    SELECT 'Original Size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30), @OriginalSize) + ' 8K pages or ' +
           CONVERT(VARCHAR(30), (@OriginalSize * 8 / 1024)) + 'MB'
    FROM sysfiles
    WHERE name = @LogicalFileName

    CREATE TABLE DummyTrans
        (DummyColumn char(8000) not null)

    -- Wrap log and truncate it.
    DECLARE @Counter INT,
            @StartTime DATETIME,
            @TruncLog VARCHAR(255)

    SELECT @StartTime = GETDATE(),
           @TruncLog = 'BACKUP LOG ' + db_name() + ' WITH TRUNCATE_ONLY'

    -- Try an initial shrink.
    DBCC SHRINKFILE (@LogicalFileName, @NewSize)
    EXEC (@TruncLog)

    -- Wrap the log if necessary.
    WHILE @MaxMinutes > DATEDIFF(mi, @StartTime, GETDATE())  -- time has not expired
      AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)  -- the log has not shrunk
      AND (@OriginalSize * 8 / 1024) > @NewSize  -- The value passed in for new size is smaller than the current size.
    BEGIN  -- Outer loop.
        SELECT @Counter = 0
        WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
        BEGIN  -- update
            INSERT DummyTrans VALUES ('Fill Log')  -- Because it is a char field it inserts 8000 bytes.
            DELETE DummyTrans
            SELECT @Counter = @Counter + 1
        END  -- update
        EXEC (@TruncLog)  -- See if a trunc of the log shrinks it.
    END  -- outer loop

    SELECT 'Final Size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30), size) + ' 8K pages or ' +
           CONVERT(VARCHAR(30), (size * 8 / 1024)) + 'MB'
    FROM sysfiles
    WHERE name = @LogicalFileName

    DROP TABLE DummyTrans

    PRINT '*** Perform a full database backup ***'

    SET NOCOUNT OFF

    Johan


  • GermanDBA (1/5/2009)


    in full recovery mode the changes that are made to a database are stored in the transaction log file(s) and are not committed to the database file until you run a transaction log backup or a full backup of the database.

    Absolutely not true.

    In all recovery models, changes are first made to the transaction log (on disk) and to the data pages in memory. Once the records are hardened in the log, the modification is considered complete. At a later time, the checkpoint process will run and will write all dirty data pages to disk. That will happen seconds or minutes after the transaction has completed.

    Once the checkpoint process has written the data pages for a particular transaction to disk, the log records associated with that transaction are considered inactive (not required for database recovery). It's only at this point that things change depending on the recovery model.

    In full recovery, those inactive log records remain in the transaction log until a log backup is run.

    In simple recovery, the inactive log records are discarded as part of the checkpoint process.
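    On SQL 2005 you can check which of these conditions is currently holding up log reuse, for example (database name is a placeholder):

    -- LOG_BACKUP here means the log is waiting for a log backup before its space can be reused
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'YourDatabase'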

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Ells (1/5/2009)


    My initial thought is that after a successful database backup there should be a log backup with truncate only. Now I am not sure if this will release space in the log file (make it smaller)

    Truncate will discard log records and make the space available for reuse. If you want the log file smaller (as a once-off operation), shrink the file.

    Do the truncate before the full backup, not afterwards. The full backup will restart the log chain.
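    As a once-off on SQL 2000/2005 that could look something like this (database name, logical log file name, target size and path are all placeholders; use sp_helpfile to find the real log file name):

    -- Once-off clean-up, not for scheduling
    BACKUP LOG YourDatabase WITH TRUNCATE_ONLY      -- discard the inactive log records (breaks the log chain)
    DBCC SHRINKFILE (YourDatabase_log, 1024)        -- shrink the log file, target size in MB
    BACKUP DATABASE YourDatabase                    -- full backup to restart the log chain
        TO DISK = N'X:\SQLBackups\YourDatabase_full.bak'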

    That will fix the current situation, but if you don't change something, things will be right back to where they are now in a few weeks.

    Do you need point-in-time recovery of that database in the case of a failure, or is restoring to the last full backup acceptable?

    Gail Shaw
  • Many Thanks.

    I am keeping an eye on the size of the logs, and now that I understand my options I will be looking at what we should do.

    Thanks,

    Mark.

  • Ells (1/5/2009)


    1. The data warehouse has simple recovery mode, so I don't believe this will be an issue, as the database is backed up weekly before a load is performed.

    under simple recovery the transaction log is automatically truncated at backup time. If you're only backing up once a week then that could be your issue. Make regular backups even in simple recovery

    from BOL

    Under the simple recovery model, the transaction log is automatically truncated after each data backup, that is, the inactive log is dropped. Regular log truncation eliminates transaction log backups. This simplifies backup and restore. However, without transaction log backups, recovering to the point of failure is not possible.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (1/6/2009)

    under simple recovery the transaction log is automatically truncated at backup time. If you're only backing up once a week then that could be your issue. Make regular backups even in simple recovery

    from BOL

    Under the simple recovery model, the transaction log is automatically truncated after each data backup, that is, the inactive log is dropped. Regular log truncation eliminates transaction log backups. This simplifies backup and restore. However, without transaction log backups, recovering to the point of failure is not possible.

    Perry, I am unsure what version of BOL you got that from, but it can't be correct! As I understand it, when a database is set to the simple recovery model, the transaction log is truncated after each checkpoint. This is why it was called "Truncate Log on Checkpoint" in SQL 6.5 and, I believe, 7.0.

    I found this in BOL 2008 and this is how I have always understood it to work:

    http://msdn.microsoft.com/en-us/library/ms189275.aspx

    The simple recovery model minimizes administrative overhead for the transaction log, because the transaction log is not backed up. The simple recovery model risks significant work-loss exposure if the database is damaged. Data is recoverable only to the most recent backup of the lost data. Therefore, under the simple recovery model, the backup intervals should be short enough to prevent the loss of significant amounts of data. However, the intervals should be long enough to keep the backup overhead from affecting production work. Including differential backups in the backup strategy can help reduce the overhead.

    Generally, for a user database, the simple recovery model is useful for test and development databases or for databases containing mostly read-only data, such as a data warehouse. The simple recovery model is inappropriate for production systems where loss of recent changes is unacceptable. In such cases, we recommend using the full recovery model.

    Having said that, Ells, it is imperative that you back up early and often, and at a number of different levels. Gail is correct that you need to figure out what is happening here, but in any case your backups should be running with increased frequency until you have determined the cause of the issue. I'm thinking job security here!
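    To give a rough idea of those "different levels", something like the following (all names, paths and frequencies are placeholders to adjust for your environment):

    -- Full backup, e.g. nightly
    BACKUP DATABASE YourDatabase TO DISK = N'X:\SQLBackups\YourDatabase_full.bak'

    -- Differential backup, e.g. every few hours (only the changes since the last full backup)
    BACKUP DATABASE YourDatabase TO DISK = N'X:\SQLBackups\YourDatabase_diff.bak' WITH DIFFERENTIAL

    -- Log backup, e.g. every 15-30 minutes (full or bulk-logged recovery only)
    BACKUP LOG YourDatabase TO DISK = N'X:\SQLBackups\YourDatabase_log.trn'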

    Regards, Irish 

    BOL SQL 2005, Dec 2008 is the version.

    As I understood it, the recovery interval option will make regular log truncations by issuing checkpoints, but in severe cases where transactions overrun the checkpoint routine it is still possible for the log to fill up. Indeed, if the end portion of the log is active, it still will not truncate past that.

    Once the log fills, truncate on checkpoint can no longer occur. Simple recovery will automatically clear the log at each backup, but again not past active portions!

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (1/7/2009)


    BOL sql2005 Dec2008 is the version

    As I understood it, the recovery interval option will make regular log truncations by issuing checkpoints, but in severe cases where transactions overrun the checkpoint routine it is still possible for the log to fill up. Indeed, if the end portion of the log is active, it still will not truncate past that.

    Once the log fills, truncate on checkpoint can no longer occur. Simple recovery will automatically clear the log at each backup!

    topic "Backup Under the Simple Recovery Model " also states:

    Note:

    Under the simple recovery model, the transaction log is automatically truncated to remove any inactive virtual log files. Truncation usually occurs after each checkpoint but can be delayed under some conditions. For more information, see Transaction Log Truncation.

    So the earliest point of clearing log entries is at system checkpoint time.

    (i.e. the way most of us understood it to work)

    Johan


  • ALZDBA (1/7/2009)


    Truncation usually occurs after each checkpoint but can be delayed under some conditions

    This is the key here: it should, and generally does, but there can be instances when it doesn't happen.

    The advice also states

    Therefore, under the simple recovery model, the backup intervals should be short enough to prevent the loss of significant amounts of data. However, the intervals should be long enough to keep the backup overhead from affecting production work. Including differential backups in the backup strategy can help reduce the overhead

    as a backup also triggers the log truncation. Truncation occurs at a few different points (SQL Server shutdown, for example).

    Am I reading too much into it?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Okay, I agree that a full database backup will also perform a truncation of the log. I did not intend to indicate that it does not.

    The only time I would use the simple recovery model is when the database is read-only, static, or developmental and the business model states that development transactions do not need to be tracked. In pretty much any other case I would use the full recovery model with backups occurring on a scheduled basis. The backup schedule is dependent on the volatility of the data. Even if the database is read-only or unchanging, I would take a full backup at least daily and keep multiple copies in different locations. (I have been burned before.)

    I find in most cases it is better to err on the side of caution. That includes checking to ensure that the scheduled backups completed in a timely manner, whether I have notifications set up or not.
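    One way of doing that check, for example, is to read the backup history out of msdb:

    -- Most recent backup of each type per database (D = full, I = differential, L = log)
    SELECT database_name, [type], MAX(backup_finish_date) AS last_backup_finished
    FROM msdb.dbo.backupset
    GROUP BY database_name, [type]
    ORDER BY database_name, [type]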

    Regards, Irish 
