Transaction Log File size not reducing after transaction log backups

  • Hi all,

    I've seen this topic discussed a million times in various blog posts and forum posts, but I still can't figure out why it's happening.

    Here is what my backup plan is currently configured as, for my server:

    Weekly on Sunday night, at 9PM: Full backup of all databases

    Daily on all nights except Sunday, at 9PM: Differential ("incremental") backup of all databases with recovery model = FULL

    Every four hours: Transaction log backup of all databases with recovery model = FULL

    In terms of how the server is being accessed, every day between 3AM and 7AM, there is a significant amount of database activity, both in terms of INSERT/UPDATE as well as SELECT statements. After that, intermittently throughout the day, there are SELECT statements being executed, however, not much in terms of INSERT/UPDATE.

    My database data files are currently sitting at 74 GB and 25 GB, while the log files are at 46 GB and 17 GB respectively.

    The transaction log backups are less than 20 MB throughout the day, with the exception of the backup occurring at 8 AM, which is between 500 MB and 1000 MB.

    Why does the transaction log file stay that size? If I run a script similar to the one below:

    USE [DB NAME]
    GO
    DBCC SHRINKFILE([LOG FILE NAME], 1)
    BACKUP LOG [DB NAME] WITH TRUNCATE_ONLY
    DBCC SHRINKFILE([LOG FILE NAME], 1)
    GO

    The file reduces back to the minimum size, after which I can take a full backup to restart the log chain from that point onwards - but then, if I leave it for a while, the log file grows steadily until I do this operation again.

    The way I understand things, the transaction log backups should return the transaction log size back to a relatively low level, so why does this keep happening to my databases?

  • Backing up the log frees up space within the log for re-use, but it doesn't shrink a log file that has grown to accommodate transaction activity. If the log keeps growing back to a larger size, it's because it needs that much space on a regular basis. Instead of shrinking it, you should probably leave it at the size it needs so it won't have to take the time and trouble to grow when needed.
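    To make that distinction visible, you can compare the log file's size on disk with how much of it is actually in use. This is a minimal check, not specific to any poster's environment; `sys.dm_db_log_space_usage` requires a reasonably recent version of SQL Server:

    ```sql
    -- Per-database log size and the percentage currently in use.
    -- A log backup lowers "Log Space Used (%)" but never shrinks the file itself.
    DBCC SQLPERF(LOGSPACE);

    -- On newer versions (SQL Server 2012+), the same figures for the current database:
    SELECT total_log_size_in_bytes / 1048576 AS total_log_mb,
           used_log_space_in_bytes  / 1048576 AS used_log_mb,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;
    ```

    If used space drops after each log backup while the file size stays constant, the backups are working as designed; the file is simply sized for peak activity.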


    And then again, I might be wrong ...
    David Webb

  • Database files never reduce in size automatically.

    Don't run that backup with truncate only. That's just a log backup that doesn't write a backup file - no special magic in it. It does, however, break your log chain and reduce your ability to recover.

    Don't shrink logs to 1. Hell, make that don't shrink logs without a good reason. If your log file needs to be a particular size for regular operation, leave it that size. Shrink/grow/shrink/grow just wastes time.
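    If the concern is that log space isn't being reused between backups at all, SQL Server will tell you why. A quick sketch (the column values such as LOG_BACKUP or ACTIVE_TRANSACTION are the standard documented reasons):

    ```sql
    -- log_reuse_wait_desc names whatever is preventing log space from being
    -- reused: NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, etc.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;
    ```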

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • kramaswamy,

    I would urge you to go to http://www.sqlskills.com and search for "transaction logs". Anything you read there will help you, but there are two specific articles by Kimberly Tripp that you should look for: one on transaction log throughput and one on VLFs. Managing your transaction log well is very important.
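    On the VLF point: a log that has grown in many small autogrowth increments ends up with thousands of virtual log files, which hurts performance. You can count them like this (`sys.dm_db_log_info` is only available on SQL Server 2016 SP2 and later; older versions use the undocumented but long-standing `DBCC LOGINFO`):

    ```sql
    -- One row per virtual log file (VLF) in the current database's log;
    -- thousands of rows usually means the log grew in many small increments.
    DBCC LOGINFO;

    -- SQL Server 2016 SP2+ exposes the same information as a DMF:
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID());
    ```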

  • GilaMonster - Thursday, December 4, 2014 1:41 AM

    Database files never reduce in size automatically.

    Don't run that backup with truncate only. That's just a log backup which doesn't write the backup file, no special magic in it. However it breaks your log chain and reduces your ability to recover.

    Don't shrink logs to 1. Hell, make that don't shrink logs without a good reason. If your log file needs to be a particular size for regular operation, leave it that size. Shrink/grow/shrink/grow just wastes time.

    So I have a number of databases where the log file grows after a report is run every month, and the logs are backed up every 5 minutes. I can accept that they need a certain amount of space. What I find difficult to understand is why the logs would grow to twice that size the next month, and three times the size the following month, completely out of proportion with the change in data volume. If I then shrink those log files (and yes, I know I shouldn't, but I have to justify the additional storage), they stay small for a period of time, and when the next report is run they grow back to the size I first accepted as a result of reporting. After that, they grow month after month as I described above.

  • Note, this is a 5 year old thread.  To help troubleshoot why a transaction log file is growing, you might be able to use the default trace using a query like this:
    SELECT td.DatabaseName, td.Filename, te.name AS Event, (IntegerData*8)/1024 AS Change_MB, td.StartTime, (Duration/1000) AS Duration_sec,
      td.LoginName, td.HostName, td.ApplicationName, td.spid, td.ClientProcessID, td.IsSystem, td.SqlHandle, td.TextData
    FROM sys.traces t
      CROSS APPLY ::fn_trace_gettable(t.path, default) td
      INNER JOIN sys.trace_events te ON td.EventClass = te.trace_event_id
    WHERE t.is_default = 1
      AND td.EventClass IN (92,93) --data and log autogrowths
    ORDER BY td.StartTime DESC;

    Here are a couple of articles on the subject:
    https://blogs.msdn.microsoft.com/sqlblog/2006/09/18/why-is-my-transaction-log-file-growing-rapidly-how-do-i-stop-it-from-eating-up-all-my-disk-space/
    https://sqlity.net/en/1805/eight-reasons-transaction-log-files-keep-growing/

  • Chris Harshman - Wednesday, January 16, 2019 9:36 AM

    Note, this is a 5 year old thread.  To help troubleshoot why a transaction log file is growing, you might be able to use the default trace using a query like this:
    SELECT td.DatabaseName, td.Filename, te.name AS Event, (IntegerData*8)/1024 AS Change_MB, td.StartTime, (Duration/1000) AS Duration_sec,
      td.LoginName, td.HostName, td.ApplicationName, td.spid, td.ClientProcessID, td.IsSystem, td.SqlHandle, td.TextData
    FROM sys.traces t
      CROSS APPLY ::fn_trace_gettable(t.path, default) td
      INNER JOIN sys.trace_events te ON td.EventClass = te.trace_event_id
    WHERE t.is_default = 1
      AND td.EventClass IN (92,93) --data and log autogrowths
    ORDER BY td.StartTime DESC;

    Here are a couple of articles on the subject:
    https://blogs.msdn.microsoft.com/sqlblog/2006/09/18/why-is-my-transaction-log-file-growing-rapidly-how-do-i-stop-it-from-eating-up-all-my-disk-space/
    https://sqlity.net/en/1805/eight-reasons-transaction-log-files-keep-growing/

    In addition, I would look at the autogrowth settings on the transaction log files.  That could explain the growth issues as well.
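    As a sketch of that check and fix - `[DB NAME]` and `[LOG FILE NAME]` are placeholders to substitute, and the 512 MB increment is just an example value:

    ```sql
    -- Inspect current size and autogrowth settings for all log files.
    -- Percentage growth compounds as the file grows; a fixed increment is predictable.
    SELECT name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + '%'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.master_files
    WHERE type_desc = 'LOG';

    -- Switch a log file from percentage growth to a fixed increment:
    ALTER DATABASE [DB NAME]
    MODIFY FILE (NAME = [LOG FILE NAME], FILEGROWTH = 512MB);
    ```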
