Does running transaction log backup actually truncate & shrink log size?

  • I'm having some problems managing my SQL Server transaction logs; they tend to grow very large and eat up all the available space. I have a maintenance plan that backs up the transaction logs every 2 hours, but it hasn't reduced the size of the .ldf file. Do I need to set anything specific in my maintenance plan for it to reduce the size of the logs after the backup? The database is running in the FULL recovery model.

  • No, don't shrink your logs after backing up. They'll only grow again, causing fragmentation and unnecessary use of resources. Increasing the frequency of log backups may reduce the size that your transaction log file needs to be. Type "Managing Transaction Logs" into your favourite search engine for more details.
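
    For reference, a minimal sketch of such a log backup run manually rather than via a maintenance plan (the database name and backup path here are placeholders, not real names - substitute your own):

    -- Back up the transaction log; this marks inactive log space as reusable.
    -- 'YourDatabase' and the disk path are placeholders.
    BACKUP LOG YourDatabase
    TO DISK = N'X:\Backups\YourDatabase_log.trn'
    WITH COMPRESSION, CHECKSUM;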

    John

  • Taking a transaction log backup will truncate the transaction log (i.e. make it available for re-use) but it won't shrink it. 

    If your log is growing too quickly, increase the frequency of your transaction log backups (maybe to every 10 minutes?). I would do a one-off shrink (of the transaction log, not the data file) before doing that.
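
    A sketch of what that one-off shrink might look like, assuming a database named YourDatabase (a placeholder - the logical log file name varies per database, so check it first):

    -- Placeholder database name; look up the log file's logical name and size.
    USE YourDatabase;
    SELECT name, size * 8 / 1024 AS size_mb
    FROM sys.database_files
    WHERE type_desc = 'LOG';

    -- One-off shrink of the log file only (not the data file).
    -- 'YourDatabase_log' is the placeholder logical name; 1024 is a target size in MB.
    DBCC SHRINKFILE (N'YourDatabase_log', 1024);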

  • Beatrix Kiddo - Monday, January 28, 2019 2:42 AM

    Taking a transaction log backup will truncate the transaction log (i.e. make it available for re-use) but it won't shrink it. 

    If your log is growing too quickly, increase the frequency of your transaction log backups (maybe to every 10 minutes?). I would do a one-off shrink (of the transaction log, not the data file) before doing that.

    Just please note her language: "a one-off shrink". Shrinking the logs over and over, and growing them over and over, is a horrific way to manage your systems. Set the logs to an appropriate size and time the log backups so that they protect the system (you do have your Recovery Point Objectives defined with the business, right?) and truncate the log internally, keeping the logs at the appropriate size.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks all for the input.
    So, from what I understand, there's no need to routinely shrink the .ldf file, correct?
    I should instead increase the frequency of the transaction log backups to keep the log drive from filling up, right?
    And the log also needs to be sized appropriately, right?

  • Yep. You've got it.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey - Monday, January 28, 2019 6:49 AM

    Yep. You've got it.

    Awesome, let me do some tweaking to my backup frequency.

  • Remember that your system generates a certain amount of log per unit of time. You might see something like:

    8am-5pm - 5GB log space
    5pm-7pm - 10MB log space
    7pm-8pm - 10GB log space for data load
    8pm-8am - 50MB log space

    What size log do I need? It depends on log backups. If I am backing up every hour, I'd still need a 10GB log for the 7p-8p time frame. At other times, I might never need more than 1GB during any hour. If I could run log backups every 15 minutes, and there was a constant load during the evening, I might be able to get by with a 3GB log.

    Look at your log backup sizes and then you'll have an idea of workload at different times. Your log should be sized for the peak workload between log backups. If you make more frequent backups, you might have a smaller log file, but it has to handle the load and have a pad in case the load changes slightly.
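
    One way to look at those log backup sizes is the standard msdb backup history tables. A sketch (type = 'L' filters to log backups; backup_size is in bytes):

    -- Size of each transaction log backup over the last week, newest first.
    SELECT bs.database_name, bs.backup_start_date,
      bs.backup_size / 1048576.0 AS backup_size_mb
    FROM msdb.dbo.backupset bs
    WHERE bs.type = 'L'
      AND bs.backup_start_date > DATEADD(DAY, -7, GETDATE())
    ORDER BY bs.backup_start_date DESC;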

  • If the log file is still actively growing, you may be able to figure out what is causing it to grow by looking at the default trace.  Here's a query to quickly take a peek at recent log autogrowths:
    -- Log autogrowths from the default trace (EventClass 93 = Log File Auto Grow).
    -- IntegerData is the growth in 8KB pages; Duration is reported in microseconds.
    SELECT td.DatabaseName, td.Filename, te.name AS Event, (td.IntegerData * 8) / 1024 AS Change_MB, td.StartTime, (td.Duration / 1000) AS Duration_ms,
      td.LoginName, td.HostName, td.ApplicationName, td.spid, td.ClientProcessID, td.IsSystem, td.SqlHandle, td.TextData
    FROM sys.traces t
      CROSS APPLY sys.fn_trace_gettable(t.path, DEFAULT) td
      INNER JOIN sys.trace_events te ON td.EventClass = te.trace_event_id
    WHERE t.is_default = 1
      AND td.EventClass = 93
    ORDER BY td.StartTime DESC;
