Dealing with Transaction Log Growth

  • You shouldn't shrink the log unless the growth was unusual.

    If you do want to shrink the log to avoid excessive VLFs (virtual log files), then after shrinking it, grow the log back to its regular size.
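    A one-off shrink-and-regrow sequence might look roughly like this (a sketch only; the database name, log file name, and sizes are placeholders for your own):

    ```sql
    -- One-off remediation only: shrink a log that grew unusually large,
    -- then immediately grow it back to its planned size in one step so
    -- it is rebuilt with a sensible number of VLFs.
    USE MyDatabase;
    DBCC SHRINKFILE (MyDatabase_log, 1024);  -- target size in MB

    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB, FILEGROWTH = 1GB);
    ```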

  • Very true @CozzaroNero. Best to understand the database behaviour and plan for the optimal transaction log  size. Shrink is a quick fix if one gets into trouble.

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • #3 in your recommendations is incorrect.  If your database is set to full or bulk-logged recovery models - you absolutely need to be running frequent transaction log backups.

    If you are using log shipping - that implies performing transaction log backups and shipping those log files to the secondary node to be restored.  As you stated, setting up log shipping creates the jobs to back up, copy, and restore.

    It is misleading to state that AlwaysOn Availability Groups implement log backups in the background.  If someone were to assume that setting up an AG automatically creates transaction log backups - and did not set a job up to perform log backups, they are going to find out very quickly when the transaction log fills the drive on each node in the cluster and all activity stops.
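    The point bears illustrating: a scheduled log backup like the sketch below (database name and path are placeholders) is something you must set up yourself for a FULL or BULK_LOGGED database - an AG will not create it for you.

    ```sql
    -- Must run on a schedule (e.g. a SQL Agent job). Without regular log
    -- backups, the log of a FULL/BULK_LOGGED database never truncates
    -- and will eventually fill the drive.
    BACKUP LOG MyDatabase
    TO DISK = N'X:\SQLBackups\MyDatabase_log.trn'
    WITH COMPRESSION, CHECKSUM;
    ```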

     

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams wrote:

    #3 in your recommendations is incorrect.  If your database is set to full or bulk-logged recovery models - you absolutely need to be running frequent transaction log backups.

    If you are using log shipping - that implies performing transaction log backups and shipping those log files to the secondary node to be restored.  As you stated, setting up log shipping creates the jobs to back up, copy, and restore.

    It is misleading to state that AlwaysOn Availability Groups implement log backups in the background.  If someone were to assume that setting up an AG automatically creates transaction log backups - and did not set a job up to perform log backups, they are going to find out very quickly when the transaction log fills the drive on each node in the cluster and all activity stops.

    Thanks for this. Huge miss. Updated.

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • The author of this article recommends that shrinking the log file is a solution to this issue - but the log file has grown because of transactional activity. Unless these transactions were a one-off (such as an unusual data load) the file will regrow to this size during normal activity - shrinking the log file therefore is a self-defeating activity. The DBA needs to understand why the log file is growing and plan actions around this - e.g. resize it to meet likely activity/undertake more frequent backups.

    Failure to understand why the log file is growing (and planning capacity around it) may lead to out-of-space disk issues - this is especially true on systems with multiple databases where all log files may grow simultaneously.

    File-shrinking in SQL Server is almost invariably the wrong solution unless there has been a one-off activity such as a data load or a data drop - mostly it is self-defeating busywork which will be nullified by normal database growth.
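    A quick way to start that investigation (a sketch; substitute your own database name) is to ask SQL Server why the log cannot currently truncate:

    ```sql
    -- log_reuse_wait_desc explains what is holding the log open, e.g.
    -- LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, AVAILABILITY_REPLICA.
    SELECT name,
           recovery_model_desc,
           log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyDatabase';
    ```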

  • Arthur Pewtey wrote:

    The author of this article recommends that shrinking the log file is a solution to this issue - but the log file has grown because of transactional activity. Unless these transactions were a one-off (such as an unusual data load) the file will regrow to this size during normal activity - shrinking the log file therefore is a self-defeating activity. The DBA needs to understand why the log file is growing and plan actions around this - e.g. resize it to meet likely activity/undertake more frequent backups.

    Failure to understand why the log file is growing (and planning capacity around it) may lead to out-of-space disk issues - this is especially true on systems with multiple databases where all log files may grow simultaneously.

    File-shrinking in SQL Server is almost invariably the wrong solution unless there has been a one-off activity such as a data load or a data drop - mostly it is self-defeating busywork which will be nullified by normal database growth.

    Precisely, Arthur. One-off activity. Hence the expression, "If in the unfortunate event you run out of disk space or exceed your budgeted disk space,". By the time a shrink is needed, the DBA is already in trouble and needs a quick fix, not an ideal fix.

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • Comments posted to this topic are about the item Dealing with Transaction Log Growth

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • Quite contrary to popular belief, index REBUILDs are NOT a problem compared to REORGANIZEs.

    Here's just one example.  I had a table with an ever-increasing Clustered Key that contained 149 Million rows in 146.3 GB at 12% logical fragmentation.  As an experiment, I did a backup so I could restore it in that condition as often as I needed to do some testing on a test box.

    After the first restore, I took some measurements of the BEFORE conditions again to verify that nothing had changed.  I'd shrunk the log file to just 40GB.  I made sure that I was in the FULL recovery model and did a full backup to NUL to make sure.  Then, I did a REBUILD.  As expected, the log file grew to just a bit over 147GB and it took 01:20:45 to complete.

    I repeated all of that except I did a REORGANIZE.  Although it took about the same time, the log file EXPLODED to 227GB!!! And, it didn't do quite as good a job at defragmenting the data.

    I've repeated such tests on smaller tables, as well, with equally impressive stats.  You cannot believe in the myth of REORGANIZE being some tame little kitty, because it's just not so.

    While I was at it, I went through the same setup and then slipped into the BULK LOGGED Recovery Model and did another REBUILD... Folks, this isn't a misprint... It only took 00:12:45 (less than 13 minutes) and the log file only grew to 37GB.
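    The pattern Jeff describes can be sketched as follows (database, table, and index names are placeholders; note that a log backup spanning a minimally logged operation cannot be restored to a point in time within it):

    ```sql
    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

    -- The rebuild is minimally logged under BULK_LOGGED.
    ALTER INDEX PK_MyBigTable ON dbo.MyBigTable REBUILD;

    ALTER DATABASE MyDatabase SET RECOVERY FULL;
    -- Take a log backup right away to re-establish point-in-time recovery.
    BACKUP LOG MyDatabase TO DISK = N'X:\SQLBackups\MyDatabase_log.trn';
    ```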

    Yeah, I know... all you folks running replication and AG, etc?  I DO feel bad for you because switching recovery models will break anything that depends on the log file, so you're stuck always having to be in the FULL Recovery Model.  I love my clustered servers! 😀

    But, even with replication and AG, stop just trusting all the decades-old propaganda that REORGANIZE is doing you any favors resource-wise.  In fact, since most people don't actually know what it does, you should just stop using it until you do.  What does it frequently do?  The stuff above AND it also frequently perpetuates fragmentation.  In fact, it IS the primary reason that Random GUIDs have such a bad name.

    For proof of that, please see the following video presentation... watch it to the very end, after the Q'n'A section... you just won't believe it and, yet, I have the code to prove it. 😀

    https://www.youtube.com/watch?v=rvZwMNJxqVo

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Jeff Moden wrote:

    Quite contrary to popular belief, index REBUILDs are NOT a problem compared to REORGANIZEs.

    Here's just one example.  I had a table with an ever-increasing Clustered Key that contained 149 Million rows in 146.3 GB at 12% logical fragmentation.  As an experiment, I did a backup so I could restore it in that condition as often as I needed to do some testing on a test box.

    After the first restore, I took some measurements of the BEFORE conditions again to verify that nothing had changed.  I'd shrunk the log file to just 40GB.  I made sure that I was in the FULL recovery model and did a full backup to NUL to make sure.  Then, I did a REBUILD.  As expected, the log file grew to just a bit over 147GB and it took 01:20:45 to complete.

    I repeated all of that except I did a REORGANIZE.  Although it took about the same time, the log file EXPLODED to 227GB!!! And, it didn't do quite as good a job at defragmenting the data.

    I've repeated such tests on smaller tables, as well, with equally impressive stats.  You cannot believe in the myth of REORGANIZE being some tame little kitty, because it's just not so.

    While I was at it, I went through the same setup and then slipped into the BULK LOGGED Recovery Model and did another REBUILD... Folks, this isn't a misprint... It only took 00:12:45 (less than 13 minutes) and the log file only grew to 37GB.

    Yeah, I know... all you folks running replication and AG, etc?  I DO feel bad for you because switching recovery models will break anything that depends on the log file, so you're stuck always having to be in the FULL Recovery Model.  I love my clustered servers! 😀

    But, even with replication and AG, stop just trusting all the decades-old propaganda that REORGANIZE is doing you any favors resource-wise.  In fact, since most people don't actually know what it does, you should just stop using it until you do.  What does it frequently do?  The stuff above AND it also frequently perpetuates fragmentation.  In fact, it IS the primary reason that Random GUIDs have such a bad name.

    For proof of that, please see the following video presentation... watch it to the very end, after the Q'n'A section... you just won't believe it and, yet, I have the code to prove it. 😀

    https://www.youtube.com/watch?v=rvZwMNJxqVo

    Finally got round to reading the complete post. This is very insightful, Jeff.

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • Thanks for the feedback, Kenneth.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
