Any point in EVER shrinking trans log?

  • I have databases that generally do not accumulate much in the transaction log. An automated script runs a backup stored procedure every 15 minutes: it backs up the transaction log on the quarter, half, and three-quarter hour, takes a differential on the full hour during the workday, and takes a full backup every night at 11pm, all geared to execute only if the database has undergone a change during the preceding interval.
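
    For illustration, the commands that stored procedure issues boil down to something like this (a rough sketch; 'MyDb' and the backup paths here are placeholders, not my actual names):

    -- Quarter-hour log backup
    BACKUP LOG [MyDb] TO DISK = N'D:\Backup\MyDb_log.trn';
    -- Top-of-the-hour differential during the workday
    BACKUP DATABASE [MyDb] TO DISK = N'D:\Backup\MyDb_diff.bak' WITH DIFFERENTIAL;
    -- Nightly full at 11pm
    BACKUP DATABASE [MyDb] TO DISK = N'D:\Backup\MyDb_full.bak';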

    The activity is fairly light, so not much ever accumulates in the trans log. However, I occasionally do things like mass imports that make the trans log swell quite a lot. This doesn't happen often - a few times per year - but when it does, it's a relatively massive undertaking. The most active database is a bit over 100 MB; its trans log is currently 560 MB, the result of several large actions and some missed backup cycles.

    Is there EVER any point in shrinking the trans log, even when the growth is a result of such random and infrequent activity? Does an unnecessarily large log slow anything down, or are there any other benefits from releasing the unused space? Disk space is absolutely NOT an issue - I'm at barely 3% usage and no real expectation of that radically increasing anytime soon.

  • An overgrown log file can have too many VLFs (virtual log files), which can adversely affect performance.

    On some disks, autogrowth can also leave the log file less contiguous, which can likewise affect performance.

    If the log has grown excessively, only a shrink will reduce its size.

    So, yes, sometimes there are good reasons to shrink a log file.
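
    (The shrink itself is just DBCC SHRINKFILE against the log's logical file name; a minimal sketch, with 'MyDb' / 'MyDb_log' as assumed names:)

    USE [MyDb];
    DBCC SHRINKFILE (MyDb_log, 100);  -- target size in MB; won't go below the active VLFs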

    SQL DBA, SQL Server MVP (07, 08, 09). A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher (9/16/2013)

    An overgrown log file can have too many VLFs (virtual log files), which can adversely affect performance.

    On some disks, autogrowth can also leave the log file less contiguous, which can likewise affect performance.

    If the log has grown excessively, only a shrink will reduce its size.

    So, yes, sometimes there are good reasons to shrink a log file.

    Okay, thanks. I guess I'll do a shrink and index rebuild after I perform one of these mass mayhem events, and leave it alone otherwise.
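
    (For the rebuild part, something along these lines; 'dbo.MyBigTable' is just a stand-in for whatever table the import touched:)

    -- Rebuild all indexes on the imported table to clean up after the mass load
    ALTER INDEX ALL ON dbo.MyBigTable REBUILD;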

  • I would almost say, in that situation, if you've got the disk space, it might be easiest to do the following:

    1. Backup the TLog

    2. Shrink it down as small as possible (don't worry!)

    3. Set it to a size large enough to handle your data loads and a bit more (so using your 560MB one, maybe make it ~650MB)

    4. Either turn off autogrowth, or set it to a more "reasonable" number to avoid a lot of VLFs
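
    In T-SQL, that sequence would look roughly like this (a sketch only; the logical file name, backup path, and growth increment are assumptions you'd adjust):

    BACKUP LOG [MyDb] TO DISK = N'D:\Backup\MyDb_log.trn';                  -- step 1
    USE [MyDb];
    DBCC SHRINKFILE (MyDb_log, 1);                                          -- step 2
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_log, SIZE = 650MB);      -- step 3
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_log, FILEGROWTH = 64MB); -- step 4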

    IIRC, and someone correct me if I'm wrong, by shrinking the log and then growing it in one "chunk", you'll end up with a more reasonable number of VLFs.

    OK, here's the article I was thinking of in terms of the VLFs, from SQLSkills.

    Plan your log size to have a "reasonable" number of VLFs, size it, and leave it (if you've got the disk space, and nowadays disk is cheap...)

    Jason

  • If you need a (very) large log, rather than making a single very large allocation, you might be better off growing it in somewhat smaller chunks. For example, if you need 20GB, maybe shrink to the smallest size, grow to 5GB, then to 10GB, then to 15GB, and finally to 20GB. That keeps the individual VLFs from being extremely large.
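
    (In practice that's just repeated grows; names assumed as before:)

    -- Step the log up in 5GB increments rather than one 20GB allocation
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_log, SIZE = 5GB);
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_log, SIZE = 10GB);
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_log, SIZE = 15GB);
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_log, SIZE = 20GB);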

    SQL DBA, SQL Server MVP (07, 08, 09). A socialist is someone who will give you the shirt off *someone else's* back.

  • What I try and do is size it decently from the get-go. However, best laid plans... ya know...

    Anyway. About once a year I will run the command below against active application databases; it tells you how many fragments the transaction log is in. If it is a very high number, I will back up the trans log, shrink it down to 100MB, and then resize it to what it should be.

    One time I restored a SQL 2000 DB into 2008 for an upgrade, and the SQL Server log gave me a warning that the trans log was in a very high number of fragments.

    This is the command:

    DBCC LOGINFO;  -- one output row per VLF; run it in the database you're checking
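
    If you want the fragment count as a single number, you can dump the output into a temp table and count the rows (one row per VLF). The column list below matches the 2008-era output; SQL 2012 and later add a RecoveryUnitId column at the front, so adjust to suit:

    CREATE TABLE #vlf (FileId int, FileSize bigint, StartOffset bigint,
                       FSeqNo bigint, [Status] bigint, Parity bigint,
                       CreateLSN numeric(38));
    INSERT INTO #vlf EXEC ('DBCC LOGINFO');
    SELECT COUNT(*) AS VLFCount FROM #vlf;  -- how many fragments the log is in
    DROP TABLE #vlf;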

  • I don't normally need a very large log. This size is an aberration: it was created when the occasional large import combined with a malfunction of my backup routine, so after several iterations it ballooned to this size. Disk space means nothing - I have almost 3TB of room and I'm using only a few percent of it. I can make anything I need as large as is appropriate for best performance and not miss the space one bit.

    It sounds like the best move would be to shrink both the log and data files, then run under normal conditions for a while and see how it looks. Once I have a reasonable baseline, I can give myself a comfortable margin above it, but leave autogrow on for safety's sake and peace of mind.

    When I'm doing the occasional mass import, I can keep a close eye on things and make sure I don't do two in a row without a trans log backup in between, and that should keep it all under control.
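
    An easy way to keep that eye on things between batches, if I'm reading the docs right, is:

    -- Reports log size and percent-used for every database on the instance
    DBCC SQLPERF (LOGSPACE);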

    Appreciate the insights...
