Transaction Log Question

  • Bear with me here. I have far more experience with database development than I do with administration...

    Here's my understanding of transaction logs when a database is in full recovery mode:

    I know they record all transactions against and modifications to a database, and I know how they fit into a database recovery process. What I'm not grasping is the sizing.

    If the log is initially sized at 500MB then transactions will begin to fill the file and when I take a log backup, the log is truncated and the transactions are cleared. The file remains at 500MB, right?

    If log backups don't occur, the log file can fill up and grow beyond the 500MB--the file will grow to whatever size it needs to accommodate the transactions, which could be 50GB or more. In this situation, the log should be backed up (and that backup put on a schedule), because that's the right thing to do and it will truncate the log. However, since the file has grown to 50GB in this example, it's likely that a DBCC SHRINKFILE should be run against the log file to get it back down in size. However, it can't be sized below its initial size of 500MB. Is my understanding correct, and is everything I've written accurate?
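    The scenario described above might look something like this in T-SQL (a sketch; the database name MyDB, logical log file name MyDB_log, and backup path are all hypothetical placeholders):

    ```sql
    -- Back up the log so the inactive portion can be truncated
    -- (hypothetical database and path; substitute your own).
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';

    -- Then shrink the physical log file back down, here to 500 MB.
    USE MyDB;
    DBCC SHRINKFILE (MyDB_log, 500);  -- target size is specified in MB
    ```

    Note that the shrink can only reclaim space past the last active VLF, so it may take more than one log backup before the file will shrink all the way to the target.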

    If so, here's where I'm stuck:

    Why wouldn't you initially make the transaction log as small as possible and let it grow at its own rate and perform t-log backups, so that it's not backing up a 500MB file that's only partially populated each time? Why must it initially be sized so high?

    I know there are entire books on transaction log management and I will be reading them (e.g., the one Tony Davis and Gail Shaw wrote), but I was hoping to get an answer to this beforehand so that I had some understanding before getting into it.

    As always,

    Thanks!

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Mike

    Three reasons, really:

    (1) To minimise physical fragmentation. If your file grows in small chunks, you're likely to have bits of it scattered all over the disk.

    (2) To avoid the file-growth operation consuming disk and CPU resources when it happens, which is likely to be at a time when the database is busy.

    (3) To claim the disk space the file needs. Depending on how well your disk is managed, something else could come along and put lots of files on your disk, filling it up and preventing your log file from growing when it needs to.

    John

  • Initially, size it at twice the size of the largest index (clustered, nonclustered, or columnstore), so that an index rebuild and its rollback can complete without the log having to grow mid-operation.
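    A quick way to find that largest index is to sum page counts from the partition-stats DMV (a sketch; run it in the context of the database in question):

    ```sql
    -- Largest index in the current database, as a rough guide for log
    -- sizing (roughly 2x this size for rebuild + rollback headroom).
    SELECT TOP (1)
           OBJECT_NAME(ps.object_id)           AS table_name,
           i.name                              AS index_name,
           SUM(ps.used_page_count) * 8 / 1024  AS size_mb  -- 8 KB pages
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.indexes AS i
      ON ps.object_id = i.object_id AND ps.index_id = i.index_id
    GROUP BY ps.object_id, i.name
    ORDER BY size_mb DESC;
    ```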

    Catch-all queries done right
    Gail Shaw's Performance Blog

  • Thank you!

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • However, it can't be sized below what its initial size of 500MB was.

    This is important: you should create the log small (64MB) initially, then grow it to the size you actually want. That way, if required, you can shrink it back to a minimal size. A log file cannot be shrunk to fewer than 2 VLFs.
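    That create-small-then-grow approach might be sketched like this (database name, file paths, and target sizes are hypothetical placeholders):

    ```sql
    -- Create the database with a small log, so the initial VLFs stay
    -- small and the file can later be shrunk back near 64 MB if needed.
    CREATE DATABASE MyDB
    ON PRIMARY (NAME = MyDB_data, FILENAME = N'D:\Data\MyDB.mdf', SIZE = 1GB)
    LOG ON     (NAME = MyDB_log,  FILENAME = N'E:\Logs\MyDB.ldf', SIZE = 64MB);

    -- Then immediately grow the log to the intended working size.
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, SIZE = 8GB);  -- hypothetical target
    ```

    Growing in one or a few large manual steps also keeps the total VLF count low, compared with hundreds of tiny autogrowths.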

    Why wouldn't you initially make the transaction log as small as possible and let it grow at its own rate and perform t-log backups, so that it's not backing up a 500MB file that's only partially populated each time? Why must it initially be sized so high?

    The backup will only be the size of the active part of the log, so a backup of the 500MB file might be only a few MB in size.
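    You can see how much of the physical log file is actually in use with a DMV query like this (a sketch; run it in the context of the database you care about):

    ```sql
    -- How full is the log file? A log backup will be roughly the used
    -- portion, not the full physical file size.
    SELECT total_log_size_in_bytes / 1048576.0  AS log_file_mb,
           used_log_space_in_bytes / 1048576.0  AS used_mb,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;  -- scoped to the current database
    ```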

    Read this article by Kimberly Tripp as well.


  • Good to know. Thank you!

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Mike Scalise (9/27/2016)


    ...

    However, it can't be sized below what its initial size of 500MB was. Is my understanding correct, and is everything I've written accurate?

    ...

    Actually, that's not really true.

    The log file can be shrunk below its initial size. The ultimate limit for a shrink of the log file is actually determined by the fact that a log file must have at minimum two VLFs (as George pointed out). The larger the initial size of the log file, the bigger those initial VLFs are (determined by the algorithm mentioned at http://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/).

    It's true that with SHRINKDATABASE the entire database cannot be shrunk below its initial size, however that does not apply to SHRINKFILE.

    From https://technet.microsoft.com/en-us/library/ms189080(v=sql.105).aspx:

    When using the DBCC SHRINKDATABASE statement, you cannot shrink a whole database to be smaller than its original size. Therefore, if a database was created with a size of 10 MB and grew to 100 MB, the smallest the database could be reduced to is 10 MB, even if all the data in the database has been deleted.

    However, you can shrink the individual database files to a smaller size than their initial size by using the DBCC SHRINKFILE statement. You must shrink each file individually, instead of trying to shrink the whole database.

    (my emphasis)

    Also, from https://msdn.microsoft.com/en-us/library/ms189493.aspx:

    You can shrink a file to a size that is less than the size specified when it was created.

    It's also fairly easy to demonstrate with these steps:

    1) Create a DB with a log file of, say, 64 MB and put it in the full recovery model

    2) Generate enough log to force the log to grow

    3) Back up the log to allow it to truncate

    4) Generate just enough activity to move the head of the active portion of the log into the first or second VLF (since you can't shrink past the last active VLF)

    5) Back up the log again to allow it to truncate

    6) After confirming that only the first VLF or first two are active, shrink the log down as far as possible.

    You'll be left with just two of your initial VLFs, which will be about 32 MB if you decided to do this with a 64 MB initial size. In your case, with a 500 MB initial log file size, shrinking down to the minimum of 2 VLFs would leave the file around 125 MB.
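    The steps above might be sketched in T-SQL like this (database name, file paths, and backup locations are hypothetical; the workload in step 2 is left as a comment since any log-generating activity will do):

    ```sql
    -- 1) Small log, full recovery model; a full backup starts the log chain.
    CREATE DATABASE ShrinkDemo
    ON PRIMARY (NAME = ShrinkDemo_data, FILENAME = N'D:\Data\ShrinkDemo.mdf')
    LOG ON     (NAME = ShrinkDemo_log,  FILENAME = N'E:\Logs\ShrinkDemo.ldf',
                SIZE = 64MB);
    ALTER DATABASE ShrinkDemo SET RECOVERY FULL;
    BACKUP DATABASE ShrinkDemo TO DISK = N'D:\Backups\ShrinkDemo.bak';

    USE ShrinkDemo;
    -- 2) ...generate enough log activity here to force the log to grow...
    -- 3) Back up the log so it can truncate.
    BACKUP LOG ShrinkDemo TO DISK = N'D:\Backups\ShrinkDemo_log1.trn';
    -- 4) A little more activity to wrap the active portion back to the
    --    first VLFs, then 5) back up the log again.
    BACKUP LOG ShrinkDemo TO DISK = N'D:\Backups\ShrinkDemo_log2.trn';

    -- 6) Confirm which VLFs are active (status = 2), then shrink.
    DBCC LOGINFO;  -- or sys.dm_db_log_info(DB_ID()) on newer versions
    DBCC SHRINKFILE (ShrinkDemo_log, 1);  -- shrinks to the 2-VLF minimum
    ```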

    It can be demonstrated much more simply by just creating a database and then shrinking the log file as far as it will go, but I wanted to cover the fact that neither of 1) being in the full recovery model or 2) having had the log grow at some point prevents shrinking below the initial size.

    Having said all that, while you can shrink log files below their initial size, the advice given by others previously in the thread still holds. If drive space is at a premium, then it's a decent idea to keep the size at creation small and then manually grow it out to an appropriate size for the workload, so you can shrink down as low as possible in a crunch.

    Cheers!
