Transaction Logs

  • Hello.

    I have a lot of DBs with very large transaction log files, and some have a huge number of VLFs.
    We don't have an issue with SQL Server startup times, so what is the performance penalty of having such a large log file and so many VLFs?

    I have read lots of blogs and posts (Paul Randal, Kimberly Tripp, etc.) and examined DBCC LOGINFO and DBCC SQLPERF(LOGSPACE), as well as running DMVs (and the SPs in the Brent Ozar First Responder Kit), with a view to reducing the size of the transaction log file and the number of VLFs (a sketch of these commands is at the end of this post).
    I am currently looking at the 8 steps from Kimberly Tripp and trying to understand how this works for me.

    For example, I have a database that SSMS says is 39GB, and the DBCC commands suggest that the log file is 27GB and has 204 VLFs; a 27GB log on a 39GB database is obviously not right (and that is one of the smaller VLF counts).
    Looking at the log, VLF 190 (of 204) is currently active and in use. So what happens if I try to truncate it and change the size? Will it be able to free up VLFs both before and after 190, and will 190 become the first VLF in the new file?

    I would try it in Test, but I don't have anywhere (that I know of) to put the backup required by the first step, as we don't take backups in Test.
    Can I just take a manual CHECKPOINT instead of a backup, or is that not a valid alternative?

    Thanks
    Steve O.
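
    For reference, here is a minimal sketch of the inspection queries mentioned above. The database name is a placeholder, and the last query assumes SQL Server 2016 SP2 or later for sys.dm_db_log_info.

        -- Log size and percent used for every database on the instance
        DBCC SQLPERF(LOGSPACE);

        -- One row per VLF in the current database; Status = 2 means active/in use
        USE MyDatabase;   -- placeholder name
        DBCC LOGINFO;

        -- SQL Server 2016 SP2+: VLF counts per database without DBCC
        SELECT d.name,
               COUNT(*) AS vlf_count,
               SUM(CASE WHEN li.vlf_active = 1 THEN 1 ELSE 0 END) AS active_vlfs
        FROM sys.databases AS d
        CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
        WHERE d.state_desc = 'ONLINE'
        GROUP BY d.name
        ORDER BY vlf_count DESC;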

  • Still browsing; this discusses the thoughts I had in more detail:
    https://dba.stackexchange.com/questions/35182/manually-set-log-file-size-after-shrink-sql-server-2008-r2

    Just pondering the best way to tackle this.

    Steve O.

  • 204 is not a problematic number; I wouldn't worry about it. Several thousand is where I'd probably start to worry.

    If you do want to fix it, neither backup nor checkpoint is involved.
    Shrink the log to 0. It has to be done when the DB is inactive, and you may need to shrink repeatedly (possibly with some fake transactions to cause the log to wrap around). Then regrow the log to the required size in sensibly sized chunks (see Kimberly's post for details; a rough sketch follows below).

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
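
    A rough sketch of that sequence, assuming a placeholder database MyDatabase with a logical log file name of MyDatabase_log (check sys.database_files for the real name) and placeholder sizes:

        USE MyDatabase;   -- placeholder name

        -- Find the logical name and current size of the log file
        SELECT name, size * 8 / 1024 AS size_mb
        FROM sys.database_files
        WHERE type_desc = 'LOG';

        -- Shrink the log as far as it will go; repeat if the active VLF
        -- sits at the end of the file and the shrink gets stuck
        DBCC SHRINKFILE (N'MyDatabase_log', 0);

        -- Regrow in sensibly sized chunks (e.g. 8000 MB at a time)
        -- rather than one giant grow, per Kimberly's post
        ALTER DATABASE MyDatabase MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8000MB);
        ALTER DATABASE MyDatabase MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 16000MB);
        -- ...and so on until the required size is reached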
  • If you are concerned that a database has too many VLFs, we handle that by setting up a job that counts the number of VLFs per database and sends an email when the count reaches a certain threshold; I think we alert on anything over 1000 VLFs. There is no hard cut-off, but if a database has a large number of VLFs, recovery times can be longer because SQL Server has to process all of those VLFs during its recovery phases.

    When an alert comes in for a database with a high VLF count, we simply set the database to SIMPLE recovery, which initiates a checkpoint, then shrink the transaction log, then switch back to FULL and immediately take a full backup, since the recovery-model switch breaks the transaction log chain (and with it point-in-time recovery); the full backup resets the chain, a t-log backup is taken soon afterwards, and the process is reset (a sketch is below). At that point, the number of VLFs should be at a more manageable level.
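
    A sketch of that reset sequence, assuming the same placeholder database and log file names as above, plus placeholder backup paths:

        -- Switching to SIMPLE triggers a checkpoint and lets the log clear
        ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;   -- placeholder name

        USE MyDatabase;
        DBCC SHRINKFILE (N'MyDatabase_log', 1024);       -- placeholder target size in MB

        -- Back to FULL so log backups are possible again
        ALTER DATABASE MyDatabase SET RECOVERY FULL;

        -- The recovery-model switch broke the log chain, so reset it immediately
        BACKUP DATABASE MyDatabase TO DISK = N'X:\Backups\MyDatabase_full.bak';

        -- A log backup soon afterwards restores point-in-time coverage
        BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';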
