Performance Issue with large number of Virtual Log Files in SQL Server Transaction Log



Problem

SQL Server transaction log files are divided internally into structures called Virtual Log Files, or VLFs. When the number of VLFs grows out of control due to small autogrowth increments, the log becomes fragmented and operations that read the log slow down. In this tip we look at how to see how many VLFs exist as well as how this number can be reduced to something more reasonable.

Solution

For optimal performance a database should have on the order of tens of VLFs. With hundreds or thousands of VLFs, both log write performance and transaction log backup speed can degrade, and you should try to avoid both. For an explanation of how VLFs work, see the article How to determine SQL Server database transaction log usage.
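A quick way to check how many VLFs a database has is to capture the output of DBCC LOGINFO into a temporary table. This is a sketch: the column list below matches the SQL Server 2005/2008 output shape used throughout this tip (SQL Server 2012 adds a leading RecoveryUnitId column, which would need to be added to the table).

```sql
-- Count VLFs in the current database (sketch; SQL 2005/2008 column layout).
-- DBCC LOGINFO returns one row per VLF.
CREATE TABLE #vlf (
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    [Status]    INT,
    Parity      TINYINT,
    CreateLSN   NUMERIC(38, 0)
)
INSERT INTO #vlf EXEC ('DBCC LOGINFO')
SELECT COUNT(*) AS vlf_count FROM #vlf
DROP TABLE #vlf
```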

For fun I created a script to build a database and then force it to have a large number of VLFs. The number of VLFs gets so large because the file growth for the log file is set at 1 MB; each time the log grows by that amount, four VLFs are added. After the database is created, an initial full backup is taken so that transaction log backups can be done. At that point there are just 3 VLFs. A table is then created and populated repeatedly with data from the AdventureWorks sample database. I let the loop run for 100 iterations, which took 4 minutes even on a solid-state disk, so you might want to limit the number of iterations unless you're very patient. At the end of the script the command DBCC LOGINFO shows the number of VLFs.

USE master
GO

IF DB_ID('ns_lots_of_vlfs') IS NOT NULL
    DROP DATABASE ns_lots_of_vlfs
GO

CREATE DATABASE ns_lots_of_vlfs
GO

ALTER DATABASE ns_lots_of_vlfs SET RECOVERY FULL
GO

ALTER DATABASE ns_lots_of_vlfs
MODIFY FILE (NAME = ns_lots_of_vlfs_log
    , SIZE = 1 MB
    , FILEGROWTH = 1 MB)
GO

-- Only three VLFs to start
BACKUP DATABASE ns_lots_of_vlfs
TO DISK = 'c:\temp\ns_lots_of_vlfs_full.bak'
GO
Processed 168 pages for database 'ns_lots_of_vlfs', file 'ns_lots_of_vlfs' on file 2.
Processed 2 pages for database 'ns_lots_of_vlfs', file 'ns_lots_of_vlfs_log' on file 2.
BACKUP DATABASE successfully processed 170 pages in 0.080 seconds (16.583 MB/sec).

DBCC LOGINFO('ns_lots_of_vlfs')
GO

FileId FileSize StartOffset FSeqNo Status Parity            CreateLSN
------ -------- ----------- ------ ------ ------ --------------------
     2   253952        8192     42      2     64                    0
     2   327680      262144      0      0      0                    0
     2   458752      589824      0      0      0    42000000005300001
(3 row(s) affected)
DBCC execution completed. If DBCC printed error messages, 
contact your system administrator.

USE ns_lots_of_vlfs
GO


CREATE TABLE grow_quick (
id bigint NOT NULL IDENTITY(1,1),
fn NVARCHAR(255),
ln NVARCHAR(255),
aaaaaaas NVARCHAR(4000) DEFAULT (REPLICATE(N'a', 4000))
)
GO

DECLARE @i INT
SET @i = 0

-- You may want fewer iterations
WHILE @i < 100 BEGIN

INSERT INTO grow_quick (fn, ln)
SELECT TOP 2000 FirstName, LastName
FROM AdventureWorks.Person.Contact

DELETE FROM grow_quick

SET @i = @i + 1
END

DBCC LOGINFO('ns_lots_of_vlfs')
GO

FileId FileSize StartOffset FSeqNo  Status Parity            CreateLSN
------ -------- ----------- ------ ------- ------ --------------------
     2   253952        8192     42       2     64                    0
     2   327680      262144     43       2     64                    0
     2   458752      589824     44       2     64    42000000005300001
     2   253952     1048576     45       2     64    44000000025600016
     2   253952     1302528     46       2     64    44000000025600016
     2   253952     1556480     47       2     64    44000000025600016
     2   286720     1810432     48       2     64    44000000025600016
.
. Rows omitted
.
     2   286720  3499859968      0       0      0 13314000000011300006
     2   253952  3500146688      0       0      0 13317000000003300006
     2   253952  3500400640      0       0      0 13317000000003300006
     2   253952  3500654592      0       0      0 13317000000003300006
     2   286720  3500908544      0       0      0 13317000000003300006
     2   253952  3501195264      0       0      0 13317000000031400014
     2   253952  3501449216      0       0      0 13317000000031400014
     2   253952  3501703168      0       0      0 13317000000031400014
     2   286720  3501957120      0       0      0 13317000000031400014
(13359 row(s) affected)
DBCC execution completed. If DBCC printed error messages, 
contact your system administrator.

In this example the script created 13,359 VLFs, which is far too many. The cause is the small growth increment on the log file: each 1 MB growth added four additional VLFs (you can verify this in the DBCC LOGINFO output above, where each 1 MB of growth shows up as three VLFs of 253,952 bytes and one of 286,720 bytes). Okay, now that we know how to create this mess, let's fix it.

Only a few steps are required to fix the problem: back up the transaction log, then run the DBCC SHRINKFILE command. Sometimes the file doesn't shrink right away and the process has to be repeated. I've written a procedure for that, ns_shrink_db_log, which you'll find in the article How to shrink a transaction log file in SQL Server 2008. Create the stored procedure from that tip and execute it as shown below.
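If you prefer not to install the procedure, the core of what it does can be sketched manually. The file paths here are illustrative, and the second argument to DBCC SHRINKFILE is the target size in MB:

```sql
-- Manual version of the backup-then-shrink cycle (sketch).
BACKUP LOG ns_lots_of_vlfs TO DISK = 'c:\temp\ns_lots_of_vlfs_backup1.trn'
GO
DBCC SHRINKFILE (ns_lots_of_vlfs_log, 2)   -- target size in MB
GO
-- If the active portion of the log sits at the end of the file,
-- the shrink stops short; back up and shrink again.
BACKUP LOG ns_lots_of_vlfs TO DISK = 'c:\temp\ns_lots_of_vlfs_backup2.trn'
GO
DBCC SHRINKFILE (ns_lots_of_vlfs_log, 2)
GO
```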

EXEC [dbo].[ns_shrink_db_log] 'ns_lots_of_vlfs'
, 2, 'c:\temp\'
, 'ns_lots_of_vlfs_backup', 10
GO
Starting size of [ns_lots_of_vlfs].[ns_lots_of_vlfs_log] is 3345 MB recovery model = FULL
BACKUP LOG [ns_lots_of_vlfs]  to disk = 'c:\temp\ns_lots_of_vlfs_backup1.trn'
Processed 411635 pages for database 'ns_lots_of_vlfs', file 'ns_lots_of_vlfs_log' on file 1.
BACKUP LOG successfully processed 411635 pages in 86.644 seconds (37.116 MB/sec).
Cannot shrink log file 2 (ns_lots_of_vlfs_log) because the logical log file located 
at the end of the file is in use.
  DbId      FileId CurrentSize MinimumSize   UsedPages EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
    61           2      424925          72      424920             72
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
BACKUP LOG [ns_lots_of_vlfs]  to disk = 'c:\temp\ns_lots_of_vlfs_backup2.trn'
Processed 10 pages for database 'ns_lots_of_vlfs', file 'ns_lots_of_vlfs_log' on file 1.
BACKUP LOG successfully processed 10 pages in 0.484 seconds (0.160 MB/sec).
  DbId      FileId CurrentSize MinimumSize   UsedPages EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
    61           2         256          72         256             72
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Final size of [ns_lots_of_vlfs].[ns_lots_of_vlfs_log] is 2 MB

The issue of having to repeat the BACKUP LOG and DBCC SHRINKFILE steps did come up this time, and the messages from DBCC show why: the last VLF happened to be in use, so the file could not be shrunk to the desired size on the first pass. The key message is the "Cannot shrink log file... because the logical log file located at the end of the file is in use" error shown above.

Now that the transaction log is down to 2 MB, let's grow it back to a reasonable size. What's a reasonable size for the transaction log on ns_lots_of_vlfs? The data size of the database is only 18 MB, but the high activity rate is going to push the log back to its previous size of 3.3 GB as soon as some user runs that script again, and it could take even more space if there aren't sufficient transaction log backups. Based on the previous size I'll give it a 4 GB transaction log. I'll also set a file growth increment: I don't know whether the file will get much bigger, but I'll set the autogrow size at 1 GB, which won't take too long each time it grows. Here's the script to complete the job:

ALTER DATABASE ns_lots_of_vlfs
MODIFY FILE (NAME = ns_lots_of_vlfs_log
    , SIZE = 4096 MB
    , FILEGROWTH = 1024 MB)
GO

DBCC LOGINFO('ns_lots_of_vlfs')
GO

FileId  FileSize StartOffset FSeqNo Status Parity            CreateLSN
------ --------- ----------- ------ ------ ------ --------------------
     2    253952        8192  13320      2    128                    0
     2    327680      262144     43      0     64                    0
     2    458752      589824     44      0     64    42000000005300001
     2    253952     1048576     45      0     64    44000000025600016
     2    253952     1302528     46      0     64    44000000025600016
     2    253952     1556480     47      0     64    44000000025600016
     2    286720     1810432     48      0     64    44000000025600016
     2 268304384     2097152      0      0      0 13320000000005700001
     2 268304384   270401536      0      0      0 13320000000005700001
     2 268304384   538705920      0      0      0 13320000000005700001
     2 268304384   807010304      0      0      0 13320000000005700001
     2 268304384  1075314688      0      0      0 13320000000005700001
     2 268304384  1343619072      0      0      0 13320000000005700001
     2 268304384  1611923456      0      0      0 13320000000005700001
     2 268304384  1880227840      0      0      0 13320000000005700001
     2 268304384  2148532224      0      0      0 13320000000005700001
     2 268304384  2416836608      0      0      0 13320000000005700001
     2 268304384  2685140992      0      0      0 13320000000005700001
     2 268304384  2953445376      0      0      0 13320000000005700001
     2 268304384  3221749760      0      0      0 13320000000005700001
     2 268304384  3490054144      0      0      0 13320000000005700001
     2 268304384  3758358528      0      0      0 13320000000005700001
     2 268304384  4026662912      0      0      0 13320000000005700001
(23 row(s) affected)
DBCC execution completed. If DBCC printed error messages,
contact your system administrator.

The database now has 23 VLFs, which is a reasonable number. Each time the file grows by 1 GB, only a handful of additional VLFs will be created. We won't have 13 thousand VLFs again.
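The number of VLFs added per growth event follows a rule of thumb. This sketch reflects the commonly documented pre-SQL Server 2014 behavior (later versions use a different algorithm for small growths, and the exact behavior at the boundary values varies between sources), so treat it as an estimate rather than a guarantee:

```sql
-- Sketch: estimated VLFs added per growth event (pre-2014 rule of thumb).
DECLARE @growth_mb INT
SET @growth_mb = 1024
SELECT CASE
           WHEN @growth_mb < 64   THEN 4   -- small growths: 4 VLFs (hence 1 MB autogrow => 4 each time)
           WHEN @growth_mb < 1024 THEN 8   -- medium growths: 8 VLFs
           ELSE 16                         -- large growths: 16 VLFs
       END AS estimated_vlfs_added
```

This is consistent with the output above: the jump from 2 MB to 4096 MB created 16 new VLFs of roughly 256 MB each.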

Next Steps
  • Check the number of VLFs in your databases with DBCC LOGINFO
  • If the number is in the hundreds or thousands, consider fixing it with the procedure outlined here
  • Review the file growth settings on your transaction log files. Are they large enough to avoid this issue?



About the author
Andy Novick is a SQL Server Developer in the Boston area with 25 years of database and application development experience.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Wednesday, July 3, 2013 - 2:05:36 AM - John

Agreed that it results in 4 VLFs, not 2.

You can even see this in the DBCC LOGINFO output posted in this blog.  There are 4 virtual logs indicated in the output for each autogrowth.  Filesizes 253952, 253952, 253952 & 286720.  The 4 virtual logs total 1048576, which is 1024KB, or 1MB - the initial autogrowth value.


Tuesday, February 5, 2013 - 11:22:01 AM - Mark Hions

Most documentation says that a 1MB increment results in 4 new VLFs, not just 2.


Also, the culprit in Stephen's example is not index maintenance, but a type conversion error in the SQL Server code.  See http://connect.microsoft.com/SQLServer/feedback/details/481594/log-growth-not-working-properly-with-specific-growth-sizes-vlfs-also-not-created-appropriately

and http://www.sqlskills.com/blogs/paul/bug-log-file-growth-broken-for-multiples-of-4gb/



Monday, January 9, 2012 - 11:43:40 AM - Adele
I'm out of my league here. Too much brain power on display!

Thursday, September 2, 2010 - 2:45:25 PM - Jay Kusch
I see where this causes issues with replication, but would it also affect mirroring?


Thursday, September 2, 2010 - 10:14:23 AM - Stephen Merkel
We encountered a nasty bug in SQL Server 2005 related to log file growth and VLFs. When the log growth parameter was set to exactly 4 GB (4096 MB), it resulted in an explosion in the number of VLFs. Changing the parameter by only a little -- to 4095 MB -- completely resolved the problem. The usual culprit was index maintenance (rebuilding) growing the log. The impact of too many VLFs can really be seen during recovery after a cluster node failover, when the engine must read through the many extra VLFs before completing the recovery.


Thursday, September 2, 2010 - 9:57:25 AM - John Stafford
In particular, check any databases you are replicating data from. If you end up with lots of VLFs for the log reader to churn through, replication will slow right down -- or even stop completely!














