
Autogrow bug – fixed with Denali


Recently I ran into a bug related to autogrow of the log file. Since this was the second time I had encountered this particular bug, I decided to blog about it.

The symptom of the problem was that one specific database took a very long time to come online after a simple service restart. The database simply stayed in the “in recovery” state for a long time. The reason was a very high number of Virtual Log Files (VLFs) in the database. Let me set up a demo for you, so you can see what this is all about:

USE master
GO
CREATE DATABASE VLFHell
GO

You can see how many VLFs are in your database by running this:

DBCC LOGINFO

(Screenshot: DBCC LOGINFO result showing 5 rows)

This shows that there are a total of 5 virtual log files within the log file of the new database, and you can see the size of each virtual log file. We will now take a full backup of the database, so that the full recovery model takes effect (until the first full backup, the log file behaves as if the database were in simple recovery model).
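The backup statement itself isn't shown above; a minimal version could look like this (the target path is just an example, use any folder the SQL Server service account can write to):

```sql
-- Take the first full backup so the FULL recovery model takes effect.
-- The path below is an assumption for this demo.
BACKUP DATABASE [VLFHell] TO DISK = 'C:\SQLData\VLFHell.Bak' WITH INIT
```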

The default configuration for SQL Server is that the log file will autogrow by 10%, so let's see how this behaves if we generate some dummy data:

USE [VLFHell]
GO
CREATE TABLE MakeMyLogFileGoCrazy
(
    id bigint identity(1,1),
    WasteOfSpace1 uniqueidentifier default newid(),
    WasteOfSpace2 datetime2 default sysdatetime(),
    WasteOfSpace3 char(8000)
)
GO
INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
VALUES ('Geniiius - i-catching solutions')
GO
INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
SELECT WasteOfSpace3 FROM MakeMyLogFileGoCrazy
GO 17

Now let’s see the number of virtual log files:

DBCC LOGINFO

(Screenshot: DBCC LOGINFO result showing 259 rows)

The number of rows returned is now 259, which is not that bad. If you look at the FileSize column, you can see that the virtual log files at the bottom of the view are slightly bigger than the first ones. This is because of the growth in percent.
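If you would rather get the VLF count as a single number instead of counting rows in the result grid, you can capture the DBCC LOGINFO output into a temp table. This is just a sketch; the column list matches the pre-Denali output (SQL Server 2005/2008), where Denali adds an extra RecoveryUnitId column that you would need to include:

```sql
-- Capture DBCC LOGINFO output; columns match SQL Server 2005/2008.
CREATE TABLE #loginfo
(
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,
    Parity      tinyint,
    CreateLSN   numeric(25,0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

-- One row per VLF, so the row count is the VLF count.
SELECT COUNT(*) AS VLFCount FROM #loginfo;

DROP TABLE #loginfo;
```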

You hit the bug if you set your log file to autogrow by a multiple of 4GB, and that was exactly the autogrow size I ran into the other day.

Now let us see what happens if we reset the test setup, but make the log file grow by 4GB instead of the default 10%:

USE [master]
GO
IF EXISTS (SELECT * FROM sys.databases where name = 'VLFHell')
BEGIN
    ALTER DATABASE [VLFHell] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE VLFHell
END
GO
CREATE DATABASE [VLFHell] ON PRIMARY
(
    NAME = N'VLFHell',
    FILENAME = N'C:\SQLData\VLFHell.mdf',
    SIZE = 5MB,
    FILEGROWTH = 1MB
)
 LOG ON
(
    NAME = N'VLFHell_log',
    FILENAME = N'C:\SQLData\VLFHell_log.LDF' ,
    SIZE = 1MB ,
    FILEGROWTH = 4GB --Notice the 4GB filegrowth
)
GO
BACKUP DATABASE [VLFHell] to DISK='C:\SQLData\VLFHell.Bak' WITH COMPRESSION, INIT
GO
USE [VLFHell]
GO
CREATE TABLE MakeMyLogFileGoCrazy
(
    id bigint identity(1,1),
    WasteOfSpace1 uniqueidentifier default newid(),
    WasteOfSpace2 datetime2 default sysdatetime(),
    WasteOfSpace3 char(8000)
)
GO
INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
VALUES ('Geniiius - i-catching solutions')
GO
INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
SELECT WasteOfSpace3 FROM MakeMyLogFileGoCrazy
GO 17
DBCC LOGINFO

And the results from the dbcc command:

(Screenshot: DBCC LOGINFO result showing 4933 rows)

Now the number of VLFs is 4,933. The table generated holds only around 1GB of data, so imagine what would happen if you were loading hundreds of gigabytes into it. Also notice the size of each added VLF: they are only 248KB, even though a growth increment of more than 1GB would normally be split into just 16 VLFs (256MB apiece for a 4GB growth). These tiny VLFs cause physical fragmentation, as well as overhead when analyzing the log file in the process of bringing the database online.

The case I stumbled upon recently had hundreds of thousands of VLFs, and the database took hours to be brought online… not the most ideal situation!

If you have a database that stays a bit too long in the “in recovery” state after a reboot, I would certainly recommend looking at the DBCC LOGINFO command. If you have thousands of VLFs, the solution is quite simple: back up the log, shrink the log file, and regrow it in reasonable sizes not equal to any multiple of 4GB:

BACKUP LOG [VLFHell] TO DISK = 'G:\Data\VLFHell.trn'
GO
CHECKPOINT
GO
DBCC SHRINKFILE (N'VLFHell_log' , 1)
GO
ALTER DATABASE [VLFHell]
MODIFY FILE ( NAME = N'VLFHell_log', SIZE = 2GB )
GO
DBCC LOGINFO
GO

You may need to back up the log a few times before the file can be shrunk completely. But now the number of virtual log files is reduced to just 20:
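If the shrink does not get all the way down on the first attempt (active VLFs at the tail of the file can block it), a small retry loop along these lines can help; the iteration count and the backup path are just placeholders for this demo:

```sql
-- Repeat log backup + shrink a few times; active VLFs at the tail of the
-- log file can prevent the first shrink attempts from succeeding.
DECLARE @i int;
SET @i = 0;
WHILE @i < 5
BEGIN
    BACKUP LOG [VLFHell] TO DISK = 'G:\Data\VLFHell.trn';
    CHECKPOINT;
    DBCC SHRINKFILE (N'VLFHell_log', 1);
    SET @i = @i + 1;
END
```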

(Screenshot: DBCC LOGINFO result showing 20 rows)

Conclusion

If you for some reason have configured your log file to grow in multiples of 4GB, you could suffer from this bug, which can lead to long recovery times after a reboot. The bug is confirmed in SQL Server 2005, SQL Server 2008, SQL Server 2008 SP1, and SQL Server 2008 R2. In Denali, however, it is not an issue, so the bug has been fixed there.

@HenrikSjang
