
Autogrow bug – fixed with Denali

Recently I ran into a bug related to autogrow of the log file. Because this was the second time I encountered this particular bug, I decided to blog about it.

The symptom of the problem was that one specific database took a very long time to come online after a simple service restart. The database simply stayed in the “in recovery” state for a long time. The reason was a very high number of Virtual Log Files (VLFs) in the transaction log. Let me set up a demo for you, so you can see what this is all about:

USE master
GO

CREATE DATABASE VLFHell
GO

You can see how many VLF’s are in your database by running this:

DBCC LOGINFO

[Screenshot: DBCC LOGINFO output showing 5 VLFs]

 

This shows that there are a total of 5 virtual log files within the log file of the new database, along with the size of each one. We will now take a full backup of the database, so that the full recovery model takes effect (until the first full backup, the log behaves as if the database were in simple recovery).
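The full backup itself can be taken like this (the backup path is just an example for this demo; adjust it to your environment):

```sql
-- First full backup: after this, the log stops truncating on checkpoint
-- and the full recovery model is effectively in force.
BACKUP DATABASE VLFHell TO DISK = 'C:\SQLData\VLFHell.Bak' WITH INIT
GO
```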

The default configuration for a SQL Server log file is to autogrow by 10%, so let’s see how it behaves when we generate some dummy data:

USE [VLFHell]
GO

CREATE TABLE MakeMyLogFileGoCrazy
(
    id bigint identity(1,1),
    WasteOfSpace1 uniqueidentifier default newid(),
    WasteOfSpace2 datetime2 default sysdatetime(),
    WasteOfSpace3 char(8000)
)
GO

INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
VALUES ('Geniiius - i-catching solutions')
GO

INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
SELECT WasteOfSpace3 FROM MakeMyLogFileGoCrazy
GO 17

Now let’s see the number of virtual log files:

DBCC LOGINFO

[Screenshot: DBCC LOGINFO output, 259 rows]

 

The number of rows returned is now 259, which is not that bad. If you look at the FileSize column, you can see that the virtual log files at the bottom of the view are slightly bigger than the first ones. This is because the growth is expressed in percent, so each increment is larger than the last.

 

You hit the bug if you set your log file to autogrow by a multiple of 4GB, and that was exactly the autogrow size I ran into the other day.

Now let us see what happens if we reset the test setup, but make the log file grow by 4GB instead of the default 10%:

USE [master]
GO

IF EXISTS (SELECT * FROM sys.databases where name = 'VLFHell')
BEGIN
    ALTER DATABASE [VLFHell] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE VLFHell
END
GO

CREATE DATABASE [VLFHell] ON PRIMARY
(
    NAME = N'VLFHell',
    FILENAME = N'C:\SQLData\VLFHell.mdf',
    SIZE = 5MB,
    FILEGROWTH = 1MB
)
 LOG ON
(
    NAME = N'VLFHell_log',
    FILENAME = N'C:\SQLData\VLFHell_log.LDF' ,
    SIZE = 1MB ,
    FILEGROWTH = 4GB --Notice the 4GB filegrowth
)
GO

BACKUP DATABASE [VLFHell] to DISK='C:\SQLData\VLFHell.Bak' WITH COMPRESSION, INIT
GO

USE [VLFHell]
GO

CREATE TABLE MakeMyLogFileGoCrazy
(
    id bigint identity(1,1),
    WasteOfSpace1 uniqueidentifier default newid(),
    WasteOfSpace2 datetime2 default sysdatetime(),
    WasteOfSpace3 char(8000)
)
GO

INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
VALUES ('Geniiius - i-catching solutions')
GO

INSERT INTO MakeMyLogFileGoCrazy (WasteOfSpace3)
SELECT WasteOfSpace3 FROM MakeMyLogFileGoCrazy
GO 17

DBCC LOGINFO

And the results from the DBCC command:

[Screenshot: DBCC LOGINFO output, 4933 rows]

Now the number of VLFs is 4933. The table generated holds only around 1GB of data, so imagine what would happen if you were loading hundreds of gigabytes into it. Also notice the size of each added VLF: they are now only 248KB each, which causes physical fragmentation, as well as overhead when the log file is analyzed in the process of bringing the database online.

The case I stumbled upon recently had hundreds of thousands of VLFs, and the database took hours to come online… not the most ideal situation!

If you have a database that stays a bit too long in the “in recovery” state after a reboot, I would certainly recommend looking at the DBCC LOGINFO command. If you have thousands of VLFs, the solution is quite simple: back up the log, shrink the log file, and then grow it in reasonable sizes not equal to any multiple of 4GB:

BACKUP LOG [VLFHell] TO DISK = 'G:\Data\VLFHell.trn'
GO

CHECKPOINT
GO

DBCC SHRINKFILE (N'VLFHell_log' , 1)
GO

ALTER DATABASE [VLFHell]
MODIFY FILE ( NAME = N'VLFHell_log', SIZE = 2GB )
GO

DBCC LOGINFO
GO

You may need to back up the log a few times before the file can be shrunk completely. But now the number of virtual log files is reduced to just 20:

[Screenshot: DBCC LOGINFO output, 20 VLFs]
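Note that the script above only fixes the current log size; if the autogrow increment itself is still a multiple of 4GB, the next growth will hit the bug again. A sketch of also adjusting the increment (the 512MB value is just an example of a size that is not a 4GB multiple):

```sql
-- Change the autogrow increment away from any 4GB multiple.
-- 512MB is an example value; pick what fits your workload.
ALTER DATABASE [VLFHell]
MODIFY FILE ( NAME = N'VLFHell_log', FILEGROWTH = 512MB )
GO
```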

 

Conclusion

If you for some reason have configured your log file to grow in 4GB multiples, you could suffer from this bug, which can lead to long recovery times after a reboot. The bug is confirmed in SQL Server 2005, SQL Server 2008, SQL Server 2008 SP1 and SQL Server 2008 R2. In Denali, however, it is not an issue, so the bug has been fixed there.
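If you want a quick overview of VLF counts across a whole instance, a sketch like the following can help. It assumes the seven-column DBCC LOGINFO result set of SQL Server 2005 – 2008 R2 (later versions add an extra column, so the temp table would need adjusting):

```sql
-- Count VLFs in every online database (sketch for SQL Server 2005 - 2008 R2).
CREATE TABLE #loginfo
(
    FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo int, Status int, Parity tinyint, CreateLSN numeric(38,0)
)

DECLARE @db sysname
DECLARE dbs CURSOR FOR SELECT name FROM sys.databases WHERE state_desc = 'ONLINE'
OPEN dbs
FETCH NEXT FROM dbs INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #loginfo
    EXEC ('DBCC LOGINFO ([' + @db + '])')

    SELECT @db AS DatabaseName, COUNT(*) AS VLFCount FROM #loginfo
    TRUNCATE TABLE #loginfo

    FETCH NEXT FROM dbs INTO @db
END
CLOSE dbs
DEALLOCATE dbs
DROP TABLE #loginfo
```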

@HenrikSjang

Comments

Posted by Anonymous on 23 August 2011

Pingback from  Dew Drop–August 23, 2011 | Alvin Ashcraft's Morning Dew

Posted by Steve Jones on 23 August 2011

Glad this is fixed. It will be with us for years, but at least it will slowly fade away.

Posted by yusufanis on 27 August 2011

Wasn't aware of this earlier; I thought the 8 GB one was the sole bug related to growth. But when I tried this on my SQL 9.0.5057, things turned out to be different.

Here is all

New  Database after Full backup in Full recovery

--use master; drop database DbLog; create database DbLog; use DbLog

--BACKUP DATABASE [DbLog] TO  DISK = N'E:\DB\MSSQL.5\MSSQL\Backup\DbLog2.bak' WITH NOFORMAT, NOINIT,  NAME = N'DbLog-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10

name       Size_Kb  Used_Kb  filename                                fileid  groupid
DbLog      2048     1408     E:\DB\MSSQL.5\MSSQL\DATA\DbLog.mdf      1       1
DbLog_log  512      240      E:\DB\MSSQL.5\MSSQL\DATA\DbLog_log.LDF  2       0

FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         1085    2       64      0
2       262144    262144       0       0       0       0

Table Created

--create table TbData (nam varchar(50), age smallint); select * from TbData

name       Size_Kb  Used_Kb  filename                                fileid  groupid
DbLog      2048     1408     E:\DB\MSSQL.5\MSSQL\DATA\DbLog.mdf      1       1
DbLog_log  512      280      E:\DB\MSSQL.5\MSSQL\DATA\DbLog_log.LDF  2       0

FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         1085    2       64      0
2       262144    262144       0       0       0       0

Log modified to grow at 4 GB

--USE [master]; ALTER DATABASE [DbLog] MODIFY FILE ( NAME = N'DbLog_log', FILEGROWTH = 4194304KB); use DbLog

name       Size_Kb  Used_Kb  filename                                fileid  groupid
DbLog      2048     1408     E:\DB\MSSQL.5\MSSQL\DATA\DbLog.mdf      1       1
DbLog_log  512      280      E:\DB\MSSQL.5\MSSQL\DATA\DbLog_log.LDF  2       0

FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         1085    2       64      0
2       262144    262144       0       0       0       0

Inserted 1000 Rows

name       Size_Kb  Used_Kb  filename                                fileid  groupid
DbLog      2048     1408     E:\DB\MSSQL.5\MSSQL\DATA\DbLog.mdf      1       1
DbLog_log  760      736      E:\DB\MSSQL.5\MSSQL\DATA\DbLog_log.LDF  2       0

FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         1085    2       64      0
2       262144    262144       1086    2       64      0
2       253952    524288       1087    2       64      1086000000018300003

Posted by yusufanis on 27 August 2011

Things as I see after post are quite jumbled.

Let me know if I can make that more readable someway.

Posted by yusufanis on 27 August 2011

Here are the statements I used

New  Database after Full backup in Full recovery

--use master; drop database DbLog; create database DbLog; use DbLog

--BACKUP DATABASE [DbLog] TO  DISK = N'E:\DB\MSSQL.5\MSSQL\Backup\DbLog2.bak' WITH NOFORMAT, NOINIT,  NAME = N'DbLog-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10

******************

Table Created

--create table TbData (nam varchar(50), age smallint); select * from TbData

******************

Log modified to grow at 4 GB

--USE [master]; ALTER DATABASE [DbLog] MODIFY FILE ( NAME = N'DbLog_log', FILEGROWTH = 4194304KB); use DbLog

******************

Inserted 1000 Rows

******************

For all the stars ****************** this was executed

select name, size*8 'Size_Kb',
    fileproperty(name,'spaceused')*8 'Used_Kb',
    filename, fileid, groupid
from sysfiles
go

dbcc loginfo

Posted by Andrew Kelly on 28 August 2011

Just wanted to point out that this is not just an autogrow issue. If you grow the log file manually or programmatically, it behaves the same way.

Posted by Geniiius on 29 August 2011

Hi, Andrew,

Thanx for pointing that out!

Posted by Simon on 4 September 2011

You've got the issue here completely wrong, because a flat 4GB auto-growth does not cause the bug. If you had explicitly set the auto-growth manually to 4GB as in 4000MB, this bug would not have occurred.

The bug is explicitly to do with auto-growth of any size of 1,024 chunk auto-growth. When you use code to define your auto-growth, it'll set the auto-growth to 4,096MB which causes the issue.

I've got the piece of code from M$ which explicitly describes the bug where the VLFs grew by the MIN_LOG_FILE_SIZE. If you're interested, let me know and I don't mind posting the SQL engine code that causes this issue.

Just as a FYI – this problem is present in SQL Server 2005 and SQL Server 2008 but not SQL Server 2000.

Simon-413722

Posted by johntam on 7 May 2013

4GB=4096MB.  4000MB = 3.906GB.

I just got hit with VLF hell in my transaction logs for defining a 60GB auto growth.

And now I have changed this to a non-even multiple of 4GB.
