DB log growth issue

  • Hi all,

    One of my databases is about 400GB and is set up with transactional replication.

    Every week I run a maintenance job to reorganize the indexes and update the statistics. The job finishes fine, but the log grows to almost the same size as before, or sometimes even larger.

    I then have to shrink the log file. Please let me know how to overcome this log file issue.

    Thanks

    Dave

  • Dave

    The problem with a maintenance plan is that you don't have much control over what it does. What you want to do is rebuild only the indexes that need it. There are plenty of scripts and tools out there that can do that for you, including Ola Hallengren's comprehensive and universally acclaimed solution, which is free to use. Another thing you can try is to increase the frequency of your log backups while index maintenance is running, but the log will still grow to at least the size of the largest index that you rebuild.
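
    As a rough sketch of "rebuild only the indexes that need it", a query against sys.dm_db_index_physical_stats can list the candidates. The thresholds below are common rules of thumb, not authoritative values; adjust them for your workload.

    ```sql
    -- List candidate indexes in the current database; LIMITED mode keeps the scan cheap
    SELECT s.name AS schema_name,
           o.name AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count,
           CASE WHEN ips.avg_fragmentation_in_percent > 30
                THEN 'REBUILD' ELSE 'REORGANIZE' END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
    JOIN sys.objects o ON i.object_id = o.object_id
    JOIN sys.schemas s ON o.schema_id = s.schema_id
    WHERE o.[type] = 'U'
      AND i.index_id > 0                        -- skip heaps
      AND ips.page_count > 1000                 -- ignore tiny indexes
      AND ips.avg_fragmentation_in_percent > 10
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```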

    John

  • Hope this helps a bit.

    http://sqlserverperformance.wordpress.com/2010/06/22/detecting-and-reducing-vlfs-in-sql-server-20082008-r2-transaction-log-files/

    http://www.sqlservercentral.com/articles/Administration/64582/

    Step 1 ==> Check the VLF count using DBCC LOGINFO

    Step 2 ==> Back up the transaction log

    Step 3 ==> Shrink the transaction log to its minimum size using the TRUNCATEONLY option

    Step 4 ==> Check the VLF count again using DBCC LOGINFO

    Step 5 ==> Grow the log file back to the desired size; you may want to grow the transaction log in 8GB increments (see the start of this article):

    http://www.sqlserver-training.com/vlf-virtual-log-file-sql-server-performance/-

    Step 6 ==> Check the VLF count again using DBCC LOGINFO

    Step 7 ==> Schedule transaction log backups regularly for your situation; the more often they run, the less likely the log is to grow. Maybe every 15 minutes.

    Step 8 ==> Modify the reindex job to back up the transaction log just before reindexing.

    Create or find a smart index defrag job that reorganizes indexes when fragmentation is between 10% and 30%, or rebuilds them when fragmentation is over 30%, to minimize the load on the transaction log.
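
    The steps above can be sketched in T-SQL as follows; the database name, logical log file name, and backup path are placeholders, so substitute your own and run each step deliberately rather than as one blind batch.

    ```sql
    -- Steps 1, 4 and 6: inspect the VLF count (one result row per VLF)
    DBCC LOGINFO;

    -- Step 2: back up the transaction log
    BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_log.trn';

    -- Step 3: shrink the log file (use the logical name from sys.database_files)
    DBCC SHRINKFILE (N'MyDB_log', TRUNCATEONLY);

    -- Step 5: grow the log back in controlled increments so the new VLFs
    -- are created at a sensible size
    ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 8GB);
    ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 16GB);
    -- ...and so on until the desired size is reached
    ```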

    I'm using this code, which is a hodgepodge of scripts I've found plus my own twists.

    Maybe you can modify it for your needs.

    DECLARE @command VARCHAR(4000)
    DECLARE @databasename SYSNAME
    DECLARE @schemaname VARCHAR(130)
    DECLARE @objectname VARCHAR(130)
    DECLARE @indexname VARCHAR(130)

    SET @databasename = DB_NAME()  -- run this in the target database

    /* REORGANIZE OBJECTS */
    DECLARE @iOrg INT, @maxOrg INT
    DECLARE @reorganize TABLE(
        rowId INT IDENTITY(1,1),
        schemaname VARCHAR(150),
        objectname VARCHAR(150),
        indexname VARCHAR(130)
    )

    /* REBUILD OBJECTS */
    DECLARE @iBuild INT, @maxBuild INT
    DECLARE @rebuild TABLE(
        rowId INT IDENTITY(1,1),
        schemaname VARCHAR(150),
        objectname VARCHAR(150),
        indexname VARCHAR(130)
    )

    /***************************************
    ** REORGANIZE
    ***************************************/
    INSERT INTO @reorganize (indexname, schemaname, objectname)
    SELECT i.name, s.name, o.name --, ips.avg_fragmentation_in_percent, ips.page_count
    FROM sys.objects o
    JOIN sys.schemas s ON o.schema_id = s.schema_id
    JOIN sys.indexes i ON o.object_id = i.object_id
    JOIN sys.dm_db_index_physical_stats(DB_ID(@databasename), NULL, NULL, NULL, 'LIMITED') AS ips
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE o.[type] = 'U'                           -- is a user table
      AND ips.page_count > 30                      -- is big enough to make a difference
      AND i.index_id > 0                           -- is not a heap
      AND ips.avg_fragmentation_in_percent BETWEEN 10 AND 30
      AND ips.index_type_desc NOT LIKE '%XML%'     -- skip XML indexes

    SELECT @iOrg = MIN(rowId), @maxOrg = MAX(rowId) FROM @reorganize

    WHILE @iOrg <= @maxOrg
    BEGIN
        SELECT @indexname = indexname, @schemaname = schemaname, @objectname = objectname
        FROM @reorganize WHERE rowId = @iOrg

        SET @command = 'ALTER INDEX ' + QUOTENAME(@indexname) + ' ON '
            + QUOTENAME(@databasename) + '.' + QUOTENAME(@schemaname) + '.' + QUOTENAME(@objectname)
            + ' REORGANIZE';
        EXEC (@command)

        SET @iOrg = @iOrg + 1
    END

    /***************************************
    ** REBUILD
    ***************************************/
    INSERT INTO @rebuild (indexname, schemaname, objectname)
    SELECT i.name, s.name, o.name
    FROM sys.objects o
    JOIN sys.schemas s ON o.schema_id = s.schema_id
    JOIN sys.indexes i ON o.object_id = i.object_id
    JOIN sys.dm_db_index_physical_stats(DB_ID(@databasename), NULL, NULL, NULL, 'LIMITED') AS ips
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE o.[type] = 'U'                           -- is a user table
      AND ips.page_count > 30                      -- is big enough to make a difference
      AND i.index_id > 0                           -- is not a heap
      AND ips.avg_fragmentation_in_percent > 30
      AND ips.index_type_desc NOT LIKE '%XML%'     -- skip XML indexes

    SELECT @iBuild = MIN(rowId), @maxBuild = MAX(rowId) FROM @rebuild

    WHILE @iBuild <= @maxBuild
    BEGIN
        SELECT @indexname = indexname, @schemaname = schemaname, @objectname = objectname
        FROM @rebuild WHERE rowId = @iBuild

        -- Note: ONLINE index rebuilds require Enterprise Edition
        SET @command = 'ALTER INDEX ' + QUOTENAME(@indexname) + ' ON '
            + QUOTENAME(@databasename) + '.' + QUOTENAME(@schemaname) + '.' + QUOTENAME(@objectname)
            + ' REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON)';
        EXEC (@command)

        SET @iBuild = @iBuild + 1
    END

  • My next maintenance plan is scheduled to run in two weeks, so I will review the links, articles, and scripts you all provided and hopefully come up with a solution before then.

    Thanks, all.

    Happy Thanksgiving 🙂

  • Dave, what problem are you trying to fix? What you are describing is normal. Unless you are running out of disk space, there is not a problem here.

    Delizat, your post contains some misinformation, and I sure don't understand why you are doing some of these things!

    delizat (11/25/2014)


    Hope this helps a bit.

    http://sqlserverperformance.wordpress.com/2010/06/22/detecting-and-reducing-vlfs-in-sql-server-20082008-r2-transaction-log-files/

    http://www.sqlservercentral.com/articles/Administration/64582/

    Step 1 ==> Check the VLF count using DBCC LOGINFO

    Step 2 ==> Back up the transaction log

    Step 3 ==> Shrink the transaction log to its minimum size using the TRUNCATEONLY option

    Step 4 ==> Check the VLF count again using DBCC LOGINFO

    Step 5 ==> Grow the log file back to the desired size; you may want to grow the transaction log in 8GB increments (see the start of the article)

    http://www.sqlserver-training.com/vlf-virtual-log-file-sql-server-performance/-

    Really? Backup, Shrink, Grow?? Does this make sense? Grow the file to a size that is needed, and leave it there.

    As an example, one of our production databases has a data file size of 180GB and a log file size of 60GB. These were set when we upgraded to SQL 2012 almost 12 months ago.

    The VLF count hasn't changed, and the log file has had a total of 2 auto-growths.

    The link you referenced was very good, but it was based upon this posting by Kimberly Tripp.

    http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

    Step 7 ==> Schedule transaction log backups regularly for your situation; the more often they run, the less likely the log is to grow. Maybe every 15 minutes.

    What's the primary reason to take a log backup? Keeping the size in check is a secondary benefit, but the ability to perform a point-in-time restore is the primary reason.

    To summarize, set the file size to what is required. Free space is not a bad thing. It's normal.
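
    Presizing is a one-time command; the database name, logical file name, and sizes here are placeholders for illustration.

    ```sql
    -- Size the log once to cover the largest expected operation (for example,
    -- the biggest index rebuild), set a sane fixed growth increment, then leave it alone.
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = N'MyDB_log', SIZE = 60GB, FILEGROWTH = 4GB);
    ```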

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Michael L John

    I should have been more specific that this is a one-time process for reining in an unmanaged transaction log's VLFs. Still, don't call it "misinformation" unless you are sure.

    You can see another post by Kimberly Tripp describing the same techniques at:

    http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/

    or

    http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/07/25/sql-2000-yes-lots-of-vlf-s-are-bad-improve-the-performance-of-your-triggers-and-log-backups-on-2000.aspx

    Since the poster has a transaction log that has grown out of control, I've shown how to get log fragmentation and VLFs back under control, and how to prevent it from happening again, in a very short blog post for such a complicated subject.

  • Thanks John,

    Yes, part of the problem is that the disk keeps filling up, but I still think there are other issues, as I'm running the same maintenance plan on many other servers, some with databases bigger than this one (400GB), and have no log issues.

    So I need to get this log file under control.

    I do have the option to move the log file to the bigger SAN drive, but I'd much rather find the root cause.

    So I will review your link as well.

    Regards.

  • If this is a one time process to get things back in control, Ok, my apologies.

    But, as I read your post, it seemed as if it was your normal process.

    So, back to the original point. Dave, are you trying to fix something that is normal?

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • qew420 (11/25/2014)


    My next maintenance plan is scheduled to run in two weeks, so I will review the links, articles, and scripts you all provided and hopefully come up with a solution before then.

    Thanks, all.

    Happy Thanksgiving 🙂

    What is the backup plan that you have with this?

    Also, I agree with the others... using a built-in maintenance plan will defrag all of the indexes, whether they need it or not. Most indexes whose leading column is ever-increasing and immutable will almost never become fragmented in normal use. That can cover the clustered indexes (which are the table data itself) of some pretty large tables.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • It's daily.

  • I've got a great reindex routine I've written and it's free. Tons of functionality, and very easy to get up and running.

    Just click the little green guy below.

    Watch my free SQL Server Tutorials at:
    http://MidnightDBA.com
    Blog Author of:
    DBA Rant – http://www.MidnightDBA.com/DBARant

    Minion Maintenance is FREE:
