Shrinking a transaction log file

  • Hello,

           Suppose I have a transaction log file that has grown to 5 GB. I backed it up to disk and the used portion of the log dropped to 1 MB, so there should be about 4.99 GB free. I tried to shrink the log file by "truncating free space from the log file", but the file size did not decrease (or it decreased insignificantly, by about 1 MB).

            Sometimes it works and sometimes it does not reduce the file size at all. The database is not used for replication, log shipping, or mirroring. Do you know what can affect the shrink-log action?

             Also, is there a way (a DMV, system stored procedure, etc.) to see which process (SPID) is actively "holding" the transaction log? For example, a session with an open transaction that is continuously consuming log space.

    Thanks,

    Del piero
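
    One way to answer the "who is holding the log" question, sketched with the standard system objects (the database name below is a placeholder; the DMVs require SQL Server 2005 or later):

    ```sql
    -- Why can't the log be reused or shrunk right now?
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';  -- hypothetical database name

    -- Oldest active transaction in the database, including its SPID.
    DBCC OPENTRAN (N'YourDatabase');

    -- Log bytes used per session in that database, largest consumers first.
    SELECT s.session_id,
           t.database_transaction_log_bytes_used
    FROM sys.dm_tran_database_transactions AS t
    JOIN sys.dm_tran_session_transactions AS s
      ON s.transaction_id = t.transaction_id
    WHERE t.database_id = DB_ID(N'YourDatabase')
    ORDER BY t.database_transaction_log_bytes_used DESC;
    ```

    `log_reuse_wait_desc` tells you *why* the log cannot shrink (ACTIVE_TRANSACTION, LOG_BACKUP, REPLICATION, ...); the other two queries point at the specific session.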


  • I normally handle this with a scheduled DBCC SHRINKFILE. Set the target size to 1 (MB) and SQL Server will squeeze the file down as tightly as possible.

    I don't recommend this for data (.mdf) files, as the data file will have to auto-grow as soon as anything is done in the database, which can take a considerable amount of time for large files.
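
    The scheduled shrink described above would look something like this (the database and logical file names are placeholders; use sp_helpfile to find the real logical name):

    ```sql
    -- Shrink the log's logical file to (about) 1 MB.
    -- Most effective when run right after a log backup.
    USE [YourDatabase];
    DBCC SHRINKFILE (N'YourDatabase_Log', 1);
    ```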

  • I have used this script on a 176 GB file (database and log file), which it reduced to 100 MB in roughly 20 minutes after a backup. Just remember to fill in the database name and the log's logical file name. I think you'll be impressed. I can't take credit for this script; I just managed to find it, and it has fixed many databases that others couldn't do anything with.

    Let me know how you get on with it.

    /*
    Shrink a named transaction log file belonging to a database.

    Originally found at:
    http://support.microsoft.com/support/kb/articles/q256/6/50.asp

    Changes:

    28.08.2001
    Modified the inner loop so it tests the elapsed time, so long overruns do not happen.
    Modified the inner loop so it has a fixed minimum quantity, so there is no skip-in/skip-out.

    29.08.2001
    Modified the inner loop so it has a dynamic minimum quantity, to allow faster shrinkage.

    24.01.2002
    Modified the USE statement so it uses brackets around the database name.
    Modified the @TruncLog variable so it uses brackets around the database name.

    31.05.2002
    Modified the code to use PRINT instead of SELECT in several cases.
    Modified the code to use @MaxCount instead of two unclear rules.
    Modified the code to use @Factor instead of several hard-coded values, and commented its use.
    Moved the configuration and @Counter init code to before the start of the first loop, to avoid repetition.
    Modified the code to display the process runtime in seconds rather than minutes.
    */

    SET NOCOUNT ON

    DECLARE @LogicalFileName SYSNAME,
            @MaxMinutes INT,
            @NewSize INT,
            @Factor FLOAT

    /*
    The process has several control parameters. Most of the time you only need to worry about the
    first four (database name, logical file name, time limit, target size), as these are the big
    controls; the fifth, @Factor, is simply a fine-tuning control which rarely needs to come into play.
    */

    -- This is the name of the database for which the log will be shrunk.
    USE [databasename]

    -- Use sp_helpfile to identify the logical file name that you want to shrink.
    SET @LogicalFileName = 'database_Log';

    -- Limit on the time allowed to wrap the log, in minutes.
    SET @MaxMinutes = 5;

    -- Ideal size of the log file, in MB.
    SET @NewSize = 100;

    /*
    @Factor determines the maximum number of pages to pad out, based on the original number of
    pages in use (a single page = 8 KB). Values in the range 1.0 - 0.8 seem to work well for many
    databases. Increasing the value raises the maximum number of pages allowed to be padded, which
    should force larger amounts of data to be dropped before the process finishes; this often
    speeds up shrinking very large databases before the timer runs out. Decreasing the value
    lowers that maximum, which forces less work to be done; this often helps smaller databases
    shrink to minimum size when larger values were actually expanding them.
    */
    SET @Factor = 1.0;

    /*
    All code after this point is driven by these parameters and should not require editing unless
    you need to fix a bug in the padding/shrinking process itself.
    */

    -- Setup / initialize.
    DECLARE @OriginalSize INT,
            @StringData VARCHAR(500)

    SELECT @OriginalSize = size -- in 8 KB pages
    FROM sysfiles
    WHERE name = @LogicalFileName;

    SELECT @StringData = 'Original size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30), @OriginalSize) + ' 8K pages, or ' +
           CONVERT(VARCHAR(30), (@OriginalSize * 8 / 1024)) + ' MB'
    FROM sysfiles
    WHERE name = @LogicalFileName;

    PRINT @StringData;
    PRINT ''

    -- Drop the temporary table if it already exists.
    IF ( OBJECT_ID('[dbo].[DummyTrans]') IS NOT NULL )
        DROP TABLE [DummyTrans]

    CREATE TABLE [DummyTrans] ( [DummyColumn] CHAR(8000) NOT NULL );

    -- Wrap the log and truncate it.
    DECLARE @Counter INT,
            @MaxCount INT,
            @StartTime DATETIME,
            @TruncLog VARCHAR(500)

    -- Try an initial shrink.
    DBCC SHRINKFILE (@LogicalFileName, @NewSize)

    SET @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY';
    EXEC (@TruncLog)

    -- Configure the limiter: cap the pad count at 50,000 pages.
    IF @OriginalSize * @Factor > 50000
        SET @MaxCount = 50000
    ELSE
        SET @MaxCount = @OriginalSize * @Factor

    -- Attempt to shrink down the log file.
    PRINT 'Maximum quantity : ' + CAST(@MaxCount AS VARCHAR(10))
    PRINT 'Maximum time     : ' + CAST(@MaxMinutes AS VARCHAR(10)) + ' minutes (' + CAST(@MaxMinutes * 60 AS VARCHAR(10)) + ' seconds)'
    PRINT ''

    SET @Counter = 0;
    SET @StartTime = GETDATE();

    -- Loop the padding code to reduce the log while:
    --   we are within the time limit, and
    --   the log has not yet been shrunk enough.
    WHILE (
        (@MaxMinutes * 60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
        (@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
        ((@OriginalSize * 8 / 1024) > @NewSize)
    )
    BEGIN -- Outer loop.
        -- Pad out the log file a page at a time while:
        --   the number of pages padded does not exceed our maximum page padding limit,
        --   we are within the time limit, and
        --   the log has not yet been shrunk enough.
        WHILE (
            (@Counter < @MaxCount) AND
            (@MaxMinutes * 60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
            (@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
            ((@OriginalSize * 8 / 1024) > @NewSize)
        )
        BEGIN -- Inner loop.
            INSERT INTO DummyTrans VALUES ('Fill Log') -- Because it is a CHAR(8000) field, this inserts 8000 bytes.
            DELETE FROM DummyTrans
            SELECT @Counter = @Counter + 1
            -- Every 1,000 cycles, tell the user what is going on.
            IF ROUND(@Counter, -3) = @Counter
            BEGIN
                PRINT 'Padded ' + LTRIM(CAST(@Counter * 8 AS VARCHAR(10))) + 'K @ ' +
                      LTRIM(CAST(DATEDIFF(ss, @StartTime, GETDATE()) AS VARCHAR(10))) + ' seconds';
            END
        END
        -- See if a truncation of the log shrinks it.
        EXEC (@TruncLog)
    END

    PRINT ''

    SELECT @StringData = 'Final size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30), size) + ' 8K pages, or ' +
           CONVERT(VARCHAR(30), (size * 8 / 1024)) + ' MB'
    FROM sysfiles
    WHERE name = @LogicalFileName;

    PRINT @StringData
    PRINT ''

    DROP TABLE DummyTrans;

    PRINT '*** Perform a full database backup ***'

    SET NOCOUNT OFF

  • In your script, the log is truncated. Is there a way to guarantee shrinking a transaction log file without truncating it? I just backed up the log, but I did not want to truncate it.

    Thanks,

    Del piero


  • You can use the DBCC SHRINKFILE command to shrink the transaction log.

    Gethyn Ellis
    www.gethynellis.com

  • If it's not a production database but just a test database where the log has grown because you don't have a maintenance plan set up to back it up regularly, then you could try this:

    BACKUP LOG <databasename> WITH TRUNCATE_ONLY

    This will NOT actually back up the log, but it will clear it out so you can shrink it. Do a full backup right afterwards, as this step makes any previous incremental/log backups unrestorable.
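
    The full sequence sketched out, with placeholder database, file, and path names (note that TRUNCATE_ONLY works on SQL Server 2000/2005 only; it was removed in SQL Server 2008):

    ```sql
    -- Clear the log without backing it up (breaks the log backup chain).
    BACKUP LOG [YourDatabase] WITH TRUNCATE_ONLY;

    -- Now the shrink can reclaim the space.
    USE [YourDatabase];
    DBCC SHRINKFILE (N'YourDatabase_Log', 1);

    -- Take a full backup immediately to restart the backup chain.
    BACKUP DATABASE [YourDatabase] TO DISK = N'C:\Backups\YourDatabase.bak';
    ```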

  • There is a way to shrink the log without truncating it (i.e. without breaking the LSN chain). It's easy enough, but it has to be done manually.

    It depends on how active your database is and how soon transactions close (commit/rollback). When your log is big, open transactions record their data at or near the end of the file, so if shrinking the file with "release unused space" did not work, it means only one thing: at the end of the file there are pages still marked as in use by the transaction log. The way to free them is a backup. So a simple sequence of backing up the transaction log, then shrinking the file with "release unused space" only, will get you there quickly enough; just make sure the shrink runs after the log backup. Repeat several times and you'll get your log where it needs to be, and the LSNs will be as they are supposed to be for restore.
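
    The backup-then-shrink pair described above, sketched with placeholder names (DBCC SHRINKFILE's TRUNCATEONLY option is the "release unused space" action):

    ```sql
    -- Repeat this pair until the file reaches the desired size; the active
    -- portion of the log moves toward the front of the file with each pass.
    BACKUP LOG [YourDatabase] TO DISK = N'C:\Backups\YourDatabase_log.trn';
    DBCC SHRINKFILE (N'YourDatabase_Log', TRUNCATEONLY);
    ```

    Because the log is backed up rather than truncated, the LSN chain stays intact and point-in-time restores remain possible.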
