Shrinking log file

  • When I try to shrink the log file for my databases, I get the following message:

    Cannot shrink log file 2 (cm_archive_mp_Log) because all logical log files are in use.

    What should I do to successfully shrink the log file?

    Thanks.

  • It depends on what recovery model your database is in.

    If it is in simple recovery mode, run:

    BACKUP LOG databasename WITH TRUNCATE_ONLY
    GO
    CHECKPOINT
    GO

    You can run this twice to make sure; then you can shrink the log file. Use the DBCC SHRINKFILE command for this, along the lines of the sketch below, and check the correct syntax in SQL Server Books Online.
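
    A minimal sketch, using the log file name from the original post (the 50 MB target is just an example value, not from the post):

    USE cm_archive_mp
    GO
    -- shrink the logical log file down to roughly 50 MB
    DBCC SHRINKFILE (cm_archive_mp_Log, 50)
    GO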

    If your database is in full recovery mode...

    Make a backup of the database, then set the recovery model to simple. Run the commands above, then set your recovery model back to full and run a backup. REMEMBER NOT to overwrite the first backup until you have verified that everything is in order; even then, keep it a while for... backup. The whole sequence looks something like the sketch below.
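
    A minimal sketch of that sequence (SQL 2000 syntax; the backup paths and the 50 MB target are placeholders, not from the post):

    USE cm_archive_mp
    GO
    -- 1. full backup first; do NOT overwrite this one
    BACKUP DATABASE cm_archive_mp TO DISK = 'D:\backup\cm_archive_mp_before.bak'
    GO
    -- 2. switch to simple recovery
    ALTER DATABASE cm_archive_mp SET RECOVERY SIMPLE
    GO
    -- 3. truncate and shrink the log
    BACKUP LOG cm_archive_mp WITH TRUNCATE_ONLY
    CHECKPOINT
    DBCC SHRINKFILE (cm_archive_mp_Log, 50)
    GO
    -- 4. back to full recovery, then take a fresh full backup
    ALTER DATABASE cm_archive_mp SET RECOVERY FULL
    GO
    BACKUP DATABASE cm_archive_mp TO DISK = 'D:\backup\cm_archive_mp_after.bak'
    GO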

    Lukas Botha


  • If you're in Simple Recovery mode, a backup statement is not necessary.  CHECKPOINT will commit transactions to the database and truncate the log.

    If you are in Full recovery mode, I would recommend that you do NOT change to Simple Recovery mode for this process.  It will destroy point-in-time recoverability, and it is not necessary, or even advantageous.  Also, if you are in full recovery mode, do NOT use the "WITH TRUNCATE_ONLY" clause, as this will have the same results as changing to simple.  Instead, simply run a normal transaction log backup, which will back up your transactions and truncate the log.  Then you can try the shrink again (see the sketch below).  You MAY get the message saying that all logical log files are in use again.  If so, you can simply try again later, or there are procedures available to "re-arrange" your transaction log to shift all transactions forward.  I have one if you are interested.
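
    For example (a sketch; the disk path and the 50 MB target are placeholders):

    -- a normal log backup truncates the inactive portion without breaking the backup chain
    BACKUP LOG cm_archive_mp TO DISK = 'D:\backup\cm_archive_mp_log.trn'
    GO
    -- then retry the shrink
    USE cm_archive_mp
    GO
    DBCC SHRINKFILE (cm_archive_mp_Log, 50)
    GO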

    Steve

  • Hi Steve

    As a matter of interest, have you ever tried getting a log file shrunk?

    It would also help if you READ the post...

    Full recovery model.

    STEP 1: BACKUP Database

    STEP 2: set to simple

    STEP 3: shrink log file

    STEP 4: set to full recovery

    STEP 5: BACKUP DATABASE

    I have had a number of problems with databases where we run full recovery and, during a reindex, the log file grew to 125% of the size of the database. The quickest and easiest fix was the above-mentioned process.

    Lukas Botha

  • Lukas,

    Yes, I have shrunk log files a number of times, and have never found it necessary to change recovery models to achieve it.  I did read the post, and I did see that you recommended a backup afterwards, which will re-establish recoverability.  However, the only thing you are accomplishing by setting the recovery model to simple is truncating the log.  A transaction log backup will accomplish that without compromising recoverability.  The bottom line is that it is not necessary to change the recovery model to simple in order to shrink the log.

    It's obvious that I stepped on your toes with my post.  I just felt that the advice given was not optimal and wanted to clear things up a bit.  Believe me, there have been times when I have posted something and been corrected by other members.

    I apologize if I have offended you.  That was not my intention.

    Steve

  • Hi All,

    Please excuse my ignorance.  It's probably staring me in the face; I've read through BOL on backups, but I cannot find where to change the recovery model from full to simple or back again.  What am I missing?  SQL7 and SQL2000.

     

    Thanks.

    -gol

  • For jerryol:

    From BOL under ALTER DATABASE

    ALTER DATABASE database
    { ADD FILE < filespec > [ ,...n ] [ TO FILEGROUP filegroup_name ]
    | ADD LOG FILE < filespec > [ ,...n ]
    | REMOVE FILE logical_file_name
    | ADD FILEGROUP filegroup_name
    | REMOVE FILEGROUP filegroup_name
    | MODIFY FILE < filespec >
    | MODIFY NAME = new_dbname
    | MODIFY FILEGROUP filegroup_name { filegroup_property | NAME = new_filegroup_name }
    | SET < optionspec > [ ,...n ] [ WITH < termination > ]
    | COLLATE < collation_name >
    }

    < optionspec > ::=
        < recovery_option > ::=
            RECOVERY { FULL | BULK_LOGGED | SIMPLE }
            | TORN_PAGE_DETECTION { ON | OFF }
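
    So, concretely, on SQL 2000 (SQL 7 has no recovery models; the rough equivalent of simple there is the 'trunc. log on chkpt.' database option):

    -- SQL 2000: switch recovery models directly
    ALTER DATABASE cm_archive_mp SET RECOVERY SIMPLE
    GO
    ALTER DATABASE cm_archive_mp SET RECOVERY FULL
    GO
    -- SQL 7 approximation of simple recovery (not a true recovery model)
    EXEC sp_dboption 'cm_archive_mp', 'trunc. log on chkpt.', 'true'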

     

    Lukas Botha

  • Sorry Steve, I had a hard day yesterday. You must be one of the lucky ones.

    More than once I have had problems with databases in full recovery where the log files do not want to shrink no matter what you try, until you 'reset' their recovery model. This has remained an enigma, but the solution I suggested has resolved the log file shrinkage in every case I have had.

    I agree with you completely that the way you suggest is the recommended way; it just has not always been successful in my case. Your approach should solve the problem in more than 80% of cases, though.

    I apologise again.

    Lukas

    Lukas Botha

  • Here's a stored procedure that I got here at SQLServerCentral.com.  I made some changes to it, which are identified with comments.  This thing works great for those stubborn logs.  Note that it does use the truncate_only option by default, and since it can iterate a number of times, you probably wouldn't want to change that....


    use master

    go

    if object_id( 'sp_force_shrink_log' ) is not null drop proc sp_force_shrink_log

    go

    create proc sp_force_shrink_log

    /*

    *************************************************************

    Name: sp_force_shrink_log

    Description:

      Shrink transaction log of the current database in SQL Server 7.0.

    Switch context to proper db to execute.

    Usage: exec sp_force_shrink_log <target_percent>, <target MB>, <iterations>, <backup options>

       exec pubs..sp_force_shrink_log

    Author:  Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000

       zanevsky@azdatabases.com

    Input Params:

    -------------

    @target_percent  tinyint. default = 0. Target percentage of remaining shrinkable

          space. Defaults to max possible.

    @target_size_MB  int. default = 10. Target size of final log in MB.

    @max_iterations  int. default = 1000. Number of loops (max) to run proc through.

    @backup_log_opt  nvarchar(1000). default = 'with truncate_only'. Backup options.

    Output Params:

    --------------

    Return:

    Results:

    ---------

    Locals:

    --------

    @err  Holds error value

    Modifications:

    --------------

    3/2004 - Steve Phelps - made a number of small changes, all commented with the initials SBP

    *************************************************************

    */

        @target_percent tinyint = 0,

        @target_size_MB int = 0,

        @max_iterations int = 1000,

        @backup_log_opt nvarchar(1000) = 'with truncate_only'

    as

    --  test stuff SBP

    -- use PUBS

    -- GO

    --

    -- declare

    --     @target_percent tinyint ,

    --     @target_size_MB int ,

    --     @max_iterations int ,

    --     @backup_log_opt nvarchar(1000)

    --

    -- set    @target_percent  = 0

    -- set    @target_size_MB  = 0

    -- set    @max_iterations = 1000

    -- set    @backup_log_opt  = 'with truncate_only'

    --

    -- set    @target_size_MB  = 20

    -- end test stuff

    set nocount on

    declare

     @db         sysname,

     @last_row   int,

     @log_size   decimal(15,2),

     @unused1    decimal(15,2),

     @unused     decimal(15,2),

     @shrinkable decimal(15,2),

     @iteration  int,

     @file_max   int,

     @file     int,

     @fileid     varchar(5),

     @prev_max_iterations int,

     @command varchar(500)

    select  @db = db_name(),

            @iteration = 0,

        @prev_max_iterations = 2147483647  -- max value for int data type (2^31 - 1, spelled out because ^ is bitwise XOR in T-SQL), will be reset within the loop SBP

    create table #loginfo

     (  id          int identity,

        FileId      int,

        FileSize    numeric(22,0),

        StartOffset numeric(22,0),

        FSeqNo      int,

        Status      int,

        Parity      smallint,

        TimeorLSN   varchar(25) )

    create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )

    create table #logfiles

     ( id int identity(1,1),

      fileid varchar(5) not null )

    insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40       

    select @file_max = @@rowcount

    if object_id( 'table_to_force_shrink_log' ) is null

     exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

    insert  #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, TimeorLSN ) exec ( 'dbcc loginfo' )

    select  @last_row = @@rowcount

    print 'Max iterations = ' + rtrim(cast(@max_iterations as varchar(20)))

    select  @log_size = sum( FileSize ) / 1048576.00,

            @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,

            @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00

    from    #loginfo

    select  @unused1 = @unused -- save for later

    -- changed this so that it will print with rest of output SBP

    print '

    iteration ........... = ' + cast(@iteration as varchar(10))  + '

    log size, MB ........ = ' + cast(@log_size as varchar(10)) + '

    unused log, MB ...... = ' + cast(@unused as varchar(10)) + '

    shrinkable log, MB .. = ' + cast(@shrinkable as varchar(10)) + '

    shrinkable % ........ = ' + cast(convert( decimal(6,2), @shrinkable * 100 / @log_size ) as varchar(10))

    -- original proc did not consider @target_size_MB, only @target_percent.  modified so that if @target_size_MB is

    -- entered, it will take precedence by calculating a new @target_percent. SBP

    if @target_size_MB > 0

      select @target_percent = (@target_size_MB / @log_size) * 100

    else

      select @target_size_MB = 10

    -- changed @target_percent to + 1, because many times the end result is slightly larger than the target. SBP

    while @shrinkable * 100 / @log_size > (@target_percent + 1)

      and @iteration < @max_iterations

     begin

        select  @iteration = @iteration + 1 -- this is just a precaution

        exec( 'insert table_to_force_shrink_log select name from sysobjects

               delete table_to_force_shrink_log')

        select @file = 0

        while @file < @file_max begin

            select @file = @file + 1

            select @fileid = fileid from #logfiles where id = @file

    -- added @target_size_MB as a parm in dbcc shrinkfile. also moved into @command. SBP

        select @command = 'dbcc shrinkfile( ' + @fileid + ',' + rtrim(cast(@target_size_MB as varchar(10))) + ')'

        print @command

        exec (@command)

        end

        exec( 'backup log [' + @db + '] ' + @backup_log_opt )

        truncate table #loginfo

        insert  #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, TimeorLSN ) exec ( 'dbcc loginfo' )

        select  @last_row = @@rowcount

    -- The most iterations we really need to do is the number of logical log files, and this should change if the

    -- log is shrinking.  Therefore, reset @max_iterations within the loop.  SBP

      select @max_iterations = count(*) from #loginfo

    -- If the number of logical log files did not change from last iteration, get out.  SBP

      if @max_iterations = @prev_max_iterations

       select @max_iterations = 0

      else

       select @prev_max_iterations = @max_iterations

      print 'Max iterations = ' + rtrim(cast(@max_iterations as varchar(20)))

        select  @log_size = sum( FileSize ) / 1048576.00,

                @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,

         @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00

        from    #loginfo

    print '

    iteration ........... = ' + cast(@iteration as varchar(10))  + '

    log size, MB ........ = ' + cast(@log_size as varchar(10)) + '

    unused log, MB ...... = ' + cast(@unused as varchar(10)) + '

    shrinkable log, MB .. = ' + cast(@shrinkable as varchar(10)) + '

    shrinkable % ........ = ' + cast(convert( decimal(6,2), @shrinkable * 100 / @log_size ) as varchar(10))

    end

    if @unused1 < @unused

    select  'After ' + convert( varchar, @iteration ) +

            ' iterations the unused portion of the log has grown from ' +

            convert( varchar, @unused1 ) + ' MB to ' +

            convert( varchar, @unused ) + ' MB.'

    union all

    select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10

    union all

    select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10

    union all

    select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10

    else

    select  'It took ' + convert( varchar, @iteration ) +

            ' iterations to shrink the unused portion of the log from ' +

            convert( varchar, @unused1 ) + ' MB to ' +

            convert( varchar, @unused ) + ' MB'

    -- cleanup

    drop table #loginfo

    drop table #logfiles

    exec( 'drop table table_to_force_shrink_log' )

    go

    if object_id( 'sp_force_shrink_log') Is Null

     select 'sp_force_shrink_log Not Created'

    else

     select 'sp_force_shrink_log Created'

    go
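
    To run it against the database from the original post, it would be something like this (the 50 MB target is just an example):

    USE cm_archive_mp
    GO
    EXEC sp_force_shrink_log @target_size_MB = 50
    GO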

     

  • I've also found in the past that on some "stubborn" log files that wouldn't shrink, running DBCC CHECKDB against the database seems to help. I'm not sure why, but it appears to free the log space that I had previously been unable to reclaim.
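
    For example (using the database name from the original post):

    DBCC CHECKDB ('cm_archive_mp')
    GO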

  • Thanks for all the help.  I had been looking for a checkbox on the properties page.  Of course.

    JerryO

    -gol
