Puzzled -- log space errors in SIMPLE mode

  • I have a db table that requires selective pruning of several million rows.

I picked a quiet time (Sunday night), backed up the db, then switched to SIMPLE and began my delete query. About half an hour later I started receiving errors

    "The transaction log for database 'dbname' is full..."

    "Automatic checkpointing is disabled in database 'dbname' because the log is out of space. ..."

and the delete query failed. It was my understanding that a SIMPLE recovery model should not grow the log file. What is happening here?

    ...

    -- FORTRAN manual for Xerox Computers --

  • Simple recovery doesn't mean the log won't grow. Simple recovery only means that inactive portions of the log are automatically marked as reusable when a checkpoint runs.

Since you're running a single delete, the log can't be reused until that delete has finished. As such, the log needs to be large enough to hold all the log records for the delete (which include an image of each deleted row, for use if the transaction rolls back). So the log will need to be at least as large as the amount of data you're deleting.

    You can either give the log more space or delete in small batches and run checkpoint between the delete batches to allow the log manager to mark inactive portions of the log as reusable.
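To make that concrete, a minimal sketch of the batched approach (the table name, batch size and filter here are placeholder examples, not from your system):

```sql
-- Delete in small batches so the log can be reused between batches.
-- dbo.SomeTable, the 10000 batch size and the WHERE clause are hypothetical.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.SomeTable
    WHERE SomeDate < '20130101';

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT; -- in SIMPLE recovery, marks inactive log records as reusable
END;
```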

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • jay-h (2/10/2014)


It was my understanding that a SIMPLE recovery model should not grow the log file. What is happening here?

The recovery model of the database is a separate setting from the Autogrowth/Maxsize settings you configure for the database files.
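If you want to see both side by side, something like this shows the recovery model next to each file's growth settings (assuming the database is called 'dbname', as in the error message):

```sql
-- Recovery model is a database-level property; autogrowth and max size
-- are per-file properties. 'dbname' is taken from the error message above.
SELECT d.name,
       d.recovery_model_desc,
       mf.name     AS logical_file_name,
       mf.size * 8 AS size_kb,       -- size is stored in 8KB pages
       mf.growth,
       mf.max_size                   -- -1 means unlimited growth
FROM sys.databases d
JOIN sys.master_files mf
    ON mf.database_id = d.database_id
WHERE d.name = 'dbname';
```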

    ---------------------------------------------------------

    It takes a minimal capacity for rational thought to see that the corporate 'free press' is a structurally irrational and biased, and extremely violent, system of elite propaganda.
David Edwards - Media Lens

    Society has varying and conflicting interests; what is called objectivity is the disguise of one of these interests - that of neutrality. But neutrality is a fiction in an unneutral world. There are victims, there are executioners, and there are bystanders... and the 'objectivity' of the bystander calls for inaction while other heads fall.
    Howard Zinn

  • Thanks for the insights. I now see that the massive single statement was the issue.

    ...

    -- FORTRAN manual for Xerox Computers --

I used Itzik Ben-Gan's technique of deleting in chunks with TOP, and added a shrink/truncate step to the script to run every 30 iterations, when I had 50 million rows to delete from a table in a SIMPLE-recovery database.

The database was backed up first, so I had a recovery strategy, and it is a history database that is only updated by a weekly SSIS job, so there was no risk of incoming transactions.

declare @count int;
set @count = 0;

while 1 = 1
begin
    DELETE top(50000) FROM [xyz].[dbo].[mytable] where {condition for deletion};

    if @@rowcount < 50000 BREAK;

    set @count = @count + 1;

    if @count = 30
    begin
        checkpoint;
        DBCC SHRINKFILE (xyz_Log,1,TRUNCATEONLY);
        set @count = 0;
    end;
end
GO

  • P Jones (2/11/2014)


    DBCC SHRINKFILE (xyz_Log,1,TRUNCATEONLY);

    nooooooo.... *sob* *weep*

Firstly, TRUNCATEONLY is not a valid option for shrinking a log file. It's completely ignored when shrinking a log; it's an option only when shrinking a data file (see BoL).

Since the TRUNCATEONLY is ignored, that command shrinks the log to 1MB. Once the log is at 1MB, the next thing it's going to have to do is grow. Then you shrink it, then it grows, repeat, repeat, repeat.

    Just leave the log file alone. Checkpoint is sufficient in simple recovery to mark the log as reusable. If, after all the deletes have finished, the log is larger than it was when the deletes started (larger than it needs to be for regular database operations), then and only then do a shrink and shrink it to the size it needs to be for regular database operations.
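As a sketch of that "once, at the end" approach (the 4096MB target below is an arbitrary example, not a recommendation; pick whatever size regular operations actually need):

```sql
-- After all the deletes have completed, check how full the log really is...
DBCC SQLPERF (LOGSPACE);

-- ...and only if it is genuinely oversized, shrink it once to a sensible
-- working size. The target is in MB; 4096 here is an arbitrary example.
DBCC SHRINKFILE (xyz_Log, 4096);
```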

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
