Transaction Log Full in Simple Recovery mode

  • Wow... I can't believe I got all this attention from the SQL gurus! This is a great learning experience for me!

    Here are the references I have been using. Please let me know if you think some other articles will be helpful.

    Fix Suspect DB

    http://www.sqlskills.com/BLOGS/PAUL/post/TechEd-Demo-Creating-detaching-re-attaching-and-fixing-a-suspect-database.aspx

    Recreate MSDB

    http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/06/619304.aspx

    I appreciate all your help! SQL is fun!

  • clare.xia (2/9/2011)


    Wow... I can't believe I got all this attention from the SQL gurus! This is a great learning experience for me!

    Here are the references I have been using. Please let me know if you think some other articles will be helpful.

    Fix Suspect DB

    http://www.sqlskills.com/BLOGS/PAUL/post/TechEd-Demo-Creating-detaching-re-attaching-and-fixing-a-suspect-database.aspx

    Recreate MSDB

    http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/06/619304.aspx

    I appreciate all your help! SQL is fun!

    Wow, talk about great spirit! 😎 Top of the class!!!

  • Run CHKDSK (without the repair option) on the disk that contains the corrupt file...

    If you receive these errors:

    /* WARNING! F parameter not specified.
    Running CHKDSK in read-only mode.

    CHKDSK is verifying files (stage 1 of 3)...
    6176 file records processed.
    File verification completed.
    133 large file records processed.
    0 bad file records processed.
    0 EA records processed.
    0 reparse records processed.

    CHKDSK is verifying indexes (stage 2 of 3)...
    13853 index entries processed.
    Index verification completed.

    Errors found. CHKDSK cannot continue in read-only mode. */

    Then you have an NTFS problem (these errors can occur on local or SAN storage). You can try to fix it by shutting down SQL Server and running CHKDSK with the automatic-fix option, or you can allocate new disks and restore your databases.
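
    If you are not sure which volume to check first, something like this lists where the files of the affected database actually live (just a sketch; [YourSuspectDB] is a placeholder name):

    -- Sketch: list the physical files of the affected database so you know
    -- which drive letter / mount point to run CHKDSK against.
    SELECT name, physical_name, type_desc, state_desc
    FROM sys.master_files
    WHERE database_id = DB_ID(N'YourSuspectDB');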

    Hope this helps,

    David

  • Just a thought for everyone, and more of a question than a suggestion, but in this situation would it not be worth shrinking the log file with DBCC SHRINKFILE?

    I have no idea if this will work when the disk is corrupt, but if it does rearrange the smaller file so it doesn't occupy the corrupt sector, the database may come back and you may be able to back it up, fix the disk problem as others have outlined, and restore it.

    As the database is only in simple recovery, the contents of the log file are of little practical value anyway, aren't they?
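
    For reference, the kind of statement I had in mind was roughly this (a sketch only; the database name and logical log file name are placeholders):

    -- Sketch: shrink the log file towards a 100 MB target.
    -- Look up the real logical name in sys.database_files first.
    USE MyDB;
    DBCC SHRINKFILE (N'MyDB_log', 100);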

  • 99zardoz (10/5/2011)


    Just a thought for everyone, and more of a question than a suggestion, but in this situation would it not be worth shrinking the log file with DBCC SHRINKFILE?

    Shrinkfile would have failed; the log header was corrupt. No amount of shrinking would help here. The problem was not that the log was too large, it was that the log was corrupt.

    As the database is only in simple recovery, the contents of the log file are of little practical value anyway, aren't they?

    Do you like your database consistent and durable or corrupt with missing data? If the latter, sure, they're of little practical use. The point of the log is to ensure durability of transactions and consistency of data no matter what. The backing up of the log for point-in-time recovery is a secondary usage.

    p.s. this issue is 8 months old and very likely resolved one way or another by now.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Shrinkfile would have failed; the log header was corrupt. No amount of shrinking would help here. The problem was not that the log was too large, it was that the log was corrupt.

    OK, my mistake. As I said, a question rather than a solution as such. 🙂

    Do you like your database consistent and durable or corrupt with missing data? If the latter, sure, they're of little practical use. The point of the log is to ensure durability of transactions and consistency of data no matter what. The backing up of the log for point-in-time recovery is a secondary usage.

    Given simple recovery, what can one actually do with the log file?

    p.s. this issue is 8 months old and very likely resolved one way or another by now.

    OK, but people are still discussing it now; I got a link to it in my daily SQLServerCentral e-mail and the topic interested me. I'm here to learn as much as anything else, so I thought I'd ask a question.

  • 99zardoz (10/5/2011)


    Do you like your database consistent and durable or corrupt with missing data? If the latter, sure, they're of little practical use. The point of the log is to ensure durability of transactions and consistency of data no matter what. The backing up of the log for point-in-time recovery is a secondary usage.

    Given simple recovery, what can one actually do with the log file?

    Replication, transaction rollbacks, crash recovery. The first is a user function; the other two are system functions.

    The ability to undo changes (rollbacks) and ensure consistent structure and data even in the case of a server failure (crash recovery) is the primary purpose of the log. Everything else (replication, log backups, mirroring, CDC) is a secondary use.
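
    A trivial illustration (hypothetical table name), which holds even in simple recovery: the rollback below only works because the delete was written to the log before it touched the data pages.

    -- Hypothetical example: the log is what lets SQL Server undo this delete.
    BEGIN TRANSACTION;
    DELETE FROM dbo.Orders WHERE OrderDate < '2000-01-01';
    ROLLBACK TRANSACTION;   -- fully undone, courtesy of the transaction log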

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I have what at least superficially seems to be a similar problem.

    Summary: I have a database in the simple recovery model that is stuck due to an overfull transaction log. I can't add more log space or another log file, or do much of anything, because it is stuck with a wait on 'CHECKPOINT'.

    (Actually I have half a dozen in the same state, but I'm only concentrating on one, trying to find a solution.)

    (My only guess is that it filled the log when there was no extra disk space, and some flag got set saying it is hopeless, and now I can't get that flag unset.)

    Details:

    I have a database in the simple recovery model with a full transaction log (over 100% full). I have plenty of free disk space and the log is set to autogrow by 10%; the log file is c. 800 MB and I have 20 GB+ free, so growing it by 80 MB should be quite possible.

    Yet the log file won't grow, and I can't add another log file, and I can't adjust its size.

    Everything fails with an error that the transaction log is full, and the log_reuse_wait_desc is 'CHECKPOINT'.

    DBCC OPENTRAN shows no open transactions. These are all SharePoint 2012 databases, and I've stopped & disabled the services involved (inetsvr etc) so there are no open user connections.

    DBCC LOGINFO shows that I have over 200 VLFs, all in status 2 -- so it is indeed entirely full.

    I can't execute a full backup (trying gives the same error: transaction log full due to 'CHECKPOINT').
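
    Roughly, the checks I've been running look like this ([StuckDB] and the logical file name are placeholders for the real names):

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'StuckDB';      -- returns CHECKPOINT

    USE StuckDB;
    DBCC OPENTRAN;                -- no active open transactions
    DBCC LOGINFO;                 -- 200+ VLFs, all with status = 2 (in use)

    -- Trying to grow the log (or add a file, or back up) fails with the
    -- same "transaction log is full due to 'CHECKPOINT'" error:
    ALTER DATABASE StuckDB
    MODIFY FILE (NAME = N'StuckDB_log', SIZE = 1000MB);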

    Any suggestions?

    (I posted a longer version of this elsewhere, but haven't received any help suggestions yet, I think.)

  • Note: this is SQL 2012 SP1; I have not applied CU2. This fix in CU2 sounds similar, but it says it applies only to the FULL recovery model.

    http://support.microsoft.com/kb/2509302
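
    A quick sanity check before assuming that KB applies (a sketch; [StuckDB] is a placeholder):

    -- Confirm the build/patch level and the recovery model of the database.
    SELECT SERVERPROPERTY('ProductVersion') AS build,
           SERVERPROPERTY('ProductLevel')   AS service_pack;

    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'StuckDB';      -- SIMPLE here, so the KB may not apply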

  • One more note: DBCC CHECKDB doesn't report any specific errors, but it cannot create a checkpoint record (because the log is out of space...), it cannot create a snapshot (same reason), and at the bottom it actually reports that the transaction log is full due to 'DATABASE_SNAPSHOT_CREATION' *and* due to 'CHECKPOINT'.
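
    If the internal snapshot is the only blocker, one thing I may try (a sketch; I haven't confirmed it gets past a completely full log) is running CHECKDB with TABLOCK so it takes locks instead of creating a snapshot:

    -- WITH TABLOCK makes CHECKDB use locks rather than an internal database
    -- snapshot; it may still fail if the log cannot accept any writes at all.
    DBCC CHECKDB (N'StuckDB') WITH TABLOCK, NO_INFOMSGS;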
