An examination of the bulk-logged recovery model

  • Please note that eager writes are only for minimally logged operations in bulk-logged or simple recovery, and all they do is write the minimally logged pages to the data file.

    Understood. Mulling it over, this seems to make sense to me as a quick and dirty mental model for bulk-logged operations:

    We always need "before" and "after" images in the user database to support recovery. For minimally logged operations in bulk-logged or simple recovery, the "after" image is only in the data file, hence the absolute need for eager writes. The "before" image is assumed to consist of empty pages (allocation updates aside), hence the restriction on what can and cannot be minimally logged. So rollforward is based on the knowledge that the data is in the data file, and rollback is only possible because we know we are rolling back to empty pages.
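
    A rough way to see this in practice (a sketch only; sys.fn_dblog is an undocumented function, and the database and table names here are placeholders): under bulk-logged recovery, a qualifying SELECT ... INTO should generate mostly allocation-related log records rather than one log record per row.

        -- Sketch: observe minimal logging under bulk-logged recovery.
        -- Names are placeholders; sys.fn_dblog is undocumented.
        ALTER DATABASE SandboxDB SET RECOVERY BULK_LOGGED;

        -- A log backup first reduces noise from earlier activity in the active log.
        BACKUP LOG SandboxDB TO DISK = N'D:\Backups\SandboxDB_noise.trn';

        SELECT * INTO dbo.CopyOfSource FROM dbo.SourceTable; -- new table => minimal logging

        -- Expect allocation/format records (e.g. LOP_FORMAT_PAGE), not one
        -- LOP_INSERT_ROWS record per row inserted.
        SELECT Operation, COUNT(*) AS LogRecords
        FROM sys.fn_dblog(NULL, NULL)
        GROUP BY Operation
        ORDER BY LogRecords DESC;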

  • Great article!

  • Thanks Gail, great article and explanations. Clear and concise. Appreciate you clearing up that question for me.

  • Hi Gail,

    Thanks for the great article.

    One question:

    One side effect of this requirement that both log records and modified data pages are written to disk before the transaction commits is that it may actually result in the minimally logged operation being slower than a regular transaction if the data file is not able to handle the large volume of writes.

    I fail to fully understand why minimally logged operations can be slower. With the eager write process, should it not ease the burden on the transaction log by writing the dirty minimally logged records only to the data file? By splitting the writes, as opposed to the fully logged model, should it not be faster, especially if the log and data are on separate drives? Is flushing to the data files slower than writing to the log if both are sequential? I can understand it if the writes are random, but are minimally logged operations not all sequential in nature when writing to the data files?

    Thanks for this.

  • Hi, very nice article.

    Since I am a beginner, I have a question:

    When we switch the database recovery model from full recovery to bulk-logged, or vice versa, do any changes happen to the existing log file of the database too (so as to align with the database recovery model)?

  • Great reference article.

    M&M

  • stevro (5/13/2012)


    I fail to fully understand why minimally logged operations can be slower. With the eager write process, should it not ease the burden on the transaction log by writing the dirty minimally logged records only to the data file? By splitting the writes, as opposed to the fully logged model, should it not be faster, especially if the log and data are on separate drives? Is flushing to the data files slower than writing to the log if both are sequential? I can understand it if the writes are random, but are minimally logged operations not all sequential in nature when writing to the data files?

    With a fully logged operation, only the log records have to be written to disk before the transaction is complete; the data pages can follow later.

    If you've got a database where the data file is optimised for reading (say RAID 5 and read caches) and a transaction log optimised for writing (say RAID 10 and write caches), then it is possible that writing X GB of data to the data file will be slower than writing X GB of data to the log file would have been, and the transaction commit will have to wait for the write of the data to the data file to finish, where normally it only has to wait for the write to the log.

    I didn't say it will be slower, but it can be; maybe not usually, but it is a possibility.
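
    To put hypothetical numbers on it (purely illustrative; these figures are not from the article): suppose the bulk load writes 10 GB of pages. If the read-optimised data array sustains roughly 150 MB/s of writes while the write-optimised log array sustains roughly 500 MB/s, the minimally logged commit waits about 10,240 MB / 150 MB/s ≈ 68 seconds for the eager writes to the data file, whereas the fully logged commit would wait only about 10,240 MB / 500 MB/s ≈ 20 seconds for the log flush, with the data pages hardened later by checkpoint or the lazy writer.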

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • deepak_19888 (5/13/2012)


    When we switch the database recovery model from full recovery to bulk-logged, or vice versa, do any changes happen to the existing log file of the database too (so as to align with the database recovery model)?

    The change is logged, just like any other change to the database, but there's nothing that you would need to do other than setting the recovery model option.
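
    A minimal sketch of the switch itself (the database name is a placeholder):

        -- Switch an existing database to bulk-logged recovery and back.
        ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;
        -- ... minimally logged work goes here ...
        ALTER DATABASE SalesDB SET RECOVERY FULL;

        -- Confirm the current recovery model:
        SELECT name, recovery_model_desc
        FROM sys.databases
        WHERE name = N'SalesDB';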

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • If you've got a database where the data file is optimised for reading (say RAID 5 and read caches) and a transaction log optimised for writing (say RAID 10 and write caches), then it is possible that writing X GB of data to the data file will be slower than writing X GB of data to the log file would have been

    Thanks Gail. This makes sense.

  • Truly a briefcase item for me.

    A complex topic explained in such a simple way (at least for me).

    Thanks Gail

    Should we live our lives in bulk-logged or full recovery mode? :hehe:

    ----------
    Ashish

  • crazy4sql (5/24/2012)


    Should we live our lives in bulk-logged or full recovery mode? :hehe:

    From the article:

    Guidelines for Bulk-logged Recovery

    The main guideline for bulk-logged recovery is: ‘as short a time as possible’.

    Due to the implications for point-in-time restores and restoring to point of failure, if a database is switched to bulk-logged recovery in order to minimally log some operations, it should stay in bulk-logged recovery for the shortest time possible.
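
    A sketch of that pattern (the database name and file paths are placeholders): bracket the bulk-logged window with log backups so the period without point-in-time restore is as narrow as possible.

        -- A log backup immediately before the switch narrows the exposed window.
        BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_before.trn';

        ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

        -- The bulk operation that should be minimally logged, e.g. a
        -- table-locked insert into a heap.
        INSERT INTO dbo.StagingTable WITH (TABLOCK)
        SELECT * FROM dbo.SourceTable;

        ALTER DATABASE SalesDB SET RECOVERY FULL;

        -- This backup captures the minimally logged extents; point-in-time
        -- restore is available again from here onwards.
        BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_after.trn';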

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you!

    I was redoing everything you have done in this example, but when I restore the log with CONTINUE_AFTER_ERROR I get all my data back correctly. I have done this multiple times and I have not got the message 'Attempt to fetch logical page in database failed'.

    I cannot understand why this is happening, because when I took the tail-log backup I had already deleted the mdf file, so there isn't anywhere SQL Server could have got the data from.

    Can you please help me?

    Thank you

  • Without all your code, I can't say. Maybe the operations weren't minimally logged (existing table, full recovery or any of the other reasons for not minimally logging).
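
    A quick way to check whether an insert really was minimally logged (a sketch; sys.fn_dblog is undocumented and the object names are placeholders): a fully logged insert produces one LOP_INSERT_ROWS record per row, while a minimally logged one produces only a handful of allocation records.

        BEGIN TRAN;

        INSERT INTO dbo.TestTable WITH (TABLOCK)  -- TABLOCK is required for minimal logging
        SELECT * FROM dbo.SourceTable;

        -- One LOP_INSERT_ROWS per row => fully logged;
        -- mostly allocation/format records => minimally logged.
        SELECT Operation, COUNT(*) AS LogRecords
        FROM sys.fn_dblog(NULL, NULL)
        GROUP BY Operation
        ORDER BY LogRecords DESC;

        COMMIT;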

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
    Thank you for your reply. As soon as I get to my office I will get all the code I have used and I will post it here.

    GilaMonster (1/14/2013)


    Without all your code, I can't say. Maybe the operations weren't minimally logged (existing table, full recovery or any of the other reasons for not minimally logging).

  • Just go over it a couple of times and make sure you are getting minimal logging. I ran that particular test a number of times, as did a friend, and while we got a variety of different errors, we did not get the table back intact after any of the tests.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
