Point-in-time restore

  • Koen Verbeeck (2/16/2011)


    Shawn Melton (2/16/2011)


    Duncan Pryde (2/16/2011)


    Although I haven't done that exam myself as I upgraded directly from 2005 to 2008 MCITP, most of the Admin exams I've done have contained a fair amount of backup and recovery questions, so it does pay to know your stuff.

    Duncan

    This would be an NDA violation, sharing this information, would it not? ...just FYI.

    Hmmm, you could also derive that information from the "Skills Measured" section on the Microsoft Certification website, as it contains relative percentages.

    That's what I thought too. If I'd mentioned specifics about individual questions that would not be good, but it was just a general observation.

    Duncan

  • Koen Verbeeck (2/16/2011)


    Shawn Melton (2/16/2011)


    I call shenanigans. How can the answer not be no? If a database is set to the Bulk-Logged recovery model, that means some transactions are minimally logged. That would mean the complete transaction would not exist in the database log file, so how can you restore to a point in time when SQL Server cannot rebuild the whole transaction?

    The correct answer stated that you could restore to a point in time if there were no bulk-logged changes.

    BOL, however, contradicts itself. On one page it says it isn't possible; on another it says it can be done if no bulk-logged operations have occurred.

    Even the references that you gave contradict themselves.

    Essentially the Bulk Logged recovery model is the same as the Full recovery model except that some operations are minimally logged. As long as none of those operations happen, you effectively have a database in the Full recovery model.
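
    As an aside, if you're not sure whether a particular log backup actually contains any minimally logged changes, you can check before attempting a point-in-time restore. This is just a sketch, assuming the log backup file name used in the demo further down; the HasBulkLoggedData column in the output tells you whether STOPAT will be refused.

    -- Inspect a log backup before attempting a point-in-time restore.
    -- HasBulkLoggedData = 1 means the backup contains minimally logged changes
    -- and cannot be used with STOPAT.
    RESTORE HEADERONLY FROM DISK = 'DisasterRecoveryDemo_Log.bak';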

    I did in fact perform some fairly extensive tests before posting this question, using SELECT INTO as my minimally logged operation. As long as it wasn't in the relevant backup I could restore to a point in time. I'm pasting my code below for anyone who wants to try for themselves.

    This first bit of code creates the database and a table, and inserts some records. It then performs a bulk operation, takes a log backup, adds some more records, and finally drops the table.

    -- Create our database
    CREATE DATABASE DisasterRecoveryDemo;
    GO

    -- Make sure we're in bulk-logged recovery mode
    ALTER DATABASE DisasterRecoveryDemo
    SET RECOVERY BULK_LOGGED;
    GO

    USE DisasterRecoveryDemo;
    GO

    -- Create a table for our test data
    CREATE TABLE ImportantData
    (
        ID int,
        EntryDateTime datetime
    );
    GO

    -- Insert some initial values
    INSERT INTO ImportantData (ID, EntryDateTime)
    VALUES (1, GETDATE()), (2, GETDATE()), (3, GETDATE());
    GO

    -- Perform a full database backup
    BACKUP DATABASE DisasterRecoveryDemo TO DISK = 'DisasterRecoveryDemo_Full.bak';
    GO

    -- Perform a bulk-logged operation
    SELECT ID, EntryDateTime
    INTO ImportantData2
    FROM ImportantData;
    GO

    -- Perform a log backup. The bulk-logged SELECT INTO is now captured in this backup
    BACKUP LOG DisasterRecoveryDemo TO DISK = 'DisasterRecoveryDemo_Log.bak';
    GO

    INSERT INTO ImportantData2 (ID, EntryDateTime)
    VALUES (4, GETDATE()), (5, GETDATE()), (6, GETDATE());
    GO

    -- Make a mental note of the contents of the table
    SELECT * FROM ImportantData2;
    GO

    -- Check the current time
    SELECT GETDATE();
    GO

    -- If you wait for a bit you can do a simulated disaster, e.g.
    WAITFOR DELAY '00:00:10';
    GO

    DROP TABLE ImportantData2;
    GO

    Making a note of the time returned by GETDATE(), you can run the following code to get the database back to just before you dropped the table:

    USE master;
    GO

    -- Set the database to single user so that no other connections are allowed
    ALTER DATABASE DisasterRecoveryDemo
    SET SINGLE_USER;
    GO

    -- Back up the remainder of the log using NORECOVERY - called a tail-log backup
    BACKUP LOG DisasterRecoveryDemo TO DISK = 'DisasterRecoveryDemo_TailLog.bak'
    WITH NORECOVERY;
    GO

    -- Begin the restore process by restoring the full backup
    RESTORE DATABASE DisasterRecoveryDemo FROM DISK = 'DisasterRecoveryDemo_Full.bak'
    WITH NORECOVERY;
    GO

    -- Next restore the first log backup
    RESTORE LOG DisasterRecoveryDemo FROM DISK = 'DisasterRecoveryDemo_Log.bak'
    WITH NORECOVERY;
    GO

    -- Finally restore the tail-log backup, stopping just before the disaster
    RESTORE LOG DisasterRecoveryDemo FROM DISK = 'DisasterRecoveryDemo_TailLog.bak'
    WITH RECOVERY, STOPAT = '<time noted earlier>';
    GO

    -- Get the database back into multi-user mode
    ALTER DATABASE DisasterRecoveryDemo
    SET MULTI_USER;
    GO

    USE DisasterRecoveryDemo;
    GO

    -- Check the data is restored
    SELECT * FROM ImportantData2;

    And to clean up

    USE master;
    GO

    ALTER DATABASE DisasterRecoveryDemo
    SET SINGLE_USER;
    GO

    DROP DATABASE DisasterRecoveryDemo;
    GO

    By moving the position of the SELECT INTO statement, you can see that point-in-time recovery is not possible if the bulk operation occurs in the log backup you are restoring to a point in time; you get this error:

    Msg 4341, Level 16, State 1, Line 2

    This log backup contains bulk-logged changes. It cannot be used to stop at an arbitrary point in time.
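
    If you want to reproduce that error without rewriting the whole script, the only change needed (a sketch against the demo above) is to swap the order of the SELECT INTO and the first log backup, so that the minimally logged change ends up in the tail-log backup used with STOPAT:

    -- Take the first log backup BEFORE the bulk operation...
    BACKUP LOG DisasterRecoveryDemo TO DISK = 'DisasterRecoveryDemo_Log.bak';
    GO
    -- ...then perform the minimally logged SELECT INTO afterwards,
    -- so it is captured by the tail-log backup instead.
    SELECT ID, EntryDateTime
    INTO ImportantData2
    FROM ImportantData;
    GO
    -- Continue exactly as before (insert, wait, drop, tail-log backup).
    -- The final RESTORE LOG ... WITH STOPAT now fails with Msg 4341.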

    Hope that helps,

    Duncan

  • Great, thanks for the demo code!

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • Koen Verbeeck (2/17/2011)


    Great, thanks for the demo code!

    No problem. I originally wrote it to show how if a database was using Full recovery, you could always recover to a point in time by taking a tail log backup as long as you had taken a full backup at some point. It followed a minor incident where someone deleted some data and then in the ensuing panic someone else restored the most recent full backup over the top, resulting in unrecoverable data loss! (Fortunately it wasn't terribly important data).

    I just adapted the script in order to test the scenario before writing the question.

    Duncan

  • Duncan Pryde (2/17/2011)


    Essentially the Bulk Logged recovery model is the same as the Full recovery model except that some operations are minimally logged. As long as none of those operations happen, you effectively have a database in the Full recovery model.

    Thanks for the code.

    For all practical purposes, then, you should not rely on the bulk-logged recovery model if you want point-in-time restore for any transaction. Even though you can do it in some situations, you still can't do it at any time. If you want to guarantee, or greatly depend on, being able to do a point-in-time recovery, your best option is to choose the Full recovery model. If you don't mind losing some data, then bulk-logged could be used.

    Shawn Melton
    Twitter: @wsmelton
    Blog: wsmelton.github.com
    Github: wsmelton

  • Shawn Melton (2/17/2011)


    For all practical purposes, then, you should not rely on the bulk-logged recovery model if you want point-in-time restore for any transaction. Even though you can do it in some situations, you still can't do it at any time. If you want to guarantee, or greatly depend on, being able to do a point-in-time recovery, your best option is to choose the Full recovery model. If you don't mind losing some data, then bulk-logged could be used.

    That's correct, yes. As I said in the question explanation, the recommended approach is to switch to bulk-logged before any bulk operations and switch back to full afterwards. The second link (http://msdn.microsoft.com/en-us/library/ms190203.aspx) explains it quite well.
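
    Something along these lines (just a sketch of the idea, reusing the demo database and made-up backup file names) keeps the window where point-in-time recovery is unavailable as small as possible:

    -- Close off the fully logged work done so far
    BACKUP LOG DisasterRecoveryDemo TO DISK = 'DisasterRecoveryDemo_PreBulk.trn';
    GO
    -- Switch to bulk-logged only for the duration of the bulk operation
    ALTER DATABASE DisasterRecoveryDemo SET RECOVERY BULK_LOGGED;
    GO
    SELECT ID, EntryDateTime
    INTO ImportantData3
    FROM ImportantData;
    GO
    -- Switch straight back to full...
    ALTER DATABASE DisasterRecoveryDemo SET RECOVERY FULL;
    GO
    -- ...and take a log backup immediately, so point-in-time recovery
    -- is available again from this point onwards.
    BACKUP LOG DisasterRecoveryDemo TO DISK = 'DisasterRecoveryDemo_PostBulk.trn';
    GO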

    Duncan

  • Duncan Pryde (2/17/2011)


    That's correct, yes. As I said in the question explanation, the recommended approach is to switch to bulk-logged before any bulk operations and switch back to full afterwards. The second link (http://msdn.microsoft.com/en-us/library/ms190203.aspx) explains it quite well.

    Duncan

    Ah, I must not have paid enough attention to the explanation :hehe:

    Thanks for correcting my knowledge bank on this one.

    Shawn Melton
    Twitter: @wsmelton
    Blog: wsmelton.github.com
    Github: wsmelton

  • Shawn Melton (2/16/2011)


    This would be an NDA violation, sharing this information, would it not? ...just FYI. :smooooth:

    OMG!! Run!! Everybody scatter!! It's the exam po po... the Test POLICE are on to us... They are among us! They are among us!

    If only there was a place where we could go and exchange information and ideas without having to worry about these things...

  • SanDroid (2/17/2011)


    Shawn Melton (2/16/2011)


    This would be an NDA violation, sharing this information, would it not? ...just FYI. :smooooth:

    OMG!! Run!! Everybody scatter!! It's the exam po po... the Test POLICE are on to us... They are among us! They are among us!

    If only there was a place where we could go and exchange information and ideas without having to worry about these things...

    He's even wearing his po po glasses...

    Great question. I had to think about it for a minute and guessed at the right answer thinking that the full and trans log backups BEFORE the bulk activity would still be valid, so why COULDN'T I do a point in time recovery prior to that bulk activity? Just made sense logically. I was unsure about AFTER, though, and assumed that since my data in the bulk activity would NOT be logged (just the bulk activity itself), then I could NOT get point in time recovery after.

    Maybe my logic is wrong but it matched the answer 🙂 I hope it matched because I understand the model...

    Peter Trast
    Microsoft Certified ...(insert many literal strings here)
    Microsoft Design Architect with Alexander Open Systems

  • Hi Folks,

    I've been thinking about this question, and having looked through a fair bit of Paul Randal's material on this issue, I figure that the correct answer is actually missing. For me the correct answer is:

    Maybe. Maybe you can recover to a point in time, if there are no minimally logged entries in the log, or you are covered by a differential backup or whatever, but don't be an idiot and count on it.

    Smart practice is not to see what you can get away with, but to make sure you do everything that you can not get away without.

    Paul, responding to the idea that once you have two full backups you can delete the older one, said something like: how do you know the current backup isn't corrupt? If it is, then you need the backup you are about to delete!

    Regards

    David

  • David Todd-242471 (2/24/2011)


    Hi Folks,

    I've been thinking about this question, and having looked through a fair bit of Paul Randal's material on this issue, I figure that the correct answer is actually missing. For me the correct answer is:

    Maybe. Maybe you can recover to a point in time, if there are no minimally logged entries in the log, or you are covered by a differential backup or whatever, but don't be an idiot and count on it.

    Smart practice is not to see what you can get away with, but to make sure you do everything that you can not get away without.

    Paul, responding to the idea that once you have two full backups you can delete the older one, said something like: how do you know the current backup isn't corrupt? If it is, then you need the backup you are about to delete!

    Regards

    David

    The answer given was effectively "maybe, if there are no minimally logged entries in the log", but to go any further than that and add caveats about viable full and differential backups being available, no necessary backups being corrupted, lost or stolen, etc. would have over-complicated what was meant to be a simple question!

    The point was not really to say "this is what you can get away with" but rather to say that if someone came to you in desperation with a database using the bulk-logged recovery model and wanting point-in-time recovery, you would not say "that's impossible" without checking their backups first.
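
    For instance, if the backups were taken on that instance, a quick first check (just a sketch relying on msdb backup history being intact) is to see which log backups are flagged as containing bulk-logged changes, since those are the ones that can't be used with STOPAT:

    -- List log backups for the database and flag any containing bulk-logged changes
    SELECT  bs.database_name,
            bs.backup_start_date,
            bs.backup_finish_date,
            bs.has_bulk_logged_data,     -- 1 = cannot be used with STOPAT
            bmf.physical_device_name
    FROM    msdb.dbo.backupset AS bs
    JOIN    msdb.dbo.backupmediafamily AS bmf
            ON bmf.media_set_id = bs.media_set_id
    WHERE   bs.database_name = 'DisasterRecoveryDemo'
      AND   bs.type = 'L'                -- log backups only
    ORDER BY bs.backup_finish_date;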

    Cheers,

    Duncan

  • I didn't refer to BOL. I was confident about this since I was reading this for my 432 exam. Well, I didn't get this one. 😎 I'm going to test this. Thanks for the question.

    --------------------------------------------------------------------------------------
    Hai Ton
    My Db4Breakfast blog.

  • Patrick2525 (2/16/2011)


    Yes, it's very frustrating trying to study for the exams when the study guides are riddled with errors and contradictions. I wish those things were peer reviewed. However, these SSC QOTDs, along with the peer discussions, keep me on my toes 🙂

    I agree with you. I guess the best study guide is BOL.

    --------------------------------------------------------------------------------------
    Hai Ton
    My Db4Breakfast blog.

  • Shawn Melton (2/16/2011)


    Patrick2525 (2/16/2011)


    Yes, it's very frustrating trying to study for the exams when the study guides are riddled with errors and contradictions. I wish those things were peer reviewed. However, these SSC QOTDs, along with the peer discussions, keep me on my toes 🙂

    Most publishers put an errata page on their site tied to the book title. O'Reilly (which controls Microsoft Press book titles now) does this. If you check the site for your book, it should list the exact pages where information was found to be incorrect after publishing.

    Good point. There are errors and typos in almost every book, even BOL.

    --------------------------------------------------------------------------------------
    Hai Ton
    My Db4Breakfast blog.
