Generate script to restore log backups

  • Hi,

    I have more than 2000 log backup files in the Z:\Backups\t-logs folder to restore after restoring the full database backup on a DR server.

    Can you please tell me how to generate a restore script for the logs?

    I haven't tried this, but a quick Google search will get you the answer.

    http://www.mssqltips.com/sqlservertip/1584/auto-generate-sql-server-restore-script-from-backup-files-in-a-directory/

    I tried that, but it's not working.

    I'm using Ola Hallengren's backup script, which puts full, diff, and log backups in different directories.

    You will need to modify the attached script, filling in the database names and the logical and physical file paths in the ############################ areas. Then place it in the database's folder, at the same level as the FULL / DIFF / LOG folders, and run the .vbs file; it will generate a script that restores the FULL, DIFF, and LOG backups.

    Do you have access to the server on which the backups were made? If so, you can generate a script by querying the backupset tables in msdb (a rough sketch of such a query is included below). Alternatively (and be very careful here if it's a production server), start a restore of the database on the same server, but use the Generate Script button without actually running the restore. Modify the script, and run it on the new server.

    John
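
    A minimal sketch of such a query, run against msdb on the source server. The database name and the date filter are placeholders to adjust; output the results as text to capture the generated statements.

    -- Build RESTORE LOG statements from the backup history in msdb.
    -- 'YourDB' and the date filter are placeholders; restrict to log backups
    -- taken after the full backup you have already restored.
    SELECT 'RESTORE LOG [' + bs.database_name + '] FROM DISK = '''
           + bmf.physical_device_name + ''' WITH NORECOVERY;'
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
    WHERE bs.database_name = 'YourDB'
      AND bs.type = 'L'                              -- log backups only
      AND bs.backup_finish_date > '2014-07-18'       -- after the restored full backup
    ORDER BY bs.backup_finish_date, bs.first_lsn;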

    I have this old bit of JScript which may be of use:

    // Collect the transaction log backup files for the database from the current folder.
    var fso = WScript.CreateObject("Scripting.FileSystemObject");
    var fldr = fso.GetFolder(".");
    var fenum = new Enumerator(fldr.Files);
    var File_Array = new Array();
    var i = 0;

    while (!fenum.atEnd())
    {
        // The file name is upper-cased, so the patterns must be upper-case too.
        // Adjust "YOURDB" to match your database's backup file prefix.
        var name = fenum.item().Name.toUpperCase();
        if (name.search(/^YOURDB/) != -1 && name.search(/TRN$/) != -1)
        {
            File_Array[i++] = fenum.item().Path;
        }
        fenum.moveNext();
    }

    if (File_Array.length > 0)
    {
        var list = fso.CreateTextFile(".\\RestoreLog.sql", true);
        try
        {
            File_Array.sort();
            for (i = 0; i < File_Array.length; i++)
            {
                // Adjust the database name to match your environment.
                list.WriteLine("RESTORE LOG YourDB FROM DISK = '" + File_Array[i] + "' WITH NORECOVERY;");
                list.WriteBlankLines(1);
            }
        }
        catch (e)
        {
            throw (e);
        }
        finally
        {
            list.Close();
        }
        WScript.Echo("RestoreLog.sql has been generated.");
    }

  • Mani-584606 (7/18/2014)


    I tried that, but it's not working.

    I'm using Ola Hallengren's backup script, which puts full, diff, and log backups in different directories.

    So, maybe, copy the DIFs and the TRNs to the same directory as the BAK file? Or, maybe, modify Greg's script (the one you said isn't working) to simply read from all the different directories? That would seem to be an easy thing to do.

    Also, it would be a good thing if you modified Ola's script to put all the backup file types in the same directory in the future so that you wouldn't have to go through this mess.

    --Jeff Moden



    As a bit of a sidebar, if the MSDB database is still intact, you can use the GUI to do file-level restores (it's pretty smart and will even handle multiple files and multiple directories), or use it to generate the restore script. It also doesn't care about any naming conventions, and you're guaranteed to get the right files because it checks the LSNs to make sure that the correct files are being used in the correct order (a sketch of that LSN check, done by hand, follows below). Generating the script is particularly useful when trying to do the restore to a different box.

    --Jeff Moden


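    A minimal sketch of that manual LSN check against msdb ('YourDB' is a placeholder): list the log chain in LSN order and confirm that each file's first_lsn matches the previous file's last_lsn before restoring.

    -- List the log backup chain in LSN order so the sequence can be verified.
    SELECT bs.backup_finish_date,
           bmf.physical_device_name,
           bs.first_lsn,
           bs.last_lsn
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
    WHERE bs.database_name = 'YourDB'
      AND bs.type = 'L'
    ORDER BY bs.first_lsn;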

  • Jeff Moden (7/21/2014)


    As a bit of a sidebar, if the MSDB database is still intact, you can use the GUI to do file level restores (it's pretty smart and will even handle multiple files and multiple directories) or use it to generate the restore script. It's also not going to care about any naming conventions and you're guaranteed to get the right files because it checks the LSNs to make sure that the correct files are being used and in the correct order. Generating the script is particularly useful when trying to do the restore to a different box.

    I agree with that, except in the rare cases where somebody has altered the data in msdb. That is the method I use to generate my restore scripts.

    Jason...AKA CirqueDeSQLeil

  • I am a bit surprised this hasn't been asked yet and it seems REALLY important.

    Why do you have 2000 log backups to restore?

    If you truly have that many log backups to restore, then it seems like it is time to correct your DR scenario.

    That many log backups heighten your risk of error in the event of a disaster.

    Jason...AKA CirqueDeSQLeil

  • SQLRNNR (7/21/2014)


    I am a bit surprised this hasn't been asked yet and it seems REALLY important.

    Why do you have 2000 log backups to restore?

    If you truly have that many log backups to restore, then it seems like it is time to correct your DR scenario.

    That many log backups heighten your risk of error in the event of a disaster.

    +1

  • Jack Corbett (7/21/2014)


    SQLRNNR (7/21/2014)


    I am a bit surprised this hasn't been asked yet and it seems REALLY important.

    Why do you have 2000 log backups to restore?

    If you truly have that many log backups to restore, then it seems like it is time to correct your DR scenario.

    That many log backups heighten your risk of error in the event of a disaster.

    +1

    I agree but I also have to say that "It Depends". For example, 2000 log file backups taken at 5 minute intervals covers 10,000 MINUTES. Divide that by 60 and you end up with 166.7 hours, which is just shy of 1 week. On larger databases, it's not uncommon to do a once per week full backup with a week long log file chain. That would allow someone to keep two weeks of backups in an online fashion using only two copies of the large database.

    Heh... of course, if the log backups are taken once an hour, then there's a real live problem with this scenario.

    --Jeff Moden



  • Jeff Moden (7/21/2014)


    Jack Corbett (7/21/2014)


    SQLRNNR (7/21/2014)


    I am a bit surprised this hasn't been asked yet and it seems REALLY important.

    Why do you have 2000 log backups to restore?

    If you truly have that many log backups to restore, then it seems like it is time to correct your DR scenario.

    That many log backups heighten your risk of error in the event of a disaster.

    +1

    I agree but I also have to say that "It Depends". For example, 2000 log file backups taken at 5 minute intervals covers 10,000 MINUTES. Divide that by 60 and you end up with 166.7 hours, which is just shy of 1 week. On larger databases, it's not uncommon to do a once per week full backup with a week long log file chain. That would allow someone to keep two weeks of backups in an online fashion using only two copies of the large database.

    Heh... of course, if the log backups are taken once an hour, then there's a real live problem with this scenario.

    I've worked on a lot of those larger databases. Even with a once-a-week full backup, why not do a daily diff?

    Granted, you still keep the week's worth of t-log backups around, but this allows you multiple recovery paths and reduces the amount of risk.

    I would much rather have multiple avenues of recovery in a disaster. And in the event that everything works perfectly during recovery, then I just need 1 full, 1 diff, and any t-logs after that diff: 2001 files to locate and restore, or a maximum of 290 files to locate and restore (1 full, 1 diff, and 288 t-logs at 5-minute intervals over the course of 24 hours). I like the fewer-files route (a rough restore sequence is sketched after this post).

    Jason...AKA CirqueDeSQLeil
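
    To make that recovery path concrete, a minimal sketch of the restore sequence. The database name and the full/diff file names are placeholders, not taken from the thread; only the t-log folder matches the original question.

    -- Restore the most recent full, then the most recent diff, then the
    -- log backups taken after that diff, and finally bring the database online.
    RESTORE DATABASE YourDB
        FROM DISK = 'Z:\Backups\full\YourDB_FULL.bak'
        WITH NORECOVERY, REPLACE;

    RESTORE DATABASE YourDB
        FROM DISK = 'Z:\Backups\diff\YourDB_DIFF.bak'
        WITH NORECOVERY;

    -- One RESTORE LOG per log backup after the diff, in LSN order
    RESTORE LOG YourDB
        FROM DISK = 'Z:\Backups\t-logs\YourDB_LOG_1.trn'
        WITH NORECOVERY;

    RESTORE DATABASE YourDB WITH RECOVERY;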

  • SQLRNNR (7/21/2014)


    Jeff Moden (7/21/2014)


    Jack Corbett (7/21/2014)


    SQLRNNR (7/21/2014)


    I am a bit surprised this hasn't been asked yet and it seems REALLY important.

    Why do you have 2000 log backups to restore?

    If you truly have that many log backups to restore, then it seems like it is time to correct your DR scenario.

    That many log backups heighten your risk of error in the event of a disaster.

    +1

    I agree but I also have to say that "It Depends". For example, 2000 log file backups taken at 5 minute intervals covers 10,000 MINUTES. Divide that by 60 and you end up with 166.7 hours, which is just shy of 1 week. On larger databases, it's not uncommon to do a once per week full backup with a week long log file chain. That would allow someone to keep two weeks of backups in an online fashion using only two copies of the large database.

    Heh... of course, if the log backups are taken once an hour, then there's a real live problem with this scenario.

    I've worked on a lot of those larger databases. Even with a once-a-week full backup, why not do a daily diff?

    Granted, you still keep the week's worth of t-log backups around, but this allows you multiple recovery paths and reduces the amount of risk.

    I would much rather have multiple avenues of recovery in a disaster. And in the event that everything works perfectly during recovery, then I just need 1 full, 1 diff, and any t-logs after that diff: 2001 files to locate and restore, or a maximum of 290 files to locate and restore (1 full, 1 diff, and 288 t-logs at 5-minute intervals over the course of 24 hours). I like the fewer-files route.

    I agree with Jason on this as well. If I'm only doing weekly fulls, at the very least I'm doing a differential every couple of days so I don't have to do 2000 restores; even when it's "automated", that is ripe for disaster. I'd probably even do diffs every 12 hours if I'm doing 5-minute t-log backups.

  • SQLRNNR (7/21/2014)


    Jeff Moden (7/21/2014)


    Jack Corbett (7/21/2014)


    SQLRNNR (7/21/2014)


    I am a bit surprised this hasn't been asked yet and it seems REALLY important.

    Why do you have 2000 log backups to restore?

    If you truly have that many log backups to restore, then it seems like it is time to correct your DR scenario.

    That many log backups heighten your risk of error in the event of a disaster.

    +1

    I agree but I also have to say that "It Depends". For example, 2000 log file backups taken at 5 minute intervals covers 10,000 MINUTES. Divide that by 60 and you end up with 166.7 hours, which is just shy of 1 week. On larger databases, it's not uncommon to do a once per week full backup with a week long log file chain. That would allow someone to keep two weeks of backups in an online fashion using only two copies of the large database.

    Heh... of course, if the log backups are taken once an hour, then there's a real live problem with this scenario.

    I've worked on a lot of those larger databases. Even with a once-a-week full backup, why not do a daily diff?

    Granted, you still keep the week's worth of t-log backups around, but this allows you multiple recovery paths and reduces the amount of risk.

    I would much rather have multiple avenues of recovery in a disaster. And in the event that everything works perfectly during recovery, then I just need 1 full, 1 diff, and any t-logs after that diff: 2001 files to locate and restore, or a maximum of 290 files to locate and restore (1 full, 1 diff, and 288 t-logs at 5-minute intervals over the course of 24 hours). I like the fewer-files route.

    Yep... I absolutely agree but it still "Depends". Although hard disk space is "cheap", it's not always available. Doing the DIFs is essentially a duplication of everything in the log files since the last BAK file. Some folks just don't have the space for that.

    --Jeff Moden


