How to Setup Log Shipping for Multiple Databases

  • Could you go into more detail, explain, or give an example of

    "a common physical location"

  • Hi reggie burns-317942,

    Based on your error message I would guess that the path to the backup is empty. You should add a PRINT statement for each of the variables in the script to make sure they are actually picking up values.
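
    For example, something along these lines will show you what each variable contains (the variable names here are only placeholders; substitute the ones from your copy of the script):

    DECLARE @DBName sysname, @BackupFile nvarchar(4000)
    SET @DBName = N'MyDatabase'
    SET @BackupFile = N'\\Server\Folder\' + @DBName + N'.bak'
    -- If either value prints as <NULL>, the path is being built from an empty variable
    PRINT 'DBName     = ' + ISNULL(@DBName, '<NULL>')
    PRINT 'BackupFile = ' + ISNULL(@BackupFile, '<NULL>')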

    Also, you should note that the scripts I provided do not create the backup on the primary server for you. I expected that you already have a backup routine in place and just need to grab the latest backup from your backup set.

    By "physical location" I simply mean a drive or storage device that both instances have permissions to.

    It is not necessary for you to map a drive, as you can use the full UNC path as well, e.g. \\Server\drive$\Folder\DB.bak. You can also create a shared folder so that you don't need to deal with administrative shares, e.g. \\Server\Folder\DB.bak instead of \\Server\drive$\Folder\DB.bak.

  • reggie burns-317942,

    Here is a script I have that will generate the commands for creating backups of your databases that are participating in log shipping and are out of sync with the primary database.

    -- RUN THIS TO GENERATE BACKUPS FOR OUT OF SYNC DBs --
    SET NOCOUNT ON;

    SELECT '-- ' + secondary_database + '
    BACKUP DATABASE [' + secondary_database + '] TO DISK = N''\\<BackupServerPath,,MyServer>\' + secondary_database + '\' + secondary_database + '.bak'' WITH NOFORMAT, INIT, NAME = N''' + secondary_database + '-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD, STATS = 10
    GO
    '
    FROM msdb.dbo.log_shipping_monitor_secondary
    WHERE DATEDIFF(mi, last_restored_date, GETDATE()) > restore_threshold
    ORDER BY secondary_database

    Run this script with the results output set to Text on the secondary instance, then paste the output into a new query window and run it on the primary server. You will, of course, need to fix the backup path to match your environment.
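
    For illustration, the generated output for a hypothetical out-of-sync database named SalesDB (with the template parameter filled in as MyServer) would look roughly like this:

    -- SalesDB
    BACKUP DATABASE [SalesDB] TO DISK = N'\\MyServer\SalesDB\SalesDB.bak' WITH NOFORMAT, INIT, NAME = N'SalesDB-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
    GO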

  • Thanks for this script too, but once the DBs were backed up using the output of the script and then restored on the secondary instance, the log_shipping_monitor_secondary table still shows them as out of sync?

  • reggie burns-317942,

    Yes, that's correct; the databases will still show as out of sync.

    The last step to get your DBs in sync is to manually copy over all the transaction log backups to the same destination location you specified in the LSCopy_ALL_DBs job. You only need to copy the ones taken after that last full backup, but to be safe you can copy them all.
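
    If you want to script that copy rather than do it by hand, one option is robocopy run through xp_cmdshell, roughly like this (the source and destination paths are only placeholders for your environment):

    -- Bulk-copy existing transaction log backups to the folder the restore job reads from
    DECLARE @CMD varchar(4000)
    SET @CMD = 'robocopy "\\PrimaryServer\TLogBackups" "\\SecondaryServer\TLogCopies" *.trn'
    EXEC xp_cmdshell @CMD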

  • I implemented one more improvement, suppressing the many log messages about successful backups on the primary server. At the beginning of the LSBackup_ALL_DBs script write:

    DBCC TRACEON (3226, -1)

    At the end write

    DBCC TRACEOFF (3226, -1)

    Note that instead of the backup success messages (one for each database), you now get a total of two messages, reporting the changed trace flag.

  • Hi.

    We will be setting up transaction log shipping for about 20 databases, and having them all in one job sounds like a good idea. There were some parts that left me confused, so I have not been able to get this working.

    First, none of the LS jobs would run because the xp_cmdshell option was not enabled. That should be mentioned in the article, though the error was pretty easy to detect and resolve.

    I don't get the backup and restore scripts you provided. I filled in the location, but the variables are not defined, so the script does not run. If I am declaring variables, wouldn't I have to do that for every database? I don't see how this would save time. Also, is this supposed to run separately on the secondary, or be added to the script that is generated when using the log shipping setup GUI?

    Is there a way to use the "for each" command that I have heard about to go through all the databases and restore them in standby mode? I have more questions, but I don't know if you have moved on, so I won't make this any longer.

    In the end we are hoping to find a way to take 20 or so databases and set them up for log shipping without going through a wizard 20 times.

  • Hi Imacdonald,

    Yes, I agree; mentioning that you need to enable xp_cmdshell would have been helpful.
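
    For anyone else hitting that error, xp_cmdshell is enabled with sp_configure (run as sysadmin on each instance that will call it):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1;
    RECONFIGURE;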

    For all 3 scripts, backup, copy, and restore, you need to REPLACE "SQL_Primary_Server" and "SQL_Secondary_Server" with the name of your primary and secondary servers. For example, if my primary server is called "SQL_Prod1" I would replace "SQL_Primary_Server" with "SQL_Prod1" like this:

    DECLARE @CMD varchar(4000)
    SET @CMD = '"D:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqllogship.exe" -Backup ' + CAST(@ID AS varchar(50)) + ' -server SQL_Prod1'
    EXEC xp_cmdshell @CMD
    FETCH NEXT FROM cursor_log INTO @ID;

    Each of these scripts contains a cursor that loops through each database, so there is no need to declare variables for each DB or run the script multiple times. You should probably familiarize yourself with cursors; there are some good articles on them in BOL.

    You will need to create a job for each of the backup, restore, and copy scripts. The backup job that you create runs on the primary server; the restore and copy jobs run on the secondary server. The names I have chosen for the jobs are LSBackup_ALL_DBs, LSCopy_ALL_DBs, and LSRestore_ALL_DBs. In the article I list that the backup job should run every 15 minutes, the restore job every 10 minutes, and the copy job every 5 minutes. Both the job names and the intervals at which they run are at your discretion.
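
    If you prefer to create the jobs from T-SQL rather than the Agent GUI, a rough sketch for the backup job on the primary would look like this (the step command is just a placeholder for the backup script from the article):

    EXEC msdb.dbo.sp_add_job @job_name = N'LSBackup_ALL_DBs';
    EXEC msdb.dbo.sp_add_jobstep
         @job_name  = N'LSBackup_ALL_DBs',
         @step_name = N'Backup all log shipped databases',
         @subsystem = N'TSQL',
         @command   = N'/* paste the backup cursor script here */';
    EXEC msdb.dbo.sp_add_jobschedule
         @job_name             = N'LSBackup_ALL_DBs',
         @name                 = N'Every 15 minutes',
         @freq_type            = 4,  -- daily
         @freq_interval        = 1,
         @freq_subday_type     = 4,  -- unit = minutes
         @freq_subday_interval = 15;
    EXEC msdb.dbo.sp_add_jobserver @job_name = N'LSBackup_ALL_DBs';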

    In the article I talk about using the wizard to create the initial script that you will run against each database. In the article I mention that you will want to set up the jobs that the wizard creates as disabled. Once you have finished running the other scripts you can delete all the log shipping jobs that the wizard created.

    Of course, you should get familiar with the process in a Dev environment and only when you are comfortable with how this works should you run it in your production environment, with the help of a qualified DBA if possible.

    As for using sp_MSforeachdb: it's an undocumented SP and as such could be changed or dropped by Microsoft at any time. However, I have been told there may be some value in it. I have not tried it myself, but there is a basic article by Tim Chapman that explains how to use it and other undocumented SPs.
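
    The basic usage pattern is that the ? token gets replaced with each database name in turn, for example (just a trivial illustration, not something from the article):

    EXEC sp_MSforeachdb 'IF ''?'' NOT IN (''master'',''model'',''msdb'',''tempdb'') PRINT ''?''';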

    Let me know if I missed any of your questions or you have other questions about this log shipping article.

    Thanks

    Brian Davey

  • Thank you for the response.

    No worries about screwing anything up, because I am doing the testing on sample databases located on my local machine (lmacdonald), across two different instances. So I am putting in lmacdonald\instance_1 and lmacdonald\instance_2 for the primary and secondary server names.

    I'm caught up to the point where I have saved the script to create the transaction log shipping, but I have not run it yet. Do I first need to run a restore on the secondary of the database that I will be using for transaction log shipping?

    After that, do I then run the script that was created to start the transaction log shipping?

    My apologies, I didn't scroll up enough when I copied the restore script and I missed some code. I am actually done for the day, but I'll try it again tomorrow. Thanks for getting back to me.

  • Hi Imacdonald,

    First, you should create the 3 jobs (backup on primary, restore & copy on secondary).

    Then you should setup the first database for log shipping using the wizard and then script out what the log shipping wizard created. Be sure to set all the log shipping jobs in the wizard to disabled.

    Then you should restore backups of all your databases to your secondary server. They should be restored in "standby mode" (see the restore sketch after these steps).

    Then you should run the script from the log shipping wizard for each database. Be sure that you change the name of the database being called in the script to the current database you are setting up.

    Then you should delete all the jobs that the script from the log shipping wizard created on the primary and secondary servers.

    Finally, you will want to copy over all the transaction logs to the location you chose for the restore job to pick up transaction logs from.
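
    Regarding the standby restores above, a minimal sketch of the command for one database would be something like this (the database name, backup path, and undo file path are all placeholders):

    RESTORE DATABASE [MyDatabase]
    FROM DISK = N'\\BackupServer\MyDatabase\MyDatabase.bak'
    WITH STANDBY = N'D:\LogShipping\MyDatabase_undo.tuf',
         STATS = 10;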

    NOTES:

    You will need a physical folder location that both the primary and secondary servers can reach. The primary server will need to write transaction logs to this location and the secondary server will need to copy those transaction logs over to where the restore job can restore them from.

    Be sure that alerting is set up and working and that you are paying attention to it. Log shipping alert jobs should be automatically created as part of the wizard setup. Be sure that you are receiving those alerts. The reason I bring this up is that, in my experience, it is not uncommon for log shipping to get out of sync due to a bad or missing transaction log backup.

  • Great article, thanks. I am wondering if your solution would work for 2-3 thousand databases? My challenge is that I have programs creating new DBs daily and I need to get them to my DR location. Based on the totals I know mirroring is out, so I am looking to transaction log shipping to provide my DR solution.

    Ideally I would like to use your script solution and somehow batch it to pick up the new DBs as they are added throughout the day and configure the transaction log shipping for them.

    Thanks in advance!

  • Hi clappin,

    First, thanks for commenting on my article.

    I do not see anything inherent in the scripts or in how log shipping works that would prevent you from doing this on thousands of databases. However, you may run into issues simply because of the size of the task you are trying to accomplish. The amount of time needed to run 2 thousand transaction log backups in a single job could be considerable. It might be worthwhile creating multiple jobs to handle this work and using an algorithm based on the DB ID to determine which job each database is assigned to, e.g. set up 100 jobs and the new database would go to job name = 'LS_ALL_BKUP_' + CAST(DB_ID() % 100 AS nvarchar(200)).

    I expect that in order to fully automate this you will need a job that scans sys.databases for new databases not already participating in log shipping. This job would also need to take a full backup of the new database and save it to a location the secondary can reach. Then maybe have a job on the secondary that looks for backups in that location and restores them in standby mode. And then the jobs could run the scripts in my article to get log shipping set up.
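
    A rough sketch of the "find new databases" part, just to give the idea (not from the article), could be:

    -- User databases on the primary that are not yet configured as log shipping primaries
    SELECT d.name
    FROM sys.databases AS d
    WHERE d.database_id > 4            -- skip the system databases
      AND d.state_desc = 'ONLINE'
      AND d.name NOT IN (SELECT primary_database
                         FROM msdb.dbo.log_shipping_primary_databases);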

    I'm sure I missed a bunch of stuff. It's a tall order but seems doable.

    It sounds like you may also need a process for archiving or removing databases no longer in use. I'm guessing these are "one shot" kinds of databases and don't remain active for very long. Archiving DBs could really help extend the life of this instance.

    Thanks

  • Wow BDavey,

    I can't believe that your article is so old and still very useful. Thanks!

    Are there any SQL Server improvements (now that we have SQL2022) that can further improve this log shipping strategy?

    One of the things I was wondering about is all the SQL Agent jobs that need to be disabled. Sure, we can create a script for this, but wouldn't it be better to create a script that removes them all?
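
    Something like this is what I had in mind, assuming the wizard-created jobs keep their default LSBackup_/LSCopy_/LSRestore_/LSAlert_ prefixes (and excluding the _ALL_DBs jobs from the article):

    -- Generates one sp_delete_job call per wizard-created log shipping job; review the output before running it
    SELECT 'EXEC msdb.dbo.sp_delete_job @job_name = N''' + name + ''';'
    FROM msdb.dbo.sysjobs
    WHERE (name LIKE 'LSBackup[_]%' OR name LIKE 'LSCopy[_]%' OR name LIKE 'LSRestore[_]%' OR name LIKE 'LSAlert[_]%')
      AND name NOT LIKE '%ALL[_]DBs'
    ORDER BY name;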

    Leaves me wondering why there is no way to use sp_add_log_shipping_primary_database without creating the jobs...

    Thanks again!

    Wilco