How to Setup Log Shipping for Multiple Databases

  • Comments posted to this topic are about the item How to Setup Log Shipping for Multiple Databases

  • Hi Brian,

    This is quite interesting to read :-). I need to implement it in my test env before I can comment on anything, but it's clear a lot of effort has been put in by you to achieve this. Thanks for sharing the idea and the implementation method.

    EDIT:

    Needless to say this has gone into my briefcase 😉

  • Good article. I have one comment/question. It looks like this solution can only handle databases with one data file. How would you handle a database with multiple data files?

  • Jack,

    I haven't worked that out yet; fortunately, I haven't had to deal with that issue so far. I did find an article that looks like it might provide a solution, or at least point you in the right direction.

    Article

    http://www.sqlbackuprestore.com/backuprestorehistorytables.htm
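    In case it helps, here is a rough sketch of one possible approach (not something I've implemented; the logical file names and paths are placeholders). The multiple data files only need special handling during the initial full restore on the secondary, where each file can be relocated with its own MOVE clause; the subsequent log restores don't need any per-file handling:

    ```sql
    -- List the logical file names contained in the backup first:
    RESTORE FILELISTONLY
    FROM DISK = N'\\SecondaryShare\Backups\MyDb_full.bak';

    -- Then restore with one MOVE per data file, plus one for the log,
    -- leaving the database in NORECOVERY so log backups can still be applied:
    RESTORE DATABASE MyDb
    FROM DISK = N'\\SecondaryShare\Backups\MyDb_full.bak'
    WITH MOVE N'MyDb_Data1' TO N'E:\Data\MyDb_Data1.mdf',
         MOVE N'MyDb_Data2' TO N'F:\Data\MyDb_Data2.ndf',
         MOVE N'MyDb_Log'   TO N'L:\Logs\MyDb_Log.ldf',
         NORECOVERY;
    ```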

    Thanks

  • Thanks jshailendra.

    I have been running with this in production for about 6 months now without issue.

  • How's your OS memory usage on the two servers? Spawning 130 cmdshell processes on both ends seems excessive.

  • Hi Brian,

    Just curious: if all 130 DBs reside on the same server, why did you not consider using clustering?

    Thanks, Yuri

  • bret.lowery,

    Memory and CPU usage is much better than it was when I had all 130 DBs running from separate jobs. That's what I like about this solution: only one execution is running at a time.
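    Just to illustrate the one-at-a-time point (this is a simplified sketch, not the script from the article, and the share path is a placeholder): a single job can walk the databases in a cursor and back up each log sequentially, so only one backup is ever running:

    ```sql
    DECLARE @db sysname, @sql nvarchar(max);

    -- Only databases in FULL recovery can take log backups; skip system DBs.
    DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT name
        FROM sys.databases
        WHERE recovery_model_desc = 'FULL'
          AND name NOT IN (N'master', N'model', N'msdb', N'tempdb');

    OPEN db_cur;
    FETCH NEXT FROM db_cur INTO @db;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Build a BACKUP LOG statement with a timestamped file name.
        SET @sql = N'BACKUP LOG ' + QUOTENAME(@db)
                 + N' TO DISK = N''\\SecondaryShare\LogShip\' + @db + N'_'
                 + REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 126), N'-', N''), N':', N'')
                 + N'.trn'';';

        EXEC sys.sp_executesql @sql;   -- each backup finishes before the next one starts

        FETCH NEXT FROM db_cur INTO @db;
    END

    CLOSE db_cur;
    DEALLOCATE db_cur;
    ```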

    Thanks

  • Yuri55,

    I am using clustering, 1 active, 1 passive. The purpose of log shipping for me is to have a secondary location where the data exists in case the cluster fails. I also use it as a location where our account managers can get read-only access to the data.

    Thanks

  • Makes sense.

    I have one more question, though:

    Your backup job runs every 15 minutes (on the primary server) and deals with 130 DBs, i.e. the average backup time would have to be around 7 seconds per database (900 s ÷ 130 ≈ 6.9 s), if my calculation is correct.

    That means your solution works only for small databases.

    Or am I missing something? Thanks

  • Yuri55,

    Yes, some of my databases are on the smaller side, but the solution is still sound for databases of any size.

    We are backing up transaction logs, and it is the volume of modifications made to the data that determines the size of the transaction logs, and thus the time it takes to generate each transaction log backup.

    As you have alluded to, you need to measure your average transaction log backup time across all of your DBs to work out what interval to use for the backup, copy, and restore jobs.
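    If it's useful, a quick way to estimate that from the backup history in msdb (assuming the history hasn't been purged) is something along these lines:

    ```sql
    -- Average transaction log backup duration per database over the last 7 days.
    SELECT  bs.database_name,
            COUNT(*) AS log_backups,
            AVG(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)) AS avg_seconds
    FROM    msdb.dbo.backupset AS bs
    WHERE   bs.type = 'L'                                   -- 'L' = transaction log backup
      AND   bs.backup_start_date >= DATEADD(DAY, -7, GETDATE())
    GROUP BY bs.database_name
    ORDER BY avg_seconds DESC;
    ```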

    Excellent question!

    Thanks

  • Yes, you are right: one simply needs to adjust the backup interval depending on DB size. Thanks

  • Yuri55,

    But it's not the database size that determines the interval; it's the transaction log size.

    Thanks

  • Very interesting article, Brian. I will add this one to my briefcase, too.

  • Jon,

    Thanks!
