Backing up a replicated database

  • My site has a SQL Server 2000 publication that is replicated to SQL Server 7.0 sites using both merge and transactional subscriptions. My problem is that I have been trying to use a technique for backing up the publishing database's log that is recommended in the Microsoft Press book "Deploying Microsoft SQL Server 7.0: Notes from the Field". The technique is to stop the log reader, back up the transaction log at the publisher, back up the distribution database, and then start the log reader again. This is supposed to ensure that if you ever need to restore after losing the database, the pointers in the distribution database will still align correctly with the transactions in the log.
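
    In script form, what I was doing amounts to roughly the following (a sketch only; the Log Reader Agent job name and the backup paths are placeholders, since the real job name is generated by the server):

        -- Run at the distributor; the job name is hypothetical.
        EXEC msdb.dbo.sp_stop_job @job_name = N'PUBLISHER-MyPubDB-1' ;  -- Log Reader Agent

        -- Back up the transaction log at the publisher.
        BACKUP LOG MyPubDB TO DISK = N'D:\Backups\MyPubDB_log.trn' ;

        -- Back up the distribution database.
        BACKUP DATABASE distribution TO DISK = N'D:\Backups\distribution.bak' ;

        -- Restart the Log Reader Agent.
        EXEC msdb.dbo.sp_start_job @job_name = N'PUBLISHER-MyPubDB-1' ;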

    This does not seem to work. If I stop and start the log reader, the next transactional push appears to try to push the same data twice, which causes primary key violations everywhere.

    Has anyone else experienced this problem?

  • I've been lucky enough not to have to deal with it. Stopping the log reader makes sense, sort of: while it's stopped it can't post transactions into distribution. Otherwise, stopping and starting the log reader should have no impact that I know of; I do it all the time.

    The reason I avoid it is that if I have to do a restore, I just plan to reinitialize with a snapshot (see the sketch below). That's good enough on a fast LAN with the largest database at 15 GB and most far smaller.

    It also lets me avoid the complications that arise from having multiple databases replicating: restoring distribution to fix one would break the others.
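
    If you go that route, reinitializing is only a couple of calls. A rough sketch, with made-up publication and job names (and note that merge subscriptions use sp_reinitmergesubscription instead):

        -- Run at the publisher, in the published database; names are hypothetical.
        EXEC sp_reinitsubscription
            @publication = N'MyPublication',
            @article     = N'all',
            @subscriber  = N'all' ;

        -- Then start the Snapshot Agent job so the fresh snapshot is delivered.
        EXEC msdb.dbo.sp_start_job @job_name = N'PUBLISHER-MyPubDB-MyPublication-1' ;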

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • Thanks for your super-prompt response, Andy.

    I was definitely surprised to find my replication agents experiencing the sort of corruption I described, and I am quite sure it was the script I was running to perform the backups that caused the problem, because once I removed the stop and start log reader steps everything came right. Perhaps it has something to do with the mix of SQL Server 2000 and 7.0; I don't know.

    I do have one other piece of information that might help explain why I was trying to do this. My databases are widely dispersed, and just one table takes in excess of 14 hours to replicate in the snapshot! I really have to find a way to ensure that I can restore all copies if I have a failure at the publisher, because I can't afford to send a new snapshot, and I can't log ship (at least not until I have a complete SQL Server 2000 network, which could be years away). That only leaves shipping the bcp files manually via ftp, which is a whole lot faster but far more tedious and error-prone (sketched below).
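
    For what it's worth, the manual route is just native-mode bcp at each end; something like the following (purely illustrative: the table, server, and path names are made up, and xp_cmdshell is only one way to drive it):

        -- At the publisher: export one article in native format.
        EXEC master.dbo.xp_cmdshell
            'bcp MyPubDB.dbo.BigTable out D:\xfer\BigTable.bcp -n -S PUBLISHER -T' ;

        -- ... compress, ftp to the subscriber, decompress, then load it there:
        EXEC master.dbo.xp_cmdshell
            'bcp MySubDB.dbo.BigTable in D:\xfer\BigTable.bcp -n -S SUBSCRIBER -T' ;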

    I appreciate your comments about multiple replication topologies, it sure adds to the complexity.

    RossL

  • A couple of things you might consider. Are you compressing the files before FTP'ing them? Or take a look at SQLLiteSpeed for compressed backups. And there's no reason to wait for log shipping: nothing in SQL 2000 is required to do it, and you can write the code yourself (see the sketch below). The downside is that the database normally sits in standby mode, so you can't use it.
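
    The homegrown version is just a backup/copy/restore loop, roughly like this (the paths are made up, and it assumes the standby was first seeded from a full backup restored WITH STANDBY):

        -- On the primary: back up the log to a share the standby can reach.
        BACKUP LOG MyPubDB TO DISK = N'\\standby\ship\MyPubDB_2230.trn' ;

        -- On the standby: apply it, leaving the db read-only between restores.
        RESTORE LOG MyPubDB
            FROM DISK = N'\\standby\ship\MyPubDB_2230.trn'
            WITH STANDBY = N'D:\Backups\MyPubDB_undo.dat' ;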

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
