Multiple publications or just one?

  • I have a replication strategy question: is it better to have multiple publications from one database to a second, or to combine everything into a single publication? I can see arguments for both. On the one hand, multiple publications give granularity, and a failure will not kill all replication. However, there is probably increased overhead, they are more difficult to maintain, and they can be a real problem for maintaining referential integrity during replication. A single publication would be easier to maintain, but it may be more intensive for the server and not as fault tolerant. We have been using multiple publications, but I'm now leaning toward a single one, mostly to preserve referential integrity.

    What do you all think? What's the better strategy?

  • I am currently working on a largish (288 tables) transactional replication setup, currently configured as one publication. We are reviewing all tables first to see whether they need to be replicated at all, then dividing them into functional areas and creating a publication for each. We have also considered having one publication for tables that are static or change very little, and multiple publications for more dynamic data.

    You need to consider how much data would be pushed if you needed to resync, how long that would take (and how fast the links to the subscribers are), and also how many subscribers there are.

    Once you start getting large databases with lots of subscribers, you are usually better off having multiple publications - IMO.
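    The functional-area split described above can be sketched with the standard replication stored procedures. This is a minimal, hypothetical example - the database, publication, and table names are illustrative, and a real setup also needs a configured distributor, log reader agent, and subscriptions:

    ```sql
    -- Hypothetical sketch: two functional-area publications instead of one.
    -- Assumes SalesDB is already enabled for transactional replication.
    USE SalesDB;
    GO

    -- Publication for static / slowly changing lookup tables
    EXEC sp_addpublication
        @publication = N'Pub_StaticLookups',
        @status      = N'active';
    EXEC sp_addarticle
        @publication   = N'Pub_StaticLookups',
        @article       = N'Countries',
        @source_object = N'Countries';

    -- Publication for high-volume transactional tables
    EXEC sp_addpublication
        @publication = N'Pub_OrderActivity',
        @status      = N'active';
    EXEC sp_addarticle
        @publication   = N'Pub_OrderActivity',
        @article       = N'Orders',
        @source_object = N'Orders';
    ```

    Keeping each publication aligned with one functional area also makes it easier to resnapshot just that area after a failure.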

  • There is no difference between having one publication or multiple publications - there is no extra overhead, and no inherent advantage to replicating static tables in one publication and volatile tables in a second.

    However there are some advantages to splitting your publication into two or more publications.

    The first is that with the independent agent option you can have multiple data streams going to your subscription database.

    The second is that if you are deploying very large tables you might want to break these into separate publications so that:

    1) you get parallel streams of data sent to the subscriber when you are building your snapshot, and

    2) if your snapshot deployment fails and you have, let's say, six large tables in a single publication, you will have to resnapshot and resend all six large tables. If you break them up and one of the large tables fails, you will only have to send that table again.
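    The independent agent option mentioned above is set when the publication is created. A minimal, hypothetical sketch (the publication name is illustrative):

    ```sql
    -- Hypothetical sketch: @independent_agent = 'true' gives this publication
    -- its own distribution agent, so several publications targeting the same
    -- subscription database can stream data in parallel instead of sharing
    -- one agent.
    EXEC sp_addpublication
        @publication       = N'Pub_LargeTable1',
        @status            = N'active',
        @independent_agent = N'true';
    ```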


    --

    Hilary Cotter

    Looking for a SQL Server replication book?

    http://www.nwsu.com/0974973602.html

    Looking for a FAQ on Indexing Services/SQL FTS

    http://www.indexserverfaq.com

  • There is another thing to consider when you use multiple (independent) agents: you can choose not to run them all continuously and instead schedule each one as appropriate for its publication, which will minimize the memory consumption on your server.

    hth


    * Noel
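  • The scheduling idea above can be sketched when the push subscription is created. This is a hypothetical example - the publication, subscriber, and database names are illustrative:

    ```sql
    -- Hypothetical sketch: a push subscription whose distribution agent runs
    -- on a daily schedule instead of continuously. @frequency_type = 4 means
    -- daily; 64 would mean start automatically and run continuously.
    EXEC sp_addpushsubscription_agent
        @publication              = N'Pub_StaticLookups',
        @subscriber               = N'SUBSCRIBER1',
        @subscriber_db            = N'SalesDB_Replica',
        @frequency_type           = 4,      -- run on a daily schedule
        @frequency_interval       = 1,      -- every 1 day
        @active_start_time_of_day = 020000; -- start at 02:00
    ```

    A scheduled agent only consumes resources while its run is in progress, which is why this helps on memory-constrained servers.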
