SQLServerCentral is supported by Red Gate Software Ltd.
Merge replication - some deletes not replicated
Posted Thursday, December 6, 2012 2:59 AM
Valued Member

I'm currently testing a merge replication setup and I've discovered a problem with it. I ran a delete statement on two tables on my publisher and after running SQL Data Compare I found that some rows on both tables did not delete. There are no errors reported in the replication monitor. My question is, how can I troubleshoot further what went wrong here?
Post #1393392
Posted Thursday, December 6, 2012 4:56 PM
SSC Veteran


Test env, right? So it should be easier to isolate. Delete on the pub, then fire the merge agent while Profiler is running on the subscriber.
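
If it helps, a rough way to see what the merge agent actually recorded and delivered (just a sketch; the table name and agent name below are made up, adjust for your setup):

```sql
-- On the publisher: deleted rows are tracked as tombstones in the
-- merge metadata, so the delete should show up here before the sync.
SELECT TOP (50) *
FROM dbo.MSmerge_tombstone
ORDER BY generation DESC;

-- On the distributor (distribution database): per-session detail
-- for the merge agent, including rows delivered per article.
EXEC sp_replmonitorhelpmergesession
     @agent_name = N'YourMergeAgentName';  -- hypothetical agent name
```

Compare the tombstone rows against what Profiler shows arriving at the subscriber.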

Hth,
Chris
mssqlconsulting.com



Post #1393793
Posted Friday, December 7, 2012 7:25 AM
Valued Member

Chris,
Thanks for the reply. I can do what you suggest and everything will work fine. My problem is that I deleted over 20,000 records from two tables on the publisher; six records were not deleted in one table and 18 in the other. All subsequent transactions have been fine. I'm trying to find out why these 24 records were not deleted as they should have been.

It was a pretty simple test - I loaded a bunch of data on my publisher, watched it replicate in the monitor, and ran the Red Gate compare tool to check that it worked. Everything was fine at this point. I then deleted the test data from the publisher, ran the compare tool again, and it showed these 24 leftover records at the subscriber.

I should also point out that both the publisher and the subscriber were connected to the network during this test. The subscriber, however, is at a different site, over 100 miles away. I'm beginning to wonder whether something happened on the network (dropped packets, maybe?).
Either way, I can't really conclude that merge replication is a reliable approach for my solution after this test.
Post #1394058
Posted Friday, December 7, 2012 12:29 PM
SSC Veteran

Foreign keys can cause issues, as DML can be delivered out of order, i.e. a parent row deleted before its children. Make sure you have NOT FOR REPLICATION on your FK constraints; we remove them altogether.
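
For example (a sketch with made-up table and constraint names), recreating an FK so that agent-delivered DML bypasses the check while user DML is still validated:

```sql
-- Drop the existing constraint, then recreate it NOT FOR REPLICATION:
-- replication agent activity is exempt from the check, so out-of-order
-- delivery of parent/child deletes no longer fails.
ALTER TABLE dbo.OrderLines
    DROP CONSTRAINT FK_OrderLines_Orders;

ALTER TABLE dbo.OrderLines
    ADD CONSTRAINT FK_OrderLines_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
    NOT FOR REPLICATION;
```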

Also, 20k deletes is more like batch processing; merge over a WAN will have issues with such large DML requests. That kind of load is better suited to transactional. So run your tests again and split up the DML. You will need to communicate this to your developers too: merge works well in OLTP, badly in bulk-load environments.
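
Splitting up the DML can be as simple as this (hypothetical table and filter, batch size to taste) - many small generations for the merge agent instead of one huge one:

```sql
-- Delete in small batches; loop until no rows remain.
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (1000) FROM dbo.TestData      -- hypothetical table
    WHERE LoadBatch = 'bulk-test';           -- hypothetical filter
    SET @rows = @@ROWCOUNT;
END;
```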

With that being said, merge is stable and reliable, it executes well, and you get a nice GUI and plenty of options for conflict resolution. The only negative is an additional rowguid column for each article.

Peer-to-peer is good if you need active/active; however, it is expensive, requiring Enterprise Edition. It is basically bi-directional transactional replication: fast and reliable, and it doesn't add additional schema. And I believe a conflict viewer feature was added in 2012.



Post #1394201
Posted Monday, December 10, 2012 2:15 AM
Valued Member

Chris,
Some great info there, thank you. The 20k delete was just to remove the test data; it's not something that would occur regularly in the live environment. The normal workload is a steady stream of inserts and a small number of updates. Deletes will be rare.

Peer-to-peer sounds like it would be the best fit for what I need, but unfortunately I'm stuck with 2005 Standard for this project. The basic requirement is to have a secondary copy of the database in a ready state, so that the application stays up by failing over to the DR site if the WAN link goes down.

If I create the FK constraints with NOT FOR REPLICATION, should I create them manually at the subscriber? Or does that defeat the purpose?
Post #1394493
Posted Monday, December 10, 2012 5:37 AM
SSC Veteran

Create them at both pub and sub. I personally do not like replication as a DR tool, especially merge, as it is slow over a WAN link and not scalable. Also, you are only protected down to tables and data.

I prefer log shipping: you get a perfect copy of the entire db. Latency is a bit higher; however, when you fail over, just bump up your identities to allow for data still to be recovered.
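
The identity bump on failover can be a one-liner per table (sketch; table name and seed value made up - pick a seed safely past anything the primary could have issued but not yet shipped):

```sql
-- After restoring the log-shipped copy WITH RECOVERY at the DR site,
-- move the identity seed forward so new inserts at DR cannot collide
-- with rows recovered later from the primary's tail.
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1000000);
```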

If you are sticking with replication, transactional is not a bad idea; there you don't have to add an extra column for every article you want at the DR site. Latency and performance are great over a WAN. And when you fail over, just set up transactional publications to replicate back to your primary site. But again, it's only tables and data, so use Red Gate SQL Compare to keep everything else up to date, whether you use merge or transactional.

Good luck; at least you are setting up something remote, which most don't even think about...



Post #1394562
Posted Tuesday, December 11, 2012 9:54 AM
Valued Member

I actually started with transactional, but then I got worried about identity ranges, so I switched to merge to utilize the automatic identity range management. Log shipping is more familiar territory for me, but the big requirement on this project is that the application can switch to the DR database without any manual intervention.
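
For anyone following along, the automatic identity range management I mean is set per article when the publication is built, something like this (all names and range sizes here are illustrative, not from my actual setup):

```sql
-- 'auto' management: the publisher hands each node its own pool of
-- identity values and tops it up when usage crosses the threshold,
-- so publisher and subscriber inserts never collide.
EXEC sp_addmergearticle
    @publication = N'MyMergePub',                 -- hypothetical names
    @article = N'Orders',
    @source_object = N'Orders',
    @identityrangemanagementoption = N'auto',
    @pub_identity_range = 10000,
    @identity_range = 1000,
    @threshold = 80;
```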

I've got a bit more time for experimentation so I might try some other options. Thanks very much for your comments Chris.
Post #1395216