Peer-to-peer replication PK violation issue

  • I have configured peer-to-peer replication for two nodes.

    I initialised the subscription from a backup.

    Both nodes were in sync after the setup completed, but when I insert records I get the error below. The table's PK column has the Not For Replication property set to 'Yes' in the table design.

    How can I handle this error? Please suggest.

    Violation of PRIMARY KEY constraint 'PK_table1'. Cannot insert duplicate key in object 'dbo.table1'. The duplicate key value is (11).

  • dastagiri16 (6/20/2016)


    I have configured peer-to-peer replication for two nodes.

    I initialised the subscription from a backup.

    Both nodes were in sync after the setup completed, but when I insert records I get the error below. The table's PK column has the Not For Replication property set to 'Yes' in the table design.

    How can I handle this error? Please suggest.

    Violation of PRIMARY KEY constraint 'PK_table1'. Cannot insert duplicate key in object 'dbo.table1'. The duplicate key value is (11).

    If you are doing merge replication, shouldn't one of the tables have its identity seeded at a billion so there's no overlap? Server A starts at 1 and Server B starts at 1,000,000,000, so there are no accidental duplicate identity values, right? Or make the identity increment negative, so all negative numbers belong to Server B?
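
    As a rough illustration of that idea (the seed values and the extra column below are assumptions, not the poster's actual schema):

    -- Run on Server B only: reseed so new identity values start at 1,000,000,000,
    -- leaving Server A's range (starting at 1) untouched.
    DBCC CHECKIDENT ('dbo.table1', RESEED, 1000000000);

    -- Or the negative-increment variant, applied when creating the table on Server B:
    CREATE TABLE dbo.table1
    (
        id  INT IDENTITY(-1, -1) NOT FOR REPLICATION NOT NULL PRIMARY KEY,
        val VARCHAR(50) NOT NULL
    );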

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • That can happen if there are inserts into the same table on both nodes at around the same time; there's latency in the replication.

    This is why the design guidance for peer-to-peer replication recommends making sure that the same tables aren't modified on multiple nodes.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I believe peer-to-peer replication supports two-node modification...?

    When I insert a row in node 1, the auto-increment value is 48, and the same ID value 48 is replicated to node 2.

    Then I try to insert a record in node 2, and it gives a PK violation error because an entry with ID 48 is already there in node 2. If I insert another record on node 2, it gets ID 49.

    Any other options to overcome this?

  • dastagiri16 (6/21/2016)


    I believe peer-to-peer replication supports two-node modification...?

    Peer-to-peer replication supports multi-node modifications, but the design guidelines recommend localising data changes (restrict changes to each table to a specific node) so that you don't get insert and update conflicts like the one you're getting.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • You'll need to ensure that inserts on both nodes can't produce the same key. One way to do that for a table that can be inserted into from both nodes is to assign ranges: 1 to 1 million for node 1, 2 million to 3 million for node 2, and so on.

    You'll also need alerts in place so you know when node 1 is running out of its assigned range and can be given a new one that won't conflict with node 2, e.g. something like the check sketched below.
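
    One way such a check might look (the range boundary, threshold and table name are assumptions for the sketch):

    -- Warn when node 1 has consumed 90% of its assigned 1 - 1,000,000 identity range.
    DECLARE @RangeEnd BIGINT = 1000000;
    IF IDENT_CURRENT('dbo.table1') > @RangeEnd * 0.9
        RAISERROR ('dbo.table1 on node 1 is nearing the end of its identity range.', 16, 1);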

    Another solution is to modify the PK on these tables to add a 'location' column, so the insert on node 1 becomes record 1, location Node1, and on node 2 it becomes record 1, location Node2 (see the sketch below). This can be a pain if you ever have to rebuild a node with a copy of a database from another node: you'd need to maintain scripts to set the default for the location column, etc.
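
    A minimal sketch of that composite-key design (the column names and default values are illustrative only):

    -- On node 1: the location column defaults to this node's name, so the same
    -- identity value generated on another node still produces a distinct key.
    CREATE TABLE dbo.table1
    (
        id       INT IDENTITY(1, 1) NOT FOR REPLICATION NOT NULL,
        location VARCHAR(10) NOT NULL CONSTRAINT DF_table1_location DEFAULT ('Node1'),
        val      VARCHAR(50) NOT NULL,
        CONSTRAINT PK_table1 PRIMARY KEY (id, location)
    );
    -- On node 2 the same table is created with DEFAULT ('Node2'); that per-node
    -- default is the script maintenance mentioned above when rebuilding a node.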

  • Thanks. Which type of replication is better for implementing two-way sync, given that many tables have a PK identity (auto-increment) column?

  • dastagiri16 (6/21/2016)


    Thanks. Which type of replication is better for implementing two-way sync, given that many tables have a PK identity (auto-increment) column?

    Can't help you here; we abandoned the whole idea because of its complexity at many levels and went with Availability Groups instead. We now have the ability to fail over to the second datacenter if needed. It was concluded that having that ability was more important than keeping two active write nodes going.

  • When I was Architect at Totaljobs Group we implemented P2P replication.

    One of our design criteria was to reduce operational risk. We already had our applications written to use separate connections for read and write operations, so we decided to nominate one of our P2P nodes as a write/read instance, and to nominate all other nodes as read-only. We used DNS aliases to identify the master write server and the read-only servers, and set up connection strings to use these aliases.

    We did consider allowing updates on multiple nodes, but decided the risks around having key ranges assigned to each node, particularly the risks relating to running out of keys, were just a disaster waiting to happen. The other alternative of changing all PKs to be multi-column, with one column holding an Id value of the node, was rejected due to the large number of changes required to the database and applications.

    Having P2P replication set up was a massive help when we moved from co-lo data centres to AWS in 2012. It allowed us to have the same data in 4 separate places, and gave a relatively easy way to change the master write server to AWS when the co-lo was decommissioned. It also allowed us to continue with our old SQL2008R2 instances at the co-lo while we used SQL2012 at AWS. We ran with P2P from 2012 to 2015, when things were changed to use AGs with SQL2014.

    Failover of the master write server with P2P does need some coordination. Changing the servers pointed to by the DNS aliases is fairly easy, but you must ensure the new locations are propagated to all nodes. The main problem is that the high-water-mark for identity values does not get updated on the read-only servers, so we wrote a process to reset all the HWM values at time of failover. For operational reasons we also ran some regression testing after failover but before opening access to end users.
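
    A rough sketch of what such a reset can look like for a single table (the table and column names are assumptions, not the actual process that was used):

    -- On the node being promoted to master write server, bring the identity
    -- high-water-mark up to the highest key value already replicated in.
    DECLARE @maxId INT;
    SELECT @maxId = MAX(id) FROM dbo.table1;
    DBCC CHECKIDENT ('dbo.table1', RESEED, @maxId);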

    We felt that P2P gave us what we needed for resiliency and scale-out. It would have been nice to allow multiple-node update, but the work required to get the level of business risk we were happy with was just too great. As with all SQL replication, P2P can be a bit brittle when there are problems, which is part of why things were changed to use AGs. AGs have their own issues, but overall are less trouble than P2P.

    I suggest you look closely at what your business wants to achieve. Multi-node update may be the ideal situation, but what do they really need to do? What is the driving force behind thinking about P2P - is it DR, operational resilience, scale-out, or just a whim? When you understand what the business wants to achieve, you can design a suitable system. When you do this design, I also suggest you look to achieve it using AGs rather than P2P. The skill and risk levels needed to deal with AGs are definitely lower than with P2P. Also, just about all future improvements from Microsoft will be focused on AGs and not replication, so AGs will get better while P2P stays as it is now.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • dastagiri16 (6/21/2016)


    Thanks. Which type of replication is better for implementing two-way sync, given that many tables have a PK identity (auto-increment) column?

    To be honest, multiple writable nodes and two-way sync is complex and trouble-prone enough that I recommend clients avoid it and instead write to a single node and read from multiple if necessary.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
