• I know I'm a bit late with this question, but hopefully someone still reads this article (which I found fantastically useful). 🙂

    Let's say I have a table replicated from publisher to subscriber. I need to update a MASSIVE number of rows, nullifying certain columns that are no longer used (unfortunately I can't drop the columns themselves, as that would force downtime when re-snapshotting). In our QA environment I've created a proc that updates the rows on the publisher side, with its execution replicated as in the article (the non-serializable option).
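    Roughly, the publisher-side proc looks like this (a minimal sketch; the proc, table, and column names are made up):

        CREATE PROCEDURE dbo.usp_NullifyRetiredColumns
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Nullify the retired columns in place; the WHERE clause means
            -- reruns skip rows that have already been cleared.
            UPDATE dbo.MyReplicatedTable
            SET RetiredCol1 = NULL,
                RetiredCol2 = NULL
            WHERE RetiredCol1 IS NOT NULL
               OR RetiredCol2 IS NOT NULL;
        END

    The proc is published as its own article with execution replication (sp_addarticle with @type = N'proc exec' rather than N'serializable proc exec'), so only the single EXEC call is forwarded to the subscriber instead of millions of row-by-row updates.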

    Now, here's what I have done: on the subscriber side, I have wrapped the body of the replicated proc in a huge comment block, so when the proc is executed on the publisher, the replicated call on the subscriber does nothing but still runs to completion. The idea is that the subscriber keeps those columns populated, to be nulled out later if need be.
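    So on the subscriber, the same proc now looks something like this (again a sketch with made-up names), which is why the replicated call succeeds but changes nothing:

        ALTER PROCEDURE dbo.usp_NullifyRetiredColumns
        AS
        BEGIN
            SET NOCOUNT ON;

            /*  Entire body commented out on the subscriber: the replicated
                EXEC arrives and completes, but the retired columns stay
                populated here in case we need them later.

            UPDATE dbo.MyReplicatedTable
            SET RetiredCol1 = NULL,
                RetiredCol2 = NULL
            WHERE RetiredCol1 IS NOT NULL
               OR RetiredCol2 IS NOT NULL;
            */
        END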

    My big question: is there any issue with the actual data differing between the two tables, even though the row counts are the same and all other data matches, especially going forward with standard updates/inserts/deletes?

    Thanks!

    Gaby

    "In theory, theory and practice are the same. In practice, they are not." - Albert Einstein