Best practice when removing a large amount of data in a published database

  • We have a large table (100 million or more rows) that is replicated with transactional replication. It is also the largest object in the database in terms of storage (90%).

    Once in a while I need to update all of the rows or delete a large number of them. Waiting for all of those changes to be pushed is problematic at best, and I end up with huge t-logs and t-log backups while replication tries to push them through.

    So I generally blow away the publication, re-create it, and re-establish the subscription (roughly as sketched below). Yes, it is very brute force, but it gets the job done.

    Is there a more elegant solution I am missing? I suspect there is.

    Thanks

    Henry
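
    For reference, the drop/re-create cycle looks roughly like this. It is only a sketch: the publication (MyPub), table (BigTable), subscriber (SUB1), and subscriber database (MyDB) names are placeholders, and the push Distribution Agent job setup (sp_addpushsubscription_agent) is left out.

    -- At the publisher, in the publication database
    -- 1. Drop all subscriptions, then the publication itself
    EXEC sp_dropsubscription @publication = N'MyPub', @article = N'all', @subscriber = N'all';
    EXEC sp_droppublication  @publication = N'MyPub';

    -- 2. Run the mass UPDATE/DELETE here, while the table is not being replicated

    -- 3. Re-create the publication, its Snapshot Agent job, the article, and the subscription
    EXEC sp_addpublication          @publication = N'MyPub', @status = N'active';
    EXEC sp_addpublication_snapshot @publication = N'MyPub';
    EXEC sp_addarticle      @publication = N'MyPub', @article = N'BigTable',
                            @source_owner = N'dbo',  @source_object = N'BigTable';
    EXEC sp_addsubscription @publication = N'MyPub', @article = N'all',
                            @subscriber  = N'SUB1',  @destination_db = N'MyDB',
                            @subscription_type = N'Push';

    -- 4. Generate a new snapshot so the subscriber is re-initialized
    EXEC sp_startpublication_snapshot @publication = N'MyPub';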

  • You could disable the publications, perform those tasks, re-establish the snapshot, and then re-enable the publications.

    Similar, I guess, to the drop/rebuild, but a little less painful. See the sketch below.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    Twitter: @AnyWayDBA
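
    A rough sketch of what that could look like, assuming a single push publication named MyPub (the name is a placeholder) whose subscriptions are re-initialized from a fresh snapshot:

    -- At the publisher, in the publication database
    -- 1. Set the publication status to inactive (the "disable" step)
    EXEC sp_changepublication @publication = N'MyPub',
                              @property    = N'status',
                              @value       = N'inactive';

    -- 2. Run the mass UPDATE/DELETE here

    -- 3. Re-enable the publication and mark all subscriptions for re-initialization
    EXEC sp_changepublication  @publication = N'MyPub',
                               @property    = N'status',
                               @value       = N'active';
    EXEC sp_reinitsubscription @publication = N'MyPub',
                               @article     = N'all',
                               @subscriber  = N'all';

    -- 4. Regenerate the snapshot so the subscribers pick up the post-change data
    EXEC sp_startpublication_snapshot @publication = N'MyPub';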

  • Have you tried dropping the article for the table that you are doing the deletes in, doing the deletes, and then adding the article back to the publication (roughly as in the sketch below)? This worked in my dev environments when I was faced with the same challenge.

    John.
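
    In case it helps, a rough outline of that in T-SQL; the publication (MyPub), article (BigTable), subscriber (SUB1), and subscriber database (MyDB) names are placeholders:

    -- At the publisher, in the publication database
    -- 1. Drop the subscriptions to the article, then the article itself
    EXEC sp_dropsubscription @publication = N'MyPub', @article = N'BigTable',
                             @subscriber  = N'all';
    EXEC sp_droparticle      @publication = N'MyPub', @article = N'BigTable',
                             @force_invalidate_snapshot = 1;

    -- 2. Run the mass DELETE here; with the article dropped, the commands
    --    should no longer be queued for the subscribers

    -- 3. Add the article back and re-subscribe it
    EXEC sp_addarticle      @publication = N'MyPub', @article = N'BigTable',
                            @source_owner = N'dbo',  @source_object = N'BigTable';
    EXEC sp_addsubscription @publication = N'MyPub', @article = N'BigTable',
                            @subscriber  = N'SUB1',  @destination_db = N'MyDB',
                            @subscription_type = N'Push';

    -- 4. Run the Snapshot Agent so the re-added article is delivered fresh
    EXEC sp_startpublication_snapshot @publication = N'MyPub';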
