Replicated database's log file grows dramatically when only a few rows are deleted/inserted

  • I have a publisher database set up for merge replication. It uses a parameterized row filter together with join filters.

    I also have a stored procedure that deletes and inserts rows in the table the parameterized filter is applied to, in order to change a subscriber's eligibility to receive particular data. I have observed that running this stored procedure takes an extraordinarily long time and, as a result, the log file grows to 1.5 - 2.5 times the size of the database.
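
    For context, the procedure is essentially a bulk delete followed by a bulk re-insert against the filtered table. A minimal sketch is below; the object and column names are hypothetical placeholders, not the real ones:

    -- Minimal sketch of the eligibility-change procedure (all names are hypothetical).
    -- It removes a subscriber's current eligibility rows and inserts the new ones;
    -- merge replication then has to track every one of those deletes and inserts.
    CREATE PROCEDURE dbo.usp_ChangeSubscriberEligibility
        @SubscriberId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;

        -- Remove the rows that currently make this subscriber eligible
        DELETE FROM dbo.SubscriberEligibility
        WHERE SubscriberId = @SubscriberId;

        -- Insert the new eligibility rows from a staging table
        INSERT INTO dbo.SubscriberEligibility (SubscriberId, RegionId)
        SELECT SubscriberId, RegionId
        FROM dbo.NewEligibilityStaging
        WHERE SubscriberId = @SubscriberId;

        COMMIT TRANSACTION;
    END;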

    At first I reasoned that this might be because I had the publication set up to use precomputed partitions, and changing the filtered data requires recalculating the partitions. As a test, I turned off precomputed partitions. Didn't help. I then turned on "optimize synchronization", AKA "keep_partition_changes", which normally is not available when precomputed partitions are on (roughly as sketched below), and that didn't help either.
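
    For reference, this is roughly how I toggled the two options; the publication name is a placeholder, and changing use_partition_groups may force a new snapshot:

    -- Sketch of how the two publication options can be toggled
    -- (publication name is a placeholder).
    -- 'use_partition_groups' = precomputed partitions;
    -- 'keep_partition_changes' = "optimize synchronization".
    EXEC sp_changemergepublication
        @publication = N'MyMergePublication',
        @property = N'use_partition_groups',
        @value = N'false',
        @force_invalidate_snapshot = 1,
        @force_reinit_subscription = 1;

    EXEC sp_changemergepublication
        @publication = N'MyMergePublication',
        @property = N'keep_partition_changes',
        @value = N'true',
        @force_invalidate_snapshot = 1,
        @force_reinit_subscription = 1;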

    At this point I think I can rule out precomputed partitions as the problem, but I'm now stumped as to what else I can do to reduce the amount of log writing required. We do need the parameterized filters and the join filters, so those can't go.
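
    In case numbers help, a minimal way to quantify the log growth around a single run (using the hypothetical procedure name from the sketch above):

    -- Capture log size/usage immediately before and after one run of the procedure
    -- (procedure name is the hypothetical one from the sketch above).
    DBCC SQLPERF(LOGSPACE);   -- log size and % used, before

    EXEC dbo.usp_ChangeSubscriberEligibility @SubscriberId = 42;

    DBCC SQLPERF(LOGSPACE);   -- log size and % used, after
    DBCC OPENTRAN;            -- check for a long-running open transaction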

    Any suggestions/ideas?
