Thanks for writing this article. I have been meaning to read it since it was posted about a week ago. Very good advice for anyone stuck in the position of needing to clean up massive change log tables that have accumulated.

As it happens, today I am onsite at a client where we manage an upsert process (one that includes dealing with - argh - deletes on the source system). The target tables average between 750 million and 1.2 billion rows (yes, properly tuned, SQL Server does scale well), so we are VERY concerned with the constant growth of our change_log tables.

We employ a *fifth* strategy which you didn't mention: a process that constantly removes records older than *n* days from the change_log tables. We simply prune them on a daily basis via tasks late in the ETL, and occasionally write the remaining data out to temp tables and back in as a way to manage fragmentation on disk.
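For anyone wanting to try the pruning approach, the daily cleanup step can be sketched roughly like the T-SQL below. The table and column names (`dbo.change_log`, `change_date`) and the retention window are illustrative, not from the article; the key idea is deleting in small batches so the transaction log and lock footprint stay manageable on very large tables:

```sql
-- Assumed schema: dbo.change_log(..., change_date datetime2, ...)
-- with a supporting index on change_date so each batch is a seek, not a scan.
DECLARE @RetentionDays int = 30;       -- "n" days of change history to keep
DECLARE @BatchSize     int = 50000;    -- tune to your log/lock tolerance
DECLARE @Cutoff        datetime2 = DATEADD(DAY, -@RetentionDays, SYSUTCDATETIME());

WHILE 1 = 1
BEGIN
    -- Each iteration is its own implicit transaction, so the log
    -- can truncate/back up between batches instead of growing unbounded.
    DELETE TOP (@BatchSize)
    FROM dbo.change_log
    WHERE change_date < @Cutoff;

    IF @@ROWCOUNT < @BatchSize BREAK;  -- last partial batch: we're done
END
```

Scheduled as a late step in the nightly ETL (as described above), this keeps the table at a rolling *n*-day window; the occasional copy-out/copy-back pass, or an `ALTER INDEX ... REBUILD`, then handles the fragmentation the deletes leave behind.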