• Great article! I used almost exactly the same method about 3 months ago to change the clustered index on our largest table (which gets hit by hundreds of transactions per second), and it went off without a hitch. Change tracking is a great addition to SQL Server; I also use it to incrementally replicate changes from hundreds of tables across several SQL Server instances into one Netezza data warehouse.

    The change I would make is that instead of doing the inserts, updates, and deletes separately, you can do them all at once using the MERGE statement, which syncs the data roughly twice as fast. For the code sample below, I generate the various <ColumnList> placeholders using several UDFs that take the table name as a parameter and query syscolumns, sysobjects, INFORMATION_SCHEMA.TABLE_CONSTRAINTS, and INFORMATION_SCHEMA.KEY_COLUMN_USAGE to build the field lists (with aliases hard-coded in). A sketch of one such helper follows.
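    For illustration, here is a minimal sketch of what one of those helpers might look like. The function name (dbo.GetColumnList), the alias parameter, and the use of INFORMATION_SCHEMA.COLUMNS are my assumptions, not the exact UDFs described above:

    -- Hypothetical helper: builds a comma-separated, alias-prefixed column list
    -- for a given table (e.g. 'c.[Col1], c.[Col2], ...'). The original UDFs
    -- also consulted syscolumns/sysobjects and the key-usage views to split
    -- out PK vs. non-PK field lists.
    CREATE FUNCTION dbo.GetColumnList (@TableName SYSNAME, @Alias SYSNAME)
    RETURNS NVARCHAR(MAX)
    AS
    BEGIN
        RETURN STUFF((
            SELECT ', ' + @Alias + '.' + QUOTENAME(COLUMN_NAME)
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = @TableName
            ORDER BY ORDINAL_POSITION
            FOR XML PATH('')), 1, 2, '');
    END;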

    MERGE NewTable AS p
    USING (SELECT <ColumnList>
           FROM CHANGETABLE(CHANGES OldTable, @last_sync_version) c
           LEFT OUTER JOIN OldTable o ON <o.PkList = c.PkList>
           WHERE c.SYS_CHANGE_VERSION <= @CurrentVersion) AS CT
    ON <CT.PkList = p.PkList>
    WHEN MATCHED AND CT.SYS_CHANGE_OPERATION = 'D'
        THEN DELETE
    WHEN MATCHED AND CT.SYS_CHANGE_OPERATION IN ('I', 'U')
        THEN UPDATE SET <UpdateFieldList>
    WHEN NOT MATCHED BY TARGET AND CT.SYS_CHANGE_OPERATION IN ('I', 'U')
        THEN INSERT (<InsertFieldList>) VALUES (<ValuesFieldList>)
    OUTPUT $action;
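    For completeness, the two version variables come from SQL Server's built-in change tracking functions. This is a minimal sketch of how they might be populated; the table name and the idea of persisting @CurrentVersion between runs are my assumptions:

    -- Capture the current version up front; anything newer is deferred
    -- to the next sync cycle.
    DECLARE @CurrentVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

    -- @last_sync_version would normally be read from wherever the previous
    -- run persisted it. If it has fallen behind the minimum valid version,
    -- change tracking history has been cleaned up and a full reload is needed.
    DECLARE @last_sync_version BIGINT = 0; -- placeholder: load from sync metadata
    IF @last_sync_version < CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.OldTable'))
        RAISERROR('Last sync version is too old; a full re-initialization is required.', 16, 1);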

    In practice, I actually put the results of the SELECT into a temp table first, since it seemed to help with blocking, and put the results of the OUTPUT clause into another table for logging purposes. A sketch of that variation is below.
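    Here is a rough sketch of that variation, using the same placeholder conventions as the MERGE above. The temp table shape, the log table name (dbo.SyncLog), and its columns are illustrative assumptions:

    -- Stage the change set first so the MERGE holds locks on OldTable
    -- for as short a time as possible.
    SELECT <ColumnList>, c.SYS_CHANGE_OPERATION
    INTO #Changes
    FROM CHANGETABLE(CHANGES OldTable, @last_sync_version) c
    LEFT OUTER JOIN OldTable o ON <o.PkList = c.PkList>
    WHERE c.SYS_CHANGE_VERSION <= @CurrentVersion;

    MERGE NewTable AS p
    USING #Changes AS CT
    ON <CT.PkList = p.PkList>
    WHEN MATCHED AND CT.SYS_CHANGE_OPERATION = 'D'
        THEN DELETE
    -- ...remaining WHEN clauses identical to the MERGE above...
    -- Route the action taken for each row into a logging table instead of
    -- returning it to the client.
    OUTPUT $action INTO dbo.SyncLog (MergeAction);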