Change Tracking and Database Refactoring

  • Comments posted to this topic are about the item Change Tracking and Database Refactoring

    Luke C
    MCSE: Data Platform, MCP, MCTS, MCITP - Database Administrator & Database Developer

  • Edit: Step 3 at the bottom of the article states, "Add all constraints to the new table and all referencing foreign keys. These were added with the 'no check' option and then enabled. This avoided a lengthy recheck of all keys." Although this method is fast, it leaves the foreign keys in an "untrusted" state, which means the optimizer can't rely on them. In the future we'll be creating the FKs with the CHECK option instead of NOCHECK.
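
    For anyone who wants to detect this, a quick sketch (the table and constraint names here are just examples, not from the article):

    -- Find foreign keys that were enabled without revalidation (untrusted)
    SELECT name, is_not_trusted
    FROM sys.foreign_keys
    WHERE is_not_trusted = 1;

    -- Revalidate a key so the optimizer can trust it again
    -- (example table/constraint names)
    ALTER TABLE Person.Person_new WITH CHECK CHECK CONSTRAINT FK_Person_new_BusinessEntity;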

    Luke C
    MCSE: Data Platform, MCP, MCTS, MCITP - Database Administrator & Database Developer

  • Hi Luke,

    Great article! Thanks very much for sharing.

    I was a little curious though... given that the CRUD statements are done automatically by your stored proc, where do you do the actual data refactoring (i.e. converting from non-unicode to unicode)? Did I miss something?

    Cheers,

    Dan

  • Hi Dan. In this example we are going from nvarchar data types to varchar. The Person.Person_nonunicode table contains these changes.
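
    Roughly speaking, the shadow table just redeclares the affected columns, something like this (column list trimmed to a sketch; see the article for the full definition):

    -- Illustrative shape only: same structure as Person.Person,
    -- with the nvarchar columns redeclared as varchar
    CREATE TABLE Person.Person_nonunicode
    (
        BusinessEntityID int NOT NULL PRIMARY KEY,
        FirstName varchar(50) NOT NULL, -- was nvarchar(50)
        LastName varchar(50) NOT NULL   -- was nvarchar(50)
        -- ...remaining columns as in the original table
    );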

    Luke C
    MCSE: Data Platform, MCP, MCTS, MCITP - Database Administrator & Database Developer

  • Great formatting and explanation in the article.

    For a followup article, maybe you could go into more detail about where the changed data is located internally in SQL Server:

    FROM [person].[person] source
    INNER JOIN CHANGETABLE(CHANGES [person].[person], 0) ct
        ON source.BusinessEntityID = ct.BusinessEntityID
    WHERE ct.SYS_CHANGE_OPERATION = 'I' AND ct.SYS_CHANGE_VERSION <= 4

    Explain the CHANGETABLE(CHANGES [person].[person], 0) for us.

    Thanks,

    Thomas

    Thomas LeBlanc, MVP Data Platform Consultant

  • Great article! I used almost exactly the same method about 3 months ago to change the clustered index on our largest table (which gets hit by hundreds of transactions per second), and it went off without a hitch. Change tracking is a great addition to SQL Server; I also use it to incrementally apply changes to hundreds of tables from several SQL servers to one Netezza data warehouse.

    The change I would make is that instead of doing the inserts, updates, and deletes separately, you can do them all at once using the MERGE statement. This will sync up the data roughly twice as fast. For the code sample below, I generate the various <ColumnList> placeholders using several UDFs that take the table name as a parameter and use syscolumns, sysobjects, INFORMATION_SCHEMA.TABLE_CONSTRAINTS, and INFORMATION_SCHEMA.KEY_COLUMN_USAGE to build the field lists (with aliases hard-coded in).

    MERGE NewTable AS p
    USING (SELECT <ColumnList>
           FROM CHANGETABLE(CHANGES OldTable, @last_sync_version) c
           LEFT OUTER JOIN OldTable o ON <o.PkList = c.PkList>
           WHERE c.SYS_CHANGE_VERSION < @CurrentVersion) AS CT
        ON <CT.PkList = p.PkList>
    WHEN MATCHED AND CT.SYS_CHANGE_OPERATION = 'D'
        THEN DELETE
    WHEN MATCHED AND CT.SYS_CHANGE_OPERATION IN ('I', 'U')
        THEN UPDATE SET <UpdateFieldList>
    WHEN NOT MATCHED BY TARGET AND CT.SYS_CHANGE_OPERATION IN ('I', 'U')
        THEN INSERT (<InsertFieldList>) VALUES (<ValuesFieldList>)
    OUTPUT $action;

    In practice, I actually put the results of the SELECT into a temp table first since it seemed to help with blocking, and put the results of the output clause into another table for logging purposes.
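
    That variation looks roughly like this (the staging and log table names are placeholders):

    -- Stage the changed rows first; it seemed to reduce blocking
    SELECT <ColumnList>
    INTO #Changes
    FROM CHANGETABLE(CHANGES OldTable, @last_sync_version) c
    LEFT OUTER JOIN OldTable o ON <o.PkList = c.PkList>
    WHERE c.SYS_CHANGE_VERSION < @CurrentVersion;

    MERGE NewTable AS p
    USING #Changes AS CT
        ON <CT.PkList = p.PkList>
    -- ...same WHEN clauses as above...
    OUTPUT $action, <CT.PkList> INTO dbo.SyncLog; -- keep a row-level audit trail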

  • Whatever happened to ALTER TABLE ... ALTER COLUMN?

  • Great suggestion Thomas! I may do that in the near future.
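
    In the meantime, the short version: SQL Server stores the tracked rows in internal tables named change_tracking_<object_id>, and CHANGETABLE(CHANGES [person].[person], 0) asks for the net change of every row modified since version 0, i.e. since tracking was enabled on the table. A quick sketch for poking around:

    -- The internal tables that back change tracking
    SELECT it.[name] AS internal_table,
           OBJECT_NAME(it.parent_object_id) AS tracked_table
    FROM sys.internal_tables it
    WHERE it.internal_type_desc = 'CHANGE_TRACKING';

    -- In production you'd pass your last synced version instead of 0,
    -- after validating that it is still available:
    SELECT CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('person.person')) AS min_valid_version,
           CHANGE_TRACKING_CURRENT_VERSION() AS current_version;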

    Luke C
    MCSE: Data Platform, MCP, MCTS, MCITP - Database Administrator & Database Developer

  • Very cool, ssandler! I didn't even think of using MERGE.

    Luke C
    MCSE: Data Platform, MCP, MCTS, MCITP - Database Administrator & Database Developer

  • Hi Rick. That's still a viable option, but we wanted to avoid locking the table and minimize downtime during these changes.
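
    For a data type change like nvarchar to varchar, the single-statement version is a size-of-data operation, so it rewrites every row while holding a schema-modification lock on the table, roughly:

    -- Blocks all access to Person.Person for the duration of the rewrite
    ALTER TABLE Person.Person ALTER COLUMN FirstName varchar(50) NOT NULL;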

    Luke C
    MCSE: Data Platform, MCP, MCTS, MCITP - Database Administrator & Database Developer

  • Nice article - I like this approach and will definitely experiment with it the next time I have a change on a large table.

    I think you can improve the conciseness and robustness of your synch proc by leveraging the PARSENAME(...) function for the object name manipulation near the top.
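
    For anyone who hasn't used it, PARSENAME splits a dotted name into its parts, numbered from the right (1 = object, 2 = schema, 3 = database, 4 = server):

    SELECT PARSENAME('AdventureWorks.Person.Person', 1) AS object_name,   -- Person
           PARSENAME('AdventureWorks.Person.Person', 2) AS schema_name,   -- Person
           PARSENAME('AdventureWorks.Person.Person', 3) AS database_name; -- AdventureWorks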

    Good job.

    TroyK
