• Sergiy (2/9/2016)


    Igor Micev (2/9/2016)


    Then, won't you meet that problem again?

    With my approach?

    No.

    Read it again:

    1. Create a new table with desired table design.

    2. Start populating the new table with portions of data from the existing table.

    Choose an appropriate chunk size, so you do not overwhelm your transaction log, and take TRN backups between the INSERT statements.
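    The first two steps might look like this in T-SQL. This is only a sketch under assumptions: the table, column, key and database names (dbo.BigTable, Id, Payload, MyDb) and the backup path are all illustrative, not from the post.

```sql
-- Step 1: create the new table with the desired design (names are illustrative).
CREATE TABLE dbo.BigTable_New
(
    Id      int           NOT NULL PRIMARY KEY,
    Payload nvarchar(200) NULL          -- the redesigned column
);

-- Step 2: copy in key-ordered chunks so each batch stays small in the log.
DECLARE @BatchSize int = 50000,
        @LastId    int = 0,
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.BigTable_New (Id, Payload)
    SELECT TOP (@BatchSize) Id, Payload
    FROM   dbo.BigTable
    WHERE  Id > @LastId
    ORDER BY Id;

    SET @Rows = @@ROWCOUNT;
    SELECT @LastId = MAX(Id) FROM dbo.BigTable_New;

    -- Keep the transaction log in check between batches (path is illustrative).
    BACKUP LOG MyDb TO DISK = N'C:\Backups\MyDb.trn';
END;
```

    Walking the key with `WHERE Id > @LastId ... ORDER BY Id` keeps each batch a cheap range scan instead of re-reading already-copied rows.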

    It may take many hours or even days - it does not matter, as the original table is still operational.

    Make sure that any changes made to the data in the original table get copied to the new one.
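    The post does not prescribe a mechanism for keeping the copies in step, so this trigger is just one possible sketch (same hypothetical names as above): it mirrors every change on the old table into the new one while the background copy runs.

```sql
-- Hypothetical sync trigger: replay INSERT/UPDATE/DELETE from the old
-- table into the new one. Assumes Id is the immutable key.
CREATE TRIGGER dbo.trg_BigTable_Sync
ON dbo.BigTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove rows that were deleted or are about to be replaced.
    DELETE n
    FROM   dbo.BigTable_New AS n
    WHERE  n.Id IN (SELECT Id FROM deleted);

    -- Insert the current version of new or changed rows.
    INSERT INTO dbo.BigTable_New (Id, Payload)
    SELECT Id, Payload FROM inserted;
END;
```

    Other designs (change tracking, a staging queue) would work too; the point is only that nothing committed to the old table is lost before the final swap.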

    Where do you see anything about altering the existing table?

    While the data is being copied you have plenty of time to prepare and test all the scripts for indexes, privileges, depending objects, etc.

    When everything is ready - lock the whole table in a transaction, copy the last bit of freshly changed data, drop the old table, rename the new one and run the prepared script for the dependencies.

    If everything went fine - COMMIT.

    Otherwise - ROLLBACK.

    Either way - it's done within seconds (as you modify only system catalogues), with no noticeable interruption to normal operation.
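    The final swap described above could be sketched like this (object names are the same assumptions as before; TABLOCKX with HOLDLOCK holds an exclusive lock on the old table for the duration of the transaction):

```sql
BEGIN TRANSACTION;

    -- Exclusively lock the old table so no further changes slip in.
    SELECT TOP (0) * FROM dbo.BigTable WITH (TABLOCKX, HOLDLOCK);

    -- Copy the last bit of freshly changed data here (sync logic elided).

    -- Metadata-only operations - done within seconds.
    DROP TABLE dbo.BigTable;
    EXEC sp_rename N'dbo.BigTable_New', N'BigTable';

    -- Run the prepared script for indexes, privileges and dependent objects here.

COMMIT TRANSACTION;   -- or ROLLBACK if anything failed
```

    Both DROP TABLE and sp_rename are transactional in SQL Server, which is what makes the ROLLBACK path safe: on failure the old table and the sync process are still in place.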

    You end up either with the updated column, or (in case of a failure) with the same two tables, old and new, and the background process which keeps synchronising the data between them.

    Fix the errors in your script and repeat the attempt at an appropriate moment.

    Not so different from my approach. I do a column drop and rename, and you do a table drop and rename. The prepared dependency scripts then run for creation afterwards within a transaction, all steps included. In case of a failure the table is still alive.
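    For comparison, the column-level variant described here could be sketched as follows. The names are hypothetical, and it assumes a new column (Payload_New) was already populated in the background:

```sql
BEGIN TRANSACTION;

    -- Drop the old column and rename the pre-populated replacement.
    ALTER TABLE dbo.BigTable DROP COLUMN Payload;
    EXEC sp_rename N'dbo.BigTable.Payload_New', N'Payload', N'COLUMN';

    -- Recreate the prepared dependencies here, all steps, in the same transaction.

COMMIT TRANSACTION;   -- on failure, ROLLBACK and the table is still alive
```

    The shape mirrors the table-swap approach: background population first, then a short metadata-only transaction for the switch.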

    Igor Micev, My blog: www.igormicev.com