• Sergiy (2/8/2016)


    Igor Micev (2/5/2016)


That is not good thinking. What if you have many schema-bound objects on that table?

The advice from ScottPletcher is just fine. Changing from smallint to int should not be a serious problem, apart perhaps from the time required to finish it all.

    You obviously don't know what you are talking about.

    You never tried it on a big table, so for you it "should not be a serious problem".

Hm, not true. I've tried it on both huge and small tables. It depends on how frequently updates are made to the table.

Increasing the capacity of a column in a table means a split on every page occupied by the table.

    Which means effectively doubling the space needed to accommodate the table.

With corresponding requirements on the log file.

As for "maybe the time required to finish it all" - hours and hours with the system offline.

If it's a table with only a few columns, it certainly won't take hours, not even minutes. If it's a 20+ column table, the space won't be doubled just by widening one column from smallint to int. And yes, the transaction log file will grow a lot.
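As a minimal sketch of the in-place change being debated (table and column names here are hypothetical), the ALTER itself is one statement, but widening a fixed-length column rewrites every row, which is what drives the page splits and log growth:

```sql
-- Hypothetical names; check the current size before altering.
EXEC sp_spaceused 'dbo.Orders';

-- Widening a fixed-length column is a size-of-data operation:
-- every row is rewritten, so expect page splits and heavy logging.
-- Keep the column's original NULL/NOT NULL setting.
ALTER TABLE dbo.Orders
    ALTER COLUMN CustomerId int NOT NULL;

-- Rebuild afterwards to reclaim the space left by the splits.
ALTER INDEX ALL ON dbo.Orders REBUILD;
```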

Just to find out that after 20 hours the system has run out of space and started rolling back the transaction.

    Which will take almost twice as long.

    By the end of the week, when the system is back online, your stuff is already packed into a box and ready for you to carry it away.

Scott expects something to happen; that's why he included item 4 in his plan, which repeats my plan in shorter terms.

The point is not to run the operation for such a long time.

What if your table gets updates every millisecond? Such tables should by nature be small; otherwise your table will become huge. You usually have a scheduled data-archiving process.

If the table is just big and has infrequent updates, then your approach is just fine.

As for dependencies - it's not that hard to find them and script their dropping and recreation.
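Finding those dependencies can be scripted; for example, the catalog views expose every referencing object and whether the reference is schema-bound (table name here is hypothetical):

```sql
-- Objects (views, procedures, functions) that reference the table.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.Orders', 'OBJECT');

-- Or via the dependencies catalog view, including schema binding:
SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS schema_name,
       OBJECT_NAME(d.referencing_id)        AS object_name,
       d.is_schema_bound_reference
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id = OBJECT_ID('dbo.Orders');
```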

    Actually, SSMS can do it all for you.

Open the table in Design view, change the column type, and instead of saving hit the "Generate Change Script" button - that gives you a script to start from.
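The script SSMS generates typically follows a copy-and-swap pattern along these lines (illustrative only - the real script depends on the table's columns, constraints, and indexes):

```sql
-- Shape of a typical SSMS change script (illustration, hypothetical names).
BEGIN TRANSACTION;

CREATE TABLE dbo.Tmp_Orders (
    OrderId    int NOT NULL,
    CustomerId int NOT NULL   -- widened from smallint
);

IF EXISTS (SELECT * FROM dbo.Orders)
    INSERT INTO dbo.Tmp_Orders (OrderId, CustomerId)
    SELECT OrderId, CustomerId
    FROM dbo.Orders WITH (HOLDLOCK TABLOCKX);

DROP TABLE dbo.Orders;
EXECUTE sp_rename N'dbo.Tmp_Orders', N'Orders', 'OBJECT';

COMMIT;
```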

But schema-bound objects are not the only associated problem you need to solve.

Any procedure, view, function, or SSIS script that mentions this column needs to be revised so it doesn't generate run-time errors or performance issues due to implicit data type conversions caused by the column's new data type.
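For example (hypothetical procedure), code that still declares the old type keeps the narrow range and will hit arithmetic overflow once values pass 32,767:

```sql
-- Hypothetical procedure written before the column was widened.
CREATE PROCEDURE dbo.GetOrdersForCustomer
    @CustomerId smallint  -- must be changed to int: values > 32767
                          -- passed in raise an overflow error
AS
BEGIN
    SELECT OrderId
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;  -- the smallint parameter is
                                     -- implicitly converted to int here,
                                     -- but the parameter itself still
                                     -- cannot hold the larger values
END;
```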

    Changing schema of a live database should not be taken lightly.

This depends very much on the density of dependencies with the other objects in the database. There are databases with 50 tables and databases with 500 tables. It's different.

    I'm aware of your approach, anyway.

How would you apply your approach to a table that is updated every second? You may lose data consistency, especially if many foreign keys in other tables point to it.

Finally, if the operation takes an unacceptably long time, then this is also a good approach:

    My favorite choices now, in order:

    1) do nothing, tell the one customer no

2) add a column with the proper data type, rename the old column to something like "archive", rename the new column to the old name, check all dependencies, copy the most recent months of data into the new column from the archive.
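Option 2 can be sketched like this, with hypothetical names (constraints, indexes, and dependent code still need separate handling):

```sql
-- 1. Add a new column with the proper data type.
ALTER TABLE dbo.Orders ADD CustomerId_new int NULL;

-- 2. Swap the names (quick metadata-only operations).
EXEC sp_rename 'dbo.Orders.CustomerId',     'CustomerId_archive', 'COLUMN';
EXEC sp_rename 'dbo.Orders.CustomerId_new', 'CustomerId',         'COLUMN';

-- 3. Copy the most recent months back from the archive column in
--    batches, to keep log growth and blocking bounded.
--    Repeat until no rows are affected; the cutoff is hypothetical.
UPDATE TOP (10000) dbo.Orders
SET CustomerId = CustomerId_archive
WHERE CustomerId IS NULL
  AND OrderDate >= DATEADD(MONTH, -3, GETDATE());
```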

    Igor Micev,My blog: www.igormicev.com