Will ALTER TABLE modify a column with the same type or skip it?

  • Hello, All

Perhaps a silly question, but it may significantly impact the business.

We have recently found out that some columns across live deployments have different types: for example, country_id can be INT at one customer and BIGINT at another, or country_name can be VARCHAR(100) here and NVARCHAR(100) there.

We only found the problem because a 3rd-party product querying these columns was working for one customer and failing for another.

    The business took a decision to bring all the DBs to a common denominator and we have written ALTER TABLE scripts.

My question is: if I run something like

ALTER TABLE products ALTER COLUMN country_id INT

for a DB where country_id is already INT, will the DB still run the update, or is the engine smart enough to recognise it is the same type and skip the update, please?

  • I suggest that you do a check on the current datatype of the column and execute the ALTER only where the datatype is not an INT. Here is some sample code to demonstrate one way of achieving this:

DROP TABLE IF EXISTS #SomeData;

CREATE TABLE #SomeData
(
    SomeCol VARCHAR(100)
);

DECLARE @CurrentType VARCHAR(50);

-- First pass: the column is VARCHAR(100), so the ALTER runs.
SELECT @CurrentType = system_type_name
FROM sys.dm_exec_describe_first_result_set('SELECT * FROM #SomeData WHERE 1 = 0', NULL, 0)
WHERE name = 'SomeCol';

SELECT @CurrentType;

IF @CurrentType <> 'int'
BEGIN
    PRINT 'Executing the ALTER now';

    ALTER TABLE #SomeData ALTER COLUMN SomeCol INT;
END;

-- Second pass: the column is now INT, so the ALTER is skipped.
SELECT @CurrentType = system_type_name
FROM sys.dm_exec_describe_first_result_set('SELECT * FROM #SomeData WHERE 1 = 0', NULL, 0)
WHERE name = 'SomeCol';

SELECT @CurrentType;

IF @CurrentType <> 'int'
BEGIN
    PRINT 'Executing the ALTER now';

    ALTER TABLE #SomeData ALTER COLUMN SomeCol INT;
END;

You will see that the second PRINT statement does not get executed once the column's datatype has been switched to INT.
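The same check can be generalised so that you do not hand-write one IF per column. A sketch (the #TargetTypes driver table and its contents are illustrative, not from the original post) that generates the ALTER statements only where the live type differs from the intended one, using INFORMATION_SCHEMA:

```sql
-- Hypothetical driver table: the type you want each column to end up with.
CREATE TABLE #TargetTypes
(
    TableName  SYSNAME,
    ColumnName SYSNAME,
    TargetType VARCHAR(50)   -- e.g. 'int', 'varchar(100)'
);

INSERT INTO #TargetTypes VALUES ('products', 'country_id', 'int');

-- Generate ALTER statements only where the live type differs from the target.
SELECT 'ALTER TABLE ' + QUOTENAME(c.TABLE_NAME)
     + ' ALTER COLUMN ' + QUOTENAME(c.COLUMN_NAME) + ' ' + t.TargetType + ';'
FROM INFORMATION_SCHEMA.COLUMNS AS c
JOIN #TargetTypes AS t
  ON  t.TableName  = c.TABLE_NAME
  AND t.ColumnName = c.COLUMN_NAME
WHERE c.DATA_TYPE
      + CASE WHEN c.CHARACTER_MAXIMUM_LENGTH IS NOT NULL
             THEN '(' + CAST(c.CHARACTER_MAXIMUM_LENGTH AS VARCHAR(10)) + ')'
             ELSE '' END
      <> t.TargetType;
```

This sketch ignores nullability, precision/scale for DECIMAL, and the -1 that CHARACTER_MAXIMUM_LENGTH reports for MAX types; you would need to extend it for those cases before using it in anger.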

    • This reply was modified 4 months, 1 week ago by  Phil Parkin.


If SQL Server detects the new definition to be the same as the current definition, it will not act.

However, if there are data type storage alterations, it will have to rebuild the full table (fixed data types changing length, var data types changing to MAX or from MAX to a given length, changing to N(VAR)CHAR, NULL, ...).

    Check SQL Server Data Types
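One rough way to see the difference between a no-op and a rebuild is to compare transaction log activity before and after each ALTER. This is a sketch for a quiet test database only; sys.fn_dblog is undocumented, and its raw counts cover the whole active log, so look at the difference between runs rather than the absolute numbers:

```sql
-- Test database only: sys.fn_dblog is undocumented and the counts are noisy.
CREATE TABLE dbo.AlterDemo (Id INT);
INSERT INTO dbo.AlterDemo VALUES (1), (2), (3);

SELECT COUNT(*) AS LogRecordsBefore FROM sys.fn_dblog(NULL, NULL);
ALTER TABLE dbo.AlterDemo ALTER COLUMN Id INT;      -- same definition: metadata only
SELECT COUNT(*) AS LogRecordsAfterSameType FROM sys.fn_dblog(NULL, NULL);

ALTER TABLE dbo.AlterDemo ALTER COLUMN Id BIGINT;   -- storage change: rows are touched
SELECT COUNT(*) AS LogRecordsAfterWiden FROM sys.fn_dblog(NULL, NULL);

DROP TABLE dbo.AlterDemo;
```

The jump in log records should be visibly larger for the INT-to-BIGINT change than for the same-type ALTER, reflecting the row-by-row storage alteration described above.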


    Johan


  • I agree with Phil here on the process. BUT I would strongly suggest you do NOT make changes to production systems until you verify the changes on a test system.

    So my first step would be to build up the script you are wanting to run, be it a generic ALTER TABLE or the more complex, but safer approach that Phil provided, and then test that on a test system that is a clone of each of the live systems. Then you can verify that the change fixes the problem and doesn't introduce new problems. If it does fix it without making new ones, get someone who is familiar with the data to do some data validation and if everyone is happy, then go live with it. Doing it on test also gives you a good idea of the downtime that will be required.

    I would be cautious about changing to a smaller datatype. What I mean is changing from INT to BIGINT shouldn't cause any issues as ALL INTs fit inside BIGINT with room to spare. BUT if the datatype is already BIGINT and you are changing it to INT, it could fail.

    Same problem can occur if you go from NVARCHAR to VARCHAR.

    Now, offhand, I am not certain if it will fail due to an overflow OR if it'll succeed and set any overflow values to NULL as I've not tried it before. With the NVARCHAR to VARCHAR, you may lose some characters or have characters changed which is not desirable. But the BIGINT to INT I THINK will fail if there are any bigint values in there that don't fit in an int.

    My approach would not be to switch a BIGINT to an INT, but to go the other way.

    I would also be looking into that 3rd party application. If it is failing due to an INT overflow (for example), then casting your BIGINT to INT is going to fail. If the app is failing due to the NVARCHAR stuff, you will likely want to contact the app vendor to fix that bug and not change the database datatypes. I know if I was a user of a tool and the suggestion was to change from NVARCHAR to VARCHAR and they did that change without talking to me and without testing and I had all my multi-language strings come back in the English alphabet only, I'd not have very nice words for the team that made that change...

    Now, I do agree that your datatypes should be consistent across the databases as otherwise you can run into slowdowns during your QA process as you should be testing on all configurations. If the datatypes are the same across all databases and tables, then the QA process is a lot easier to do. Same thing with UAT - you wouldn't need to have every database tested, you could pick a few randomly and have the sample do UAT.

    The above is all just my opinion on what you should do. 
As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
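The narrowing questions above (BIGINT to INT, NVARCHAR to VARCHAR) are cheap to answer on a scratch table. A sketch with illustrative names that tries both conversions and reports the outcome rather than assuming it:

```sql
-- Scratch test: what happens when narrowing BIGINT -> INT and NVARCHAR -> VARCHAR?
DROP TABLE IF EXISTS #Narrow;
CREATE TABLE #Narrow (BigVal BIGINT, WideText NVARCHAR(20));
INSERT INTO #Narrow VALUES (3000000000, N'日本語');  -- 3 billion does not fit in an INT

BEGIN TRY
    ALTER TABLE #Narrow ALTER COLUMN BigVal INT;
    PRINT 'BIGINT -> INT succeeded';
END TRY
BEGIN CATCH
    PRINT 'BIGINT -> INT failed: ' + ERROR_MESSAGE();
END CATCH;

BEGIN TRY
    ALTER TABLE #Narrow ALTER COLUMN WideText VARCHAR(20);
    -- Inspect whether the characters survived the collation's code page.
    SELECT WideText FROM #Narrow;
END TRY
BEGIN CATCH
    PRINT 'NVARCHAR -> VARCHAR failed: ' + ERROR_MESSAGE();
END CATCH;
```

On a typical Latin collation, expect the out-of-range BIGINT to raise an arithmetic overflow error and the non-Latin characters to come back mangled rather than erroring, but run it yourself to confirm on your collation and version.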

If you implement this, I would do it in batches; on the columns you widen, you are going to get some hellacious page splits.
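One batched pattern for widening (a sketch with illustrative names, not the poster's script): add a new column of the target type, backfill it in chunks, then swap during a short window, instead of one giant in-place ALTER:

```sql
-- Sketch: widen dbo.products.country_id to BIGINT in batches.
ALTER TABLE dbo.products ADD country_id_new BIGINT NULL;

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- Copy 10,000 not-yet-migrated rows per iteration to limit log growth.
    UPDATE TOP (10000) p
    SET    country_id_new = country_id
    FROM   dbo.products AS p
    WHERE  country_id_new IS NULL
      AND  country_id IS NOT NULL;

    SET @rows = @@ROWCOUNT;
    -- Take log backups between batches on busy systems to keep the log in check.
END;

-- Final swap during a maintenance window (recreate any constraints/indexes
-- that reference the old column as needed):
EXEC sp_rename 'dbo.products.country_id', 'country_id_old', 'COLUMN';
EXEC sp_rename 'dbo.products.country_id_new', 'country_id', 'COLUMN';
-- ALTER TABLE dbo.products DROP COLUMN country_id_old;  -- only after verification
```

The batched backfill keeps each transaction small, so the page-split and log pain is spread out rather than taken in one long blocking operation.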

  • Thank you all for your help and suggestions.

  • Have you considered just making a large test table and testing it?
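A quick way to stand up a large table for such a test (a sketch; the table name and row count are arbitrary, and the cross join of a system view just supplies rows):

```sql
-- Build a 1,000,000-row test table to time the planned ALTER against.
DROP TABLE IF EXISTS dbo.AlterTest;

SELECT TOP (1000000)
       CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS INT) AS country_id,
       CAST(REPLICATE('x', 100) AS VARCHAR(100))               AS country_name
INTO   dbo.AlterTest
FROM       sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

-- Time the exact conversion you plan to run in production:
SET STATISTICS TIME ON;
ALTER TABLE dbo.AlterTest ALTER COLUMN country_id BIGINT;
SET STATISTICS TIME OFF;
```

Besides answering the "does it skip a same-type ALTER?" question directly, this also gives a realistic feel for the duration and log usage you would see live.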

    --Jeff Moden


