Replaced deprecated types (text, ntext) with VARCHAR(MAX) and NVARCHAR(MAX)!

  • In 500 tables, on thousands of columns, in all databases on the entire server, I have

    replaced the deprecated datatypes (text, ntext) with VARCHAR(MAX) and NVARCHAR(MAX).

    To detect them and to generate the modification scripts, please help yourselves to the attached SP that Victor and I wrote, named asp_Get_DeprecatedDatatypes_Columns_withgeneratedmodifyDDL.

    /* Usage examples are in the comment header; you can compile and run it in any database. */

    It goes without saying that I anticipate some issues popping up. Maybe transformation tasks will fail in some SSIS packages,

    plus some other operations in T-SQL and in other (external application) code, too.

    Have you experienced any such issues after replacing the deprecated types (which will not be supported by future versions of SQL Server) with the new datatypes?

    Please share what kind of issues, if any.
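
    For anyone who can't see the attachment, the core of the detection idea is a catalog query along these lines (a minimal sketch only, not the actual SP):

    --===== Minimal sketch: find deprecated LOB columns and generate the ALTER statements.
    SELECT  SchemaName = s.name
          , TableName  = t.name
          , ColumnName = c.name
          , OldType    = ty.name
          , ModifyDDL  = 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
                       + ' ALTER COLUMN ' + QUOTENAME(c.name)
                       + CASE ty.name WHEN 'text'  THEN ' VARCHAR(MAX)'
                                      WHEN 'ntext' THEN ' NVARCHAR(MAX)'
                         END
                       + CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END
       FROM sys.columns c
       JOIN sys.tables  t  ON t.object_id     = c.object_id
       JOIN sys.schemas s  ON s.schema_id     = t.schema_id
       JOIN sys.types   ty ON ty.user_type_id = c.user_type_id
      WHERE ty.name IN ('text','ntext')
    ;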

    Likes to play Chess

  • First, nothing has been attached that I can see.

    Second, if you did this without understanding that all the stuff that might fit "in-row" is now stored "in-row", which greatly expands the size of the Leaf Level of ALL your clustered indexes, you're going to be in for a huge surprise as to how slow your scans have gotten, how much extra memory you're going to use because the scans now load all the new "in-row" stuff into memory, how quickly your indexes become fragmented because of the inevitable "ExpAnsive Updates" that will now occur "in-row", and how much more memory and disk space the inevitable "Trapped Short Rows" cost you.

    This is all because the old datatypes used to default to be "Out-of-Row" and all the new datatypes default to "in-row".  Microsoft didn't document that "little" nuance so well. 🙁
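
    You can check which tables already store large value types out of row with a quick catalog query (a sketch; sys.tables exposes the option directly):

    --===== Sketch: which tables currently have the out-of-row option set?
    SELECT  SchemaName = s.name
          , TableName  = t.name
          , t.large_value_types_out_of_row -- 1 = out-of-row, 0 = in-row (the MAX default)
       FROM sys.tables  t
       JOIN sys.schemas s ON s.schema_id = t.schema_id
      ORDER BY s.name, t.name
    ;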

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • As Jeff states, there are so many things that prevent you from doing a simple ALTER TABLE ALTER COLUMN.

    I have recently put my sp_AlterColumn on GitHub. I have used it since before 2012; in 2012 I decided to merge what had previously been three different solutions into a single one.

    It detects all kinds of obstacles, such as datatype defaults/rules, column defaults, check constraints, indexes, and foreign keys, among other things.

    sp_AlterColumn produces a list of executable statements and decides the order in which they should be executed. A rough sketch of that kind of obstacle detection follows below.
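
    As a rough illustration of what that obstacle detection involves (a sketch only, with hypothetical dbo.MyTable / MyColumn placeholders; sp_AlterColumn itself covers far more cases):

    --===== Sketch: find constraints, indexes, and foreign keys that block ALTER COLUMN.
    DECLARE @Table  sysname = N'dbo.MyTable'   -- hypothetical placeholders
          , @Column sysname = N'MyColumn';

    SELECT  ObjectName = dc.name, Obstacle = 'DEFAULT constraint'
       FROM sys.default_constraints dc
      WHERE dc.parent_object_id = OBJECT_ID(@Table)
        AND dc.parent_column_id = COLUMNPROPERTY(OBJECT_ID(@Table), @Column, 'ColumnId')
     UNION ALL
     SELECT cc.name, 'CHECK constraint'        -- heuristic match on the definition text
       FROM sys.check_constraints cc
      WHERE cc.parent_object_id = OBJECT_ID(@Table)
        AND cc.definition LIKE '%' + QUOTENAME(@Column) + '%'
     UNION ALL
     SELECT i.name, 'Index'
       FROM sys.index_columns ic
       JOIN sys.indexes i ON  i.object_id = ic.object_id
                          AND i.index_id  = ic.index_id
      WHERE ic.object_id = OBJECT_ID(@Table)
        AND ic.column_id = COLUMNPROPERTY(OBJECT_ID(@Table), @Column, 'ColumnId')
     UNION ALL
     SELECT fk.name, 'Foreign key'
       FROM sys.foreign_key_columns fkc
       JOIN sys.foreign_keys fk ON fk.object_id = fkc.constraint_object_id
      WHERE fkc.parent_object_id = OBJECT_ID(@Table)
        AND fkc.parent_column_id = COLUMNPROPERTY(OBJECT_ID(@Table), @Column, 'ColumnId')
    ;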

     


    N 56°04'39.16"
    E 12°55'05.25"

    Not sure why the attachment did not attach (it was a regular text file, .txt). Maybe the file name was too long.

    Attaching the SP again, renamed to a shorter name. I find it very useful.


    Likes to play Chess

  • Jeff,

    so what do you suggest, then?

    (RE: //This is all because the old datatypes used to default to be "Out-of-Row" and all the new datatypes default to "in-row". Microsoft didn't document that "little" nuance so well.//)

    The deprecated types must be replaced, right? Keeping them is not an option when migrating to a higher version, or is it?

    Likes to play Chess

  • Yes... the deprecated types should be changed to the modern types... or at least according to MS.  So here's how to fix some of the mistakes MS made when creating the MAX (and other LOB) datatypes.

    You obviously know how to find the MAX datatypes so I don't need to show you that.

    You also know how to use dynamic SQL to fill in the proverbial blanks.

    Here's what you need to do, described as generic code that will need schema, table, column, and clustered index name changes.  Read the comments carefully.  If you have any questions, ask before you even think of running the code.  You should also incorporate the pieces into the code you posted.

    To summarize, this will move the MAX datatype columns you identify to "Out-of-Row", which I'll say is what you need to do about 99% of the time whether they've been converted from the old LOB types to the new MAX types or not.

    /****************************************************************************************
    Before you start any of this for real, and IF the database has no form of replication or
    AG or anything else that relies on staying in the FULL Recovery Model, you can take a DIF
    backup (to be safe), slip into the BULK LOGGED Recovery Model, do all the work, which
    includes some index REBUILDs, go back to the FULL Recovery Model, and take another DIF
    backup. This will make your index rebuilds run nearly twice as fast and keep your log
    file from growing massively (thanks to "Minimal Logging").

    Remember that you will need to do this for each table you have a MAX datatype in, and
    you need to change all occurrences of the following to the correct names for the table
    you're working on...

    tableschemaname.tablename - The schema and table name being worked on.
    columnname - The column name with the MAX datatype you're working on.
    clusteredindexname - The name of the clustered index on the table you're working on.
    - If it's a HEAP, then you'll need to change the code to REBUILD
    the HEAP instead of the clustered index.
    ****************************************************************************************/


    --===== Set the "Out of Row" table option.
    -- This makes any new values in the MAX columns automatically go out-of-row
    -- but it won't move any existing data. We'll get to that in a minute.
    EXEC sp_tableoption 'tableschemaname.tablename', 'large value types out of row', 1
    ;
    --===== Redefine each MAX column to have a single space default,
    -- which will add the 16 byte pointer to new rows even if the MAX
    -- column isn't used so that there will be no "ExpAnsive UPDATEs"
    -- when someone does add data to the column.
    -- Note also that you will have to check code to see if it relies on
    -- NULL values for search criteria and NOT do this if it does.
    ALTER TABLE tableschemaname.tablename
    ADD CONSTRAINT DF_columnname_As_Space DEFAULT ' ' FOR columnname
    ;
    --===== Update the existing data to get it to move "Out-of-Row".
    -- The ISNULL changes the existing NULL rows to the default.
    -- If you have more than one MAX column in the same table,
    -- then add all of those columns to the SET directive in the
    -- code below.
    -- Note, again, that you will have to check code to see if it
    -- relies on NULL values for search criteria. If it does, then leave
    -- out the ISNULL on those columns.
    UPDATE tableschemaname.tablename
    SET columnname = ISNULL(columnname,' ')
    ;
    --===== Rebuild the index with whatever the existing FILL FACTOR is
    -- to recover the space left behind by the now Out-of-Row LOBS.
    -- Do NOT use REORGANIZE for this... it will cause your Log File
    -- to explode on large tables and still not do as good a job at
    -- the space recovery, not to mention the huge amount of time it
    -- will take. If doing a REBUILD with the ONLINE option is available,
    -- I recommend NOT using the ONLINE option for similar reasons.
    ALTER INDEX clusteredindexname ON tableschemaname.tablename REBUILD
    ;

    An alternate way of doing this would be to make a copy of the table with the changes needed to force the MAX type columns out of row, then copy the data into the new table and move your indexes, constraints, etc., to the new table.  That would make it so that Minimal Logging is possible even for the data, but it's also frequently overkill.  As with all else in SQL Server, "It Depends".
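
    For reference, a bare-bones sketch of that copy-table approach (hypothetical dbo.MyTable(Id, Notes) names; you'd still need to script out all the remaining keys, indexes, triggers, and permissions yourself):

    --===== Sketch: rebuild the table with the MAX column forced out-of-row.
    CREATE TABLE dbo.MyTable_New
            (
            Id    INT          NOT NULL PRIMARY KEY CLUSTERED
           ,Notes VARCHAR(MAX)     NULL CONSTRAINT DF_MyTable_New_Notes DEFAULT ' '
            )
    ;
    --===== Set out-of-row BEFORE loading so the copied LOBs land out-of-row.
    EXEC sp_tableoption 'dbo.MyTable_New', 'large value types out of row', 1
    ;
    --===== TABLOCK allows Minimal Logging for the load (Recovery Model permitting).
    INSERT INTO dbo.MyTable_New WITH (TABLOCK) (Id, Notes)
     SELECT Id, ISNULL(Notes,' ')
       FROM dbo.MyTable
    ;
    --===== Recreate anything else the old table had, then swap the names.
    EXEC sp_rename 'dbo.MyTable',     'MyTable_Old';
    EXEC sp_rename 'dbo.MyTable_New', 'MyTable';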

    Or not... I'm basing the slowness and extra resource usage on what I found to be true for the databases I work with.  I have non-clustered indexes that are larger than some people's entire databases.  You may not have to do any of the "move the existing data to Out-of-Row" thing if your baseline hasn't changed much (although it would be a good thing to do to handle future scalability).  You may not have to do a thing or you might be able to get by with just setting the table option.  So check before you commit to doing the extra work on this one. 
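
    One way to check that baseline before and after (a sketch with a hypothetical table name; avg_page_space_used_in_percent is NULL unless you use the SAMPLED or DETAILED mode):

    --===== Sketch: baseline page density and fragmentation for one table.
    SELECT  index_id
          , index_level
          , avg_fragmentation_in_percent
          , avg_page_space_used_in_percent
       FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'SAMPLED')
    ;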

    The cool part about this is that even if you've not caused a performance/page density issue with your original conversion, you might still want to do this and even include some of the shorter columns (like a Modified_By column) to seriously help your database become more fragmentation resistant by getting these big columns that are subject to "ExpAnsive Updates" out of the Clustered Index.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
