Slow performance with ALTER COLUMN

  • I am working for a company that has a third-party vendor application.

    The table that has the ALTER COLUMN run against it is a 300-column heap table (import only).

    The vendor's application issues a number of ALTER TABLE ALTER COLUMN statements (sometimes for 50-60 separate columns) for each import. The change is to the data type's precision and scale, e.g. DECIMAL(18,2) to DECIMAL(30,10), as illustrated below.

    These seem to take a long time to execute, especially when the table has around 4GB of data in it.

    Is it possible that the problem is caused by the data requiring extra pages, so that the engine is spooling the existing table into tempdb and back again once the space has been allocated, or is it a metadata-only operation?
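    For illustration, the pattern looks something like this (the table and column names here are hypothetical):

        -- One statement per column; 50-60 of these are issued per import
        ALTER TABLE dbo.ImportStage ALTER COLUMN Amount01 DECIMAL(30,10) NULL;
        ALTER TABLE dbo.ImportStage ALTER COLUMN Amount02 DECIMAL(30,10) NULL;
        -- ...and so on for each column whose precision/scale changes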


  • Jason-299789 (5/29/2014)

    Not really answering your question, but surely altering the table with data in it is liable to fail if any column ever shrinks?

    If this is an 'import table', I would have expected it to be empty immediately preceding each import ...

    But I guess the vendor has reasons.

    If the columns are only ever getting bigger, why not make them all huge right now and do away with the ALTER issue?
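    For example, you could generate a one-off widening script from the catalog views; the table name dbo.ImportStage and the DECIMAL(38,10) target below are placeholders:

        -- Build one ALTER per DECIMAL column, widening each once and for all
        SELECT 'ALTER TABLE dbo.ImportStage ALTER COLUMN ' + QUOTENAME(c.name)
             + ' DECIMAL(38,10)'
             + CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END + ';'
        FROM sys.columns AS c
        JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
        WHERE c.object_id = OBJECT_ID('dbo.ImportStage')
          AND t.name = 'decimal';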


  • Thanks for the reply, Phil. I would agree about setting them to a maximum size and leaving them, but we have no control over the database. It seems the third-party application has a number of templates for import from Excel/CSV, and these vary depending on the supplier of the original template, hence the changes to the columns.

    I personally wouldn't have developed it that way: I would process the data after each load, then truncate the table and start on the next import format.

    From what I've read, DECIMAL is a fixed-length data type, which makes sense, so increasing a column's size is a size-of-data operation rather than a metadata-only change: there is likely to be a lot of disk I/O, as the data after the column has to be shuffled (see the sketch below). On the decrease side, a scan is performed to ensure that the existing data will fit into the new precision and scale.

    It doesn't help that 40+ columns can change, and each one is run as a separate ALTER TABLE ALTER COLUMN statement.
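    As a quick illustration of why this particular widening can't be metadata-only (a sketch; DECIMAL storage grows in fixed steps with precision):

        -- DECIMAL(18,2) is stored in 9 bytes (precision 10-19), while
        -- DECIMAL(30,10) needs 17 bytes (precision 29-38), so the change
        -- crosses a storage boundary and every existing row must be rewritten.
        DECLARE @a DECIMAL(18,2) = 1.23, @b DECIMAL(30,10) = 1.23;
        SELECT DATALENGTH(@a) AS bytes_18_2,   -- 9
               DATALENGTH(@b) AS bytes_30_10;  -- 17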


  • Jason-299789 (5/29/2014)

    It seems that you have no control whatsoever over the schema of this beast. So maybe instead you can do something about the data which is in it at the time that the ALTERs get executed?


  • Thanks, Phil. I think that might have to be the solution; I'll have a chat with the manager to check the process they follow when importing data and see whether it can be amended to make it more efficient.

    Ultimately the import application should truncate the table before each load and then run the scripts, so that's another avenue to explore. A sketch of that sequence is below.
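    Something along these lines, with placeholder names:

        TRUNCATE TABLE dbo.ImportStage;  -- empty the heap first

        -- With zero rows there is nothing to rewrite, so each ALTER completes
        -- almost instantly regardless of the size of the previous import
        ALTER TABLE dbo.ImportStage ALTER COLUMN Amount01 DECIMAL(30,10) NULL;
        -- ...remaining column changes, then bulk load the new file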


