Estimating time for schema expansion

  • I have a schema with 9 fields originally set at char(32) that will be expanded to char(64).

    I'm trying to figure out which databases this expansion will take the longest on, so I can flag the high-risk ones.

    Which factors make a difference?

    * The number of records in the database

    * The number of records with data in the 9 fields

    For example, if I have a million-record database but only a thousand rows have data in the 9 fields, would updating this schema be faster than on a 500K-record database where 200K rows have data in those fields? Or does the content of the fields not matter, and is it just the size of the database that will be the factor?
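
    One way to gauge this up front is to pull the row count and reserved size of the table in each database and rank them. Below is a minimal sketch, assuming SQL Server; dbo.MyWideTable is a placeholder for the real table name, and the query would be run in each database (or wrapped in a loop across them):

    ```sql
    -- Placeholder object name; row count and reserved space are a reasonable
    -- first cut at ranking which databases are high risk for the rewrite.
    SELECT
        t.name                                 AS table_name,
        SUM(ps.row_count)                      AS total_rows,
        SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb   -- 8 KB pages
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables AS t
        ON t.object_id = ps.object_id
    WHERE t.name = N'MyWideTable'      -- placeholder table name
      AND ps.index_id IN (0, 1)        -- heap or clustered index only
    GROUP BY t.name;
    ```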

  • In reply to Mindy Hreczuck (2/3/2015):



    Quick thought: apart from hardware factors such as CPU and I/O performance, there are quite a few other factors that can and will affect the outcome. That aside, the number of rows holding values in those columns is irrelevant because the columns are fixed width/length; a char column reserves its full width in every row whether or not it contains data, so the total row count and overall table size are what drive the duration.
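
    For the change itself, here is a minimal sketch of the widening, again assuming SQL Server; dbo.MyWideTable and field1 are placeholders, and the same statement would be repeated for each of the 9 columns:

    ```sql
    -- Placeholder names. Widening a fixed-length char column rewrites every row,
    -- populated or not, so the duration tracks total row count / table size
    -- rather than how many rows actually contain data in the column.
    ALTER TABLE dbo.MyWideTable
        ALTER COLUMN field1 char(64) NOT NULL;  -- state NULL/NOT NULL explicitly
                                                -- to keep the existing nullability
    ```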

    😎
