Decimal(18,0) or int?

  • Hi

    I have a column of data type decimal(18,0). Can I change it to int?

    In that case, would performance improve if I changed it?

    thanks

    with best regards

    Pradeep

  • mpradeep23 (5/25/2013)


    Hi

    I have a column of data type decimal(18,0). Can I change it to int?

    In that case, would performance improve if I changed it?

    thanks

    with best regards

    Pradeep

    First, it depends on the data in the column. If any of the data exceeds the max or min values for an INT value, then no.

    Second, is this column used in a foreign key relating to columns in other tables that are also defined as decimal(18,0)? If so, you will want to convert those as well to prevent data type mismatches and implicit conversions in queries.

    Third, why do you want to change this column from decimal(18,0) to int? An int value uses 5 bytes less storage (decimal(18,0) needs 9 bytes while an int uses 4), but is that really enough of a reason for the change?
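
    A minimal sketch of the first check, assuming a hypothetical table dbo.Orders with a column Amount defined as decimal(18,0) (the names are illustrative, not from this thread):

        -- Hypothetical table/column names; adjust to the real schema.
        -- Check whether any existing values fall outside the int range
        -- (-2,147,483,648 to 2,147,483,647) before attempting the change.
        SELECT COUNT(*) AS out_of_range_rows
        FROM dbo.Orders
        WHERE Amount NOT BETWEEN -2147483648 AND 2147483647;

        -- If the count is 0, and no foreign keys or indexes block it,
        -- the column can be altered in place (match the existing nullability):
        ALTER TABLE dbo.Orders ALTER COLUMN Amount int NOT NULL;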

    Decimal is 9 bytes and int is 5 bytes, so it may reduce the size; that is the case I am thinking of.

  • mpradeep23 (5/25/2013)


    Decimal is 9 bytes and int is 5 bytes, so it may reduce the size; that is the case I am thinking of.

    Is the table so big that a 5-byte size reduction per row is worth the work and time required to make the change?

    Int is 4 bytes, not 5.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
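
    The storage sizes are easy to verify directly; DATALENGTH returns the number of bytes a value occupies:

        -- decimal with precision 10 to 19 is stored in 9 bytes; int in 4.
        SELECT DATALENGTH(CAST(1 AS decimal(18,0))) AS decimal_bytes,  -- 9
               DATALENGTH(CAST(1 AS int))           AS int_bytes;      -- 4
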
  • mpradeep23 (5/25/2013)


    Decimal is 9 bytes and int is 5 bytes, so it may reduce the size; that is the case I am thinking of.

    This doesn't really answer the questions I had posed.

    Based solely on this response, I would be hard pressed to recommend making the change.

    Wouldn't it also cause all values to round down, e.g. 5.5 to 5, 6.1 to 6, etc.?

    There's no kill switch on awesome!
  • Loundy (5/25/2013)


    Wouldn't it also cause all values to round down, e.g. 5.5 to 5, 6.1 to 6, etc.?

    Decimal(18,0) doesn't allow values like 5.5 or 6.1, only whole numbers because the scale is 0.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • maybe I should learn to read 🙂

    There's no kill switch on awesome!
    Currently the table has no data, but in future it may get 1 million records per day. In that case, if I change decimal(18,0) to int, does that improve performance and storage space?

    If there are any other issues with changing from decimal to int, please let me know.

    thanks

  • mpradeep23 (5/25/2013)


    Currently the table has no data, but in future it may get 1 million records per day. In that case, if I change decimal(18,0) to int, does that improve performance and storage space?

    If there are any other issues with changing from decimal to int, please let me know.

    thanks

    You still have not answered the questions I posted. We really can't provide you with good answers without knowing more about what you are doing and why you think you need to change the data type.

    If the table's empty, then yes, you can probably change the data type. You will have to drop any indexes that include the column first and recreate them afterwards.

    Improve performance, probably not.

    Save space, if the table gets around a million rows a day, then it's around 5MB less data a day, 150MB less a month. Pretty trivial amount of space.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
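
    A minimal sketch of that sequence, reusing the hypothetical dbo.Orders / Amount names from above plus an assumed index name; adjust to the real schema:

        -- Assumed index name; drop every index that includes the column.
        DROP INDEX IX_Orders_Amount ON dbo.Orders;

        -- The table is empty, so no range check is needed.
        ALTER TABLE dbo.Orders ALTER COLUMN Amount int NOT NULL;

        -- Recreate the index on the new type.
        CREATE INDEX IX_Orders_Amount ON dbo.Orders (Amount);

        -- The space arithmetic from the post:
        -- 1,000,000 rows/day * 5 bytes saved = ~5 MB/day, ~150 MB/month.
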
  • Your proposed change will definitely improve performance (if the table isn't empty), because:

    1. There is less work to do with the table:

    - less space means fewer I/O operations (for all CRUD)

    - if the whole table can be read into memory, the database can operate on it with better performance (the database tries to do this anyway, because memory is much faster than disk)

    - even the execution plan of some queries can be different (faster) if the database knows the table fits into memory, which it may if the table (its row size) is smaller

    2. The same is true for any index created on the column. The more indexes exist on the column, the greater the effect. As Lynn says, in the case of a referenced column, even the referencing columns should be reduced, which will bring an even bigger boost in the other tables.

    3. Pure mathematical operations are faster on an int, which maps to a native CPU type, than on the software-implemented numeric(18, 0).

    4. Backing up a smaller amount of data is faster, but this would not be a reason to change for me.

    My questions would be:

    a) would your numeric(18, 0) values fit into an int (roughly the range of numeric(9, 0))? Should you consider bigint?

    b) how many columns are on the table?

    c) have you tested this change on your development database? Are the (preliminary) results promising? (A minimal test sketch follows below.)
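
    For question c), a minimal way to compare the two layouts on a development box is to load the same data into two test tables and compare sp_spaceused. All names here are assumptions for illustration:

        -- Two identical tables differing only in the key column's type.
        CREATE TABLE dbo.TestDec (id decimal(18,0) NOT NULL, filler char(100) NOT NULL);
        CREATE TABLE dbo.TestInt (id int           NOT NULL, filler char(100) NOT NULL);

        -- Load a representative volume, e.g. one day's worth of rows.
        INSERT INTO dbo.TestDec (id, filler)
        SELECT TOP (1000000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
        FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

        INSERT INTO dbo.TestInt (id, filler)
        SELECT id, filler FROM dbo.TestDec;

        -- Compare reserved and data sizes for the two tables.
        EXEC sp_spaceused 'dbo.TestDec';
        EXEC sp_spaceused 'dbo.TestInt';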

  • ivan.peter (5/27/2013)


    - even the execution plan of some queries can be different (faster) if the database knows the table fits into memory

    The optimiser never considers whether a table is in memory or not, or can fit in memory or not as part of its optimisation.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (5/27/2013)


    ivan.peter (5/27/2013)


    - even the execution plan of some queries can be different (faster) if the database knows the table fits into memory

    The optimiser never considers whether a table is in memory or not, or can fit in memory or not as part of its optimisation.

    Have you read about In-Memory Hash Join (or this link)?

  • ivan.peter (5/27/2013)


    Have you read about In-Memory Hash Join (or this link)?

    Yes, the MSDN entry several times, as well as blog posts by members of the dev team. I've written a few articles on joins too.

    Whether or not a hash table for a hash join fits into memory has nothing to do with whether the source table for the query was in the buffer pool before the query started execution.

    The optimiser does not consider whether or not the data for the query is in the buffer pool before the query starts. In fact, it assumes that none of the data required for the query is in the buffer pool and that all IOs will be physical.

    The optimiser also doesn't consider whether a hash join will be in-memory or will spill (as mentioned at the end of the MSDN post). That has to do with the memory grant the query gets for execution and the actual number of rows affected. It's the execution engine that decides, based on the rows it actually gets and the memory grant it gets, whether it can perform the hash join in memory or not.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
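
    As an aside, which pages are currently in the buffer pool is runtime state that can be inspected through the DMVs, but the optimiser does not consult it; a sketch of the standard pattern, run in the database of interest:

        -- Count cached data pages (8 KB each) per object in this database.
        SELECT OBJECT_NAME(p.object_id)  AS table_name,
               COUNT(*)                  AS cached_pages
        FROM sys.dm_os_buffer_descriptors AS bd
        JOIN sys.allocation_units AS au
          ON au.allocation_unit_id = bd.allocation_unit_id
        JOIN sys.partitions AS p
          ON p.hobt_id = au.container_id
        WHERE bd.database_id = DB_ID()
          AND au.type IN (1, 3)  -- IN_ROW_DATA, ROW_OVERFLOW_DATA
        GROUP BY p.object_id
        ORDER BY cached_pages DESC;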
