Numeric vs Bigint

  • I want to create a table with an identity primary key column of datatype INT, but I am sure the values in that column will exceed a precision of 10, so I thought of changing the datatype to BIGINT. Some of the developers suggested changing the datatype to NUMERIC instead. When I cross-checked some previous tables, I found that they used NUMERIC for storing data they were sure would exceed a precision of 10. What benefit do I get if I change from BIGINT to NUMERIC, and is there any difference between NUMERIC and BIGINT when used as an identity column?

    Ex:

    CREATE TABLE TEMP (A1 bigint IDENTITY(1,1))
    GO
    DROP TABLE TEMP
    GO
    CREATE TABLE TEMP (A1 numeric IDENTITY(1,1))
    GO
    DROP TABLE TEMP
    GO

  • The default declaration for NUMERIC (also known as DECIMAL) is (18,0), which takes 9 bytes to store. The maximum precision NUMERIC can hold in those same 9 bytes is (19,0), so if you don't declare the precision explicitly, you're giving up a digit for nothing.

    BIGINT is good for almost 19 digits... 9,223,372,036,854,775,807... and takes only 8 bytes.

    The extra byte required by DECIMAL is there to track where the decimal point is. Since you can't use a decimal point in IDENTITY columns, I think I'd go for the BIGINT. Of course, BIGINT isn't ANSI standard, whereas NUMERIC is. I just don't believe in the myth of portable code, so I tend to ignore ANSI Standard SQL and use all the features of whatever language I'm writing in.

    DECIMAL does have the advantage of being able to go out to 38 digits. If you ever get past 15 digits, lemme know.
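
    For illustration, here's a quick sketch you can run to see the difference for yourself (the table and column names are made up):

    -- Compare the bytes actually stored for bigint vs. numeric(19,0)
    CREATE TABLE StorageDemo
    (
        BigIntCol  bigint        NOT NULL,
        NumericCol numeric(19,0) NOT NULL
    )

    INSERT INTO StorageDemo (BigIntCol, NumericCol)
    VALUES (9223372036854775807, 9223372036854775807)

    SELECT DATALENGTH(BigIntCol)  AS BigIntBytes,   -- 8
           DATALENGTH(NumericCol) AS NumericBytes   -- 9
    FROM StorageDemo

    DROP TABLE StorageDemo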

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I'd go with bigint rather than numeric for an ID column. If you somehow find you need more rows than bigint can deal with, you're probably better off with GUIDs anyway.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • From the knowledge I got by reading these two replies, I think there is no harm in using NUMERIC, but BIGINT is always the better choice for an identity column.

  • Will INT not do?

    IDENTITY(-2147483648, 1)

    will give you 4 billion numbers to choose from.
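
    For example, a minimal sketch (table and column names are made up):

    -- Seeding at the lower limit of int doubles the usable range
    CREATE TABLE SeedDemo (ID int IDENTITY(-2147483648, 1), Payload char(1))
    INSERT INTO SeedDemo (Payload) VALUES ('a')
    INSERT INTO SeedDemo (Payload) VALUES ('b')
    SELECT ID FROM SeedDemo  -- returns -2147483648 and -2147483647
    DROP TABLE SeedDemo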


    N 56°04'39.16"
    E 12°55'05.25"

  • Heh... I've gotta agree there... 2 billion is usually more than enough, never mind 4 billion... what is it that you're storing?

    --Jeff Moden



  • Having 4 billion IDs to choose from (-2,147,483,648 to 2,147,483,647), you can insert 10 records per second, every second, around the clock, for the next 13.6 years.
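
    A quick back-of-the-envelope check of that figure:

    -- 2^32 IDs at 10 inserts per second, expressed in years
    SELECT 4294967296.0 / 10 / 3600 / 24 / 365.25 AS YearsToExhaust  -- ~13.6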


    N 56°04'39.16"
    E 12°55'05.25"

  • If you start a load testing batch process in 10 parallel threads and let it run for an hour and a half, you exhaust your number range...

    Even worse: you start the same process by accident, cancel it, and roll it back after an hour. IDENTITY values aren't given back on rollback, so you don't even realize you almost ran out of numbers, and you get the error the next day with no idea why.
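
    To illustrate (table and column names are made up):

    -- Identity values consumed inside a rolled-back transaction are not reused
    CREATE TABLE RollbackDemo (ID int IDENTITY(1,1), Payload char(1))
    BEGIN TRAN
    INSERT INTO RollbackDemo (Payload) VALUES ('a')
    ROLLBACK
    INSERT INTO RollbackDemo (Payload) VALUES ('b')
    SELECT ID FROM RollbackDemo  -- returns 2: the rollback burned ID 1
    DROP TABLE RollbackDemo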

    Just my 2 cents.

  • Which raises the question of why you would run a load-testing batch process on a production database. (Running it on a test database wouldn't use up any IDs at all in the production database.)

    If you plan on running that kind of load, use GUIDs, not numeric IDs. That's 16 bytes per row instead of 4 or 8, but you aren't going to run out any time soon no matter how heavy your database traffic is.
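
    A minimal sketch of a GUID key (the table and constraint names are made up; NEWSEQUENTIALID() needs SQL Server 2005 or later and keeps new values roughly ordered, which is kinder to the clustered index than NEWID()):

    CREATE TABLE GuidDemo
    (
        ID uniqueidentifier NOT NULL
            CONSTRAINT DF_GuidDemo_ID DEFAULT NEWSEQUENTIALID()
            CONSTRAINT PK_GuidDemo PRIMARY KEY,
        Payload char(1) NULL
    )
    INSERT INTO GuidDemo (Payload) VALUES ('x')
    DROP TABLE GuidDemo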

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC

  • Beat me to it, Gus...

    --Jeff Moden



  • grevesz (6/9/2008)


    If you start a load testing batch process in 10 parallel threads and let it run for an hour and a half, you exhaust your number range...

    You have an application that can insert 79,536 records per second per thread? (4,294,967,296 IDs / 10 threads / 5,400 seconds ≈ 79,536.)


    N 56°04'39.16"
    E 12°55'05.25"

  • From what I understand, NUMERIC is stored as base-10 digits and its math is emulated in software, while BIGINT/INT are native binary types the CPU works on directly. Similar to the difference between a string and a char.

    Make a BIGINT and seed it at IDENTITY(-9223372036854775808, 1).

    If you were inserting 1,000,000,000 (one billion) records per second, it'd take 584.94 years to use up your range, and on a 64-bit computer it takes ~3 clock cycles to load and compare two 64-bit numbers.

    Actually, CPUs have a 64-bit register that they use to count clock cycles since the CPU was reset. Assuming a 3.4 GHz CPU, it'd take 172.04 years of running 24/7 before that counter would overflow back to 0.
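
    A minimal sketch of that seeding (table and column names are made up):

    -- bigint identity seeded at the type's lower limit;
    -- 2^64 values / 1,000,000,000 inserts per second ≈ 584.94 years to exhaust
    CREATE TABLE BigSeedDemo (ID bigint IDENTITY(-9223372036854775808, 1), Payload char(1))
    INSERT INTO BigSeedDemo (Payload) VALUES ('a')
    SELECT ID FROM BigSeedDemo  -- returns -9223372036854775808
    DROP TABLE BigSeedDemo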

  • Peso (6/10/2008)


    grevesz (6/9/2008)


    If you start a load testing batch process in 10 parallel threads and let it run for an hour and a half, you exhaust your number range...

    You have an application that can insert 79,536 records per second?

    If nothing else, you would wind up with severe contention and allocation issues that would prevent that level of activity, I bet. That is just a staggering number of inserts per second at steady state!

    I am with the others here: if you need more than 4B values, use bigint and seed it at the lower limit.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (6/11/2008)


    Peso (6/10/2008)


    grevesz (6/9/2008)


    If you start a load testing batch process in 10 parallel threads and let it run for an hour and a half, you exhaust your number range...

    You have an application that can insert 79,536 records per second?

    If nothing else, you would wind up with severe contention and allocation issues that would prevent that level of activity, I bet. That is just a staggering number of inserts per second at steady state!

    I am with the others here: if you need more than 4B values, use bigint and seed it at the lower limit.

    I have seen cases in which the app "RESERVES" several million IDs that may or may not actually get inserted in the DB; you could potentially reach the limits easily there. There are also "legacy" reasons to avoid negative numbers, which can break stuff that was not coded to handle them... All in all, if you have the choice to start fresh, INT seems OK, but if you are dealing with special scenarios you can exhaust it; that's why we have BIGINT, right? 😉


    * Noel
