TinyInt, SmallInt, Int and BigInt

  • Comments posted here are about the content posted at temp

  • Nice article, David, as usual...

    As you are aware, I suspect that any performance differences will only occur where joins of different data types occur due to implicit conversions... but I'm with you... size the columns correctly if for nothing else than to reduce the size of backups (the OTHER thing that designers always forget about).

    --Jeff Moden

    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • What happened to TinyInt?  Was it too small and thus overlooked?  It wasn't even mentioned in the article even though it was included in the tag line.

  • I like the article, but I think JOIN performance is important, too.

    In my database, I have a table with 14 foreign keys, so I have a view that joins this table with the 14 other tables. Maybe the performance difference between these types would show up there.

    So I always try to use the smallest type possible.


  • I think the point you are missing about the different datatypes is that you should use the smallest datatype that makes sense in order to keep your row size to a minimum. The smaller your row size, the more rows you can fit on a page, which definitely does help performance.

    See http://www.sql-server-performance.com/nb_saving_space.asp for more info
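    The rows-per-page arithmetic above can be sketched with a rough back-of-the-envelope calculation. The 8,060-byte usable space per 8 KB data page is real for SQL Server, but the flat per-row overhead below is a simplified assumption, not the exact storage-engine layout:

    ```python
    # Rough illustration (not exact SQL Server internals): fewer bytes per
    # row means more rows fit on an 8 KB data page, so fewer pages to read.
    PAGE_BYTES = 8060  # approximate usable space on a SQL Server data page

    def rows_per_page(fixed_row_bytes, overhead=9):
        # 'overhead' is a rough allowance for the row header and null
        # bitmap; real rows also cost slot-array entries, etc.
        return PAGE_BYTES // (fixed_row_bytes + overhead)

    # A four-column key table: BIGINT keys vs INT keys
    print(rows_per_page(4 * 8))   # four BIGINT columns -> fewer rows/page
    print(rows_per_page(4 * 4))   # four INT columns -> more rows/page
    ```

    Even with the crude overhead figure, halving the column width noticeably increases rows per page, which is the effect the post describes.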



  • However, I would say plan for the long term. If you know there is a chance you will need the larger size, be prepared so you don't have to reconsider the decision later. And as for the minus sign: I have used the following to deal with printouts so I could still use the negative side of the spectrum.

    A = -

    B = +

    Your application can handle the translation, or it can even be handled in a stored procedure, but the latter means a longer string must be passed instead of the shorter integer value.

    There are many ways to conserve space that are often overlooked, and integer size is no different. Consider the fact that no matter how big or small your data is, the value stored will always occupy the entire length of the datatype. So tinyint will be 1 byte, smallint 2, int 4, and bigint 8; even if the value is 0, it is represented by all the bytes of the datatype.

    Decent article; I would just say it probably needs a little more focus on its point.
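  • The fixed-width point above can be demonstrated outside SQL. A minimal Python sketch, using struct format codes whose widths happen to match tinyint/smallint/int/bigint (note struct's 1-byte code is signed, whereas SQL Server's tinyint is unsigned; only the widths are being compared here):

    ```python
    import struct

    # Fixed-width integer storage: the value 0 still occupies the full
    # width of its type, mirroring tinyint/smallint/int/bigint.
    for fmt, name in [("<b", "tinyint"), ("<h", "smallint"),
                      ("<i", "int"), ("<q", "bigint")]:
        print(name, len(struct.pack(fmt, 0)))  # 1, 2, 4, 8 bytes
    ```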

  • Worthless, in my opinion.

  • Care to formulate a more complete idea?!?!?

  • I wouldn't overlook the "natural" integer size of the processor (i.e. 32-bit). During my numerical analysis classes they always stressed staying with float unless you needed the extra precision of a double. Later, when I got my first 386, I compiled my libraries with floats and doubles. The functions using doubles were actually smaller, because to do a float calculation the internals would promote the float to a double, do the calculation, then demote the answer.
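    The float-versus-double precision trade-off mentioned above is easy to observe. A small Python sketch (Python's float is a C double) showing that forcing a value through 32-bit float storage loses information, which is why the demote step after an internal double-precision calculation matters:

    ```python
    import struct

    x = 0.1  # stored as a 64-bit double in Python
    # Round-trip through a 32-bit float: pack demotes, unpack promotes back
    y = struct.unpack("<f", struct.pack("<f", x))[0]
    print(x == y)      # the demotion changed the value
    print(abs(x - y))  # small, but nonzero, precision loss
    ```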

  • Thanks for the tip.  Is there any link you can provide with information on low-level performance tuning like this?

  • Good article, but needs a spell check.

  • No, I don't have any links handy. You may want to try some "C" performance tuning articles. Unless number range or memory size dictated otherwise, you were generally better off just using the processor's int size to avoid all the extra handling involved with other sizes (promoting, demoting, word alignment, ...). In fact, in C the size of int is not fixed; it is tied to the size of the processor.

  • A long time ago I came across something that said VB always stored integers as 32-bit values regardless of whether they were Byte, Short, or Long (Long in those days being 32-bit), and therefore you might as well use 32-bit integers in any case.

    I believe that this is no longer the case but I haven't got anything to back this up.

    Why didn't I run tests on TINYINT? Because with only 256 possible values I didn't think I could get any meaningful measurements from such a small sample. It would be a bit like trying to measure the time taken for an industrial water pump to empty a teacup.

  • So what?! Just use microseconds instead of milliseconds in the results.

  • By publishing incomplete results, you may mislead readers. I would have preferred you did a more complete analysis of the issue you raised.
