• TheSQLGuru (8/21/2009)


    Just to make clear, it isn't JUST the number of rows you put into a table variable/temp table that matters. It can also be the actual VALUE(s) therein. For example, if you have a table variable with just 1 row, and field1's value is 'abc', and you join that table to another table on field1, you can get horrible performance if the data in the joined table is skewed such that a large percentage of the values are 'abc', because the lack of statistics leads to a nested loop plan. A temp table could have stats on field1, letting the optimizer discover that a hash/merge join would be much more efficient.

    As Jeff Moden says, test it.

    I just ran a series of tests. I'm going to try to put together some more. The basic results were that, regardless of the value or the type of table, temp or variable, I got identical execution plans. However, I'm not saying you're wrong. I did see differences in the estimated row counts, not for the temporary table or the table variable, but for the table they were joined against. The variation wasn't sufficient to result in a different execution plan in the tests I've run so far. I'm going to try some more extreme data sets to see if I can force a difference.
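
    For anyone who wants to run this kind of test themselves, here's a minimal sketch along the lines of the quoted scenario. All the object names and the 90/10 data skew are made up for illustration; the idea is just a one-row table variable and a one-row temp table joined against a table where most of the rows share the joined value.

    -- Illustrative repro only: dbo.SkewedData, field1, and the 90/10 skew
    -- are assumptions for this sketch, not something from the thread.
    IF OBJECT_ID('dbo.SkewedData') IS NOT NULL
        DROP TABLE dbo.SkewedData;

    CREATE TABLE dbo.SkewedData
    (
        id     INT IDENTITY(1, 1) PRIMARY KEY,
        field1 VARCHAR(10) NOT NULL
    );

    -- Roughly 90% of the rows get the value 'abc'.
    INSERT INTO dbo.SkewedData (field1)
    SELECT TOP (100000)
           CASE WHEN ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) % 10 = 0
                THEN 'xyz'
                ELSE 'abc'
           END
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;

    CREATE INDEX IX_SkewedData_field1 ON dbo.SkewedData (field1);

    -- One-row table variable: no column statistics on field1.
    DECLARE @tv TABLE (field1 VARCHAR(10) NOT NULL);
    INSERT INTO @tv (field1) VALUES ('abc');

    SELECT sd.id, sd.field1
    FROM dbo.SkewedData AS sd
    JOIN @tv AS tv
        ON tv.field1 = sd.field1;

    -- One-row temp table: column statistics can be created on field1.
    CREATE TABLE #tt (field1 VARCHAR(10) NOT NULL);
    INSERT INTO #tt (field1) VALUES ('abc');

    SELECT sd.id, sd.field1
    FROM dbo.SkewedData AS sd
    JOIN #tt AS tt
        ON tt.field1 = sd.field1;

    DROP TABLE #tt;

    -- With the actual execution plan turned on, compare the join operator chosen
    -- and the estimated vs. actual rows coming out of dbo.SkewedData for each query.

    The difference to look for comes from statistics: the table variable has no column statistics on field1 (and is typically estimated at one row), while the temp table can get statistics on field1, which is what can change the estimates on the joined table.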

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning