• opc.three (3/6/2013)


    I'll second that. I almost exclusively favor traditional temporary tables over table variables. Performance is one reason, which you may be experiencing first-hand, and ease of use when debugging is another major one. To be fair, there are a couple of scenarios where I will use a table variable over a temporary table, but they are corner cases.

    Here is a great article comparing and contrasting the two classes of object:

    Comparing Table Variables with Temporary Tables by Wayne Sheffield

    The key part of the article @opc.three cited is this, I think:

    1) SQL Server cannot build statistics against a table variable the way it can against a temporary table. This means that the execution plan for a query using a table variable will always treat the table as if it has 1 row. Obviously, this means the optimizer might not choose the best execution plan.
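    You can see this for yourself in SSMS with "Include Actual Execution Plan" turned on. Here's a minimal sketch (the table names are made up for illustration; the 1-row estimate applies to the classic behavior - SQL Server 2017 and earlier, or 2019+ with deferred compilation disabled):

    ```sql
    -- Table variable: no statistics, so the plan is typically
    -- compiled with an estimate of 1 row.
    DECLARE @t TABLE (id INT PRIMARY KEY);
    INSERT INTO @t SELECT object_id FROM sys.objects;

    SELECT o.name
    FROM @t t
    JOIN sys.objects o ON o.object_id = t.id;   -- hover the @t scan: Estimated Rows ~ 1

    -- Temporary table: statistics get created, so the estimate
    -- reflects the actual row count.
    CREATE TABLE #t (id INT PRIMARY KEY);
    INSERT INTO #t SELECT object_id FROM sys.objects;

    SELECT o.name
    FROM #t t
    JOIN sys.objects o ON o.object_id = t.id;   -- Estimated Rows ~ actual count

    DROP TABLE #t;
    ```

    Adding OPTION (RECOMPILE) to the table-variable query gets the optimizer the correct row count, but still no column-level statistics.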

    You have a table variable that has over 600K rows. SQL Server constructs an execution plan for your query based on an estimation of 1 row in that table variable. Operations that are efficient for 1 row may be extremely inefficient for 600K rows. Eliminating the table variables would probably shave a significant amount of time off your 50-second query response, not to mention the effect on the variant with the final INSERT into another table variable.
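    The fix is usually a mechanical swap. A hedged sketch, assuming your table variable is declared something like @Results (the names and columns here are hypothetical - substitute your own):

    ```sql
    -- Before (compiled with a 1-row estimate):
    --   DECLARE @Results TABLE (CustomerID INT, Total MONEY);

    -- After: a temporary table, which gets real statistics.
    CREATE TABLE #Results (CustomerID INT, Total MONEY);

    INSERT INTO #Results (CustomerID, Total)
    SELECT CustomerID, SUM(Amount)
    FROM dbo.Orders            -- hypothetical source table
    GROUP BY CustomerID;

    -- Subsequent joins and aggregations against #Results can now
    -- be planned with accurate row estimates for your 600K rows.

    DROP TABLE #Results;
    ```

    Everything downstream stays the same except the @ becoming a #; temporary tables support the same INSERT/SELECT/JOIN syntax, and in a stored procedure #Results is dropped automatically when the procedure ends.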

    I would venture a guess that adding the final INSERT to another table variable caused SQL Server to make that one additional "bad" choice about the execution plan ("bad" only because it is based on an inaccurate estimate of 1 row) that tips the whole thing over into "forever" response times. Please post a reply to let us know how you resolve the issue - it's nice to see our suggestions work for someone, and even when they don't, we may learn something, too!

    Jason Wolfkill