I may need to re-think the use of table variables

  • Comments posted to this topic are about the item I may need to re-think the use of table variables

  • I don't really see the point of table variables.
    How are they an improvement on temporary tables? Is the absence of CREATE TABLE (or SELECT INTO, if you're that way inclined) and DROP TABLE really a clinching argument?

  • One of the benefits of table variables is that they don't cause recompiles - whereas adding data to a temp table can. This is the crux of the issue: without a recompile there is no way of knowing how many rows are in the table (hence bad statistics), but saving the overhead of the recompile can be a performance boost. This is why, when they were introduced in SQL 2000, many people flocked to them in droves to 'improve' the speed of procedures that were using temp tables. It's more likely that statement-level recompilation in SQL 2005 had the better effect.
    So, as ever, you need to weigh up all the options.

    Another benefit is that they survive transactions - so if you are logging some progress and then have to roll back, the table variable is unaffected (as are all variables). There's a minimal sketch of this below.
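
    A minimal sketch of that rollback behaviour (the @log/#temp_log names are made up for illustration):

        -- Table variables are not affected by a transaction rollback,
        -- so their contents survive where a temp table's do not.
        CREATE TABLE #temp_log (step varchar(50));
        DECLARE @log TABLE (step varchar(50));

        BEGIN TRAN;
            INSERT INTO @log      VALUES ('step 1');
            INSERT INTO #temp_log VALUES ('step 1');
        ROLLBACK TRAN;

        SELECT COUNT(*) AS var_rows  FROM @log;      -- 1: insert survives the rollback
        SELECT COUNT(*) AS temp_rows FROM #temp_log; -- 0: insert was rolled back
        DROP TABLE #temp_log;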

  • If you understand the limitations of table variables then you have a tool you understand. You will understand when they will cause problems and when they won't. I don't shy away from chisels because I once cut myself trying to use one as a flat-blade screwdriver; I now know what chisels are for and that they are useful, nay essential, in certain situations.

  • As a rule of thumb I use table variables like scrap paper: write a few lines, don't depend on it, and use it for a small set of data only.
    Say less than 10,000 rows of data with no real indexing needed - then a table variable will do the job just fine.

    Anything above that needs a properly bound notepad - ehm, temporary table. It might look a bit rough around the edges in its naming scheme, with only the absolutely necessary indexes (see the sketch below).
    And a table is like a book: properly bound with a good cover and back, it has one or more indexes (whereas the book has indices), and much thought has gone into the naming (grammar rules).
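
    A quick sketch of that indexing difference (the @small/#large names are just illustrative):

        -- Table variable: small set; indexing is limited to what can be
        -- declared inline (e.g. a PRIMARY KEY or UNIQUE constraint)
        DECLARE @small TABLE (id int PRIMARY KEY, val varchar(20));

        -- Temp table: larger set; indexes can be created explicitly as needed
        CREATE TABLE #large (id int NOT NULL, val varchar(20));
        CREATE CLUSTERED INDEX cx_large ON #large (id);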

  • Wayne Sheffield posted a good article here on SSC
    Comparing Table Variables with Temporary Tables

  • Thanks Ben, glad you liked the post!

  • This article is just a few days late (how inconsiderate of you). Last week I inherited a monster stored proc with major performance problems (SQL 2012). After a few wasted hours of analysis, a coworker suggested replacing the table variables with temp tables. Run time went from 11.5 minutes to 6 minutes! And we're not talking millions of rows in the table variables; more like tens of thousands. Lesson learned. Will revisit when we're on SQL 2019. Thanks for a useful article!
  • When articles like this come up (i.e. "this feature sucks at scale") I shake my head.

    First, why weren't they testing the feature at scale?

    But more importantly, it brings up the KISS principle yet again. All these niche features might solve one (minuscule) class of problem, but there are so many of these little specialty features! As a result, SQL Server is becoming more bloated all the time, with a large chunk of features that are more death trap than savior, performance-wise.

    At the same time, the basic engine (the old CRUD statements) tends to have a lot of attention poured on it, meaning it tends to be blazingly fast. It's the same old story: developers want to write flashy new features that are interesting and fun, and tend not to want to polish the old stuff.

    The result is a big messy glob of features that half-work, piled on top of the core engine.

    I have always said a few well-chosen tools are worth a toolbox full of junk. Keep it simple. Avoid flashy new gimmicks. I mean, a near-50% drop in run time simply from shifting from table variables to temp tables? Really?

    A lot of SQL Server features are like that, half-baked or incredibly narrow in scope. The SQL Server team needs to focus on elegance, not a laundry list.

  • I only use table variables in stored procedures that I know will query only a relatively small number of rows - for example, a procedure that gets called by a high-traffic web application or call center application. With few exceptions, they have no place in the realm of reporting or ETL, where a temp table is the more appropriate choice.

    The new cardinality estimator for table variables is an improvement, but perhaps Microsoft could also extend the OPTIMIZE FOR query hint, allowing us to override the default cardinality assumption of 1 row for table variables (i.e. optimize for X number of rows). Until then, OPTION (RECOMPILE) is the usual workaround - see the sketch below.
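
    A minimal sketch of that workaround, assuming a made-up dbo.Orders table:

        -- Without a hint, the optimizer assumes @ids holds 1 row (pre-2019).
        -- OPTION (RECOMPILE) compiles the statement after the variable is
        -- populated, so the actual row count is used instead.
        DECLARE @ids TABLE (id int PRIMARY KEY);

        INSERT INTO @ids
        SELECT OrderID FROM dbo.Orders WHERE OrderDate >= '20190101';

        SELECT o.*
        FROM dbo.Orders AS o
        JOIN @ids AS i ON i.id = o.OrderID
        OPTION (RECOMPILE);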

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • This could be a good research editorial - maybe write something similar for user-defined table types?

  • I wasn't aware of table variables until I read your article, Ben. So, no, I've never used them. I've used either temp tables or CTEs, but only when needed. We've got a third-party app with its own SQL Server database, and that database uses CTEs a lot. But we're under strict orders to leave their SQL scripts, stored procs, etc., alone. This year we're finally getting SQL 2016 installed on some of our database servers. We've got a lot of SQL databases and servers, so this is taking a couple of years to implement. Most of our SQL databases are at 2012 or older. I'm guessing we won't upgrade to SQL 2019 for another two to three years.

    Kindest Regards, Rod. Connect with me on LinkedIn.

  • I had a not-so-good experience in production - the row counts varied too much. In the end, I changed to a temp table.

  • There's always a trace flag to help with this.

    Rather than adding OPTION (RECOMPILE) to the end of every select that references the table variable (and thereby destroying the advantage of table variables), there's trace flag 2453, which Brent has written an article about and Microsoft has issued a support notice about here.

    I'd be interested to hear from anyone who has implemented this trace flag and what their experience has been.
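
    For reference, a minimal sketch of toggling that flag for a test (global scope shown; it needs appropriate server permissions, so try it on a test instance first):

        -- Trace flag 2453 makes row-count changes in table variables
        -- trigger recompiles, similar to temp table behaviour
        DBCC TRACEON (2453, -1);   -- -1 = enable globally

        -- ...run the workload that relies on table variables...

        DBCC TRACEOFF (2453, -1);  -- disable again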

  • This reminds me of my studies of normalisation... one of the rules of normalisation is that you should not design your database to accommodate any specific query pattern.

    But sadly, we hit performance blocks and end up using coding patterns to bypass our platform's inherent issues. I genuinely think it's not the platform, it's the coder! We all inherit bad code from legacy developers and then start trying to undo performance bottlenecks without breaking functionality 🙁

    Temp tables and table variables have a place... I'd rather see a decent indexed join with a nice foreign key than throwing it into tempdb because we can't figure it out... but it's not an ideal world when we inherit code from others.

    Of course, if you have to deal with 20 years of linked servers embedded everywhere and scalar functions in WHERE clauses when you start your new job, table variables are a quick win.

    MVDBA
