Performance of Temp Table vs Table Variable in a Procedure

  • I have a procedure that returns 5000+ records and contains 500+ lines of code, with a mixture of different tables and a lot of logic. I used a table variable and the results used to come back quickly, but at a certain point the query became very slow and took 30 minutes to execute. I made many performance changes, but nothing worked. Finally, I replaced the table variable with a temp table and executed the procedure again, and the results came back in seconds.
    In my analysis, when I passed the parameters directly to the procedure and executed it, the table variable version stalled after returning 20+ rows and just kept on running. My question is: how do table variables and temp tables work internally, and why did the table variable version keep running for so long?

    Thanks
    Vasu

  • Table variables aren't great. Regardless of how many rows you're going to INSERT into them, the query planner will always estimate the number of rows in the table as 1. This means the query plan isn't going to allocate the correct resources to run your query, and almost everything will likely end up spilling into tempdb, rather than the server doing as much as it can in RAM.

    Temporary tables have statistics and therefore much better estimates, so a better query plan can be generated and the right resources devoted to your query.

    Someone will be able to give a much better and more detailed answer than I can, but that's a quick summary of why there is such a difference between the two scenarios. The bottom line: use temporary tables rather than table variables, especially when dealing with more than one row.
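
    A quick way to see this for yourself, as a sketch (the names are made up, and running in SSMS with "Include Actual Execution Plan" enabled is assumed):

    -- Table variable: no statistics, so the optimizer uses a fixed guess
    DECLARE @TableVar TABLE (id INT PRIMARY KEY);

    INSERT INTO @TableVar (id)
    SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects a CROSS JOIN sys.all_objects b;

    -- Check "Estimated Number of Rows" on the table scan: nowhere near 10000
    SELECT COUNT(*) FROM @TableVar;

    -- Temp table: statistics are created, so the estimate tracks the real row count
    CREATE TABLE #TempTab (id INT PRIMARY KEY);

    INSERT INTO #TempTab (id)
    SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects a CROSS JOIN sys.all_objects b;

    SELECT COUNT(*) FROM #TempTab;

    DROP TABLE #TempTab;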

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices.  Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.

    John

  • John Mitchell-245523 - Friday, March 3, 2017 3:00 AM

    Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices.  Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.

    John

    Interesting. I did think they'd changed it; however, I ran a test on our SQL 2016 dev machine before posting the above, and it still showed 1-row estimates in the plan, so I assumed I'd "remembered" incorrectly. Odd.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thom A - Friday, March 3, 2017 3:10 AM

    Interesting. I did think they'd changed it; however, I ran a test on our SQL 2016 dev machine before posting the above, and it still showed 1-row estimates in the plan, so I assumed I'd "remembered" incorrectly. Odd.

    What compat mode is the database that you tested in?
    Compat mode 2012 or before, and you get the old CE.
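
    A quick way to check, as a sketch (the database name is a placeholder):

    -- Compat level 110 (2012) or lower gets the legacy cardinality estimator
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = N'YourDatabase';

    -- 120 (2014) or higher gets the new CE
    ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 120;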

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Friday, March 3, 2017 3:14 AM

    Thom A - Friday, March 3, 2017 3:10 AM

    Interesting. I did think they'd changed it; however, I ran a test on our SQL 2016 dev machine before posting the above, and it still showed 1-row estimates in the plan, so I assumed I'd "remembered" incorrectly. Odd.

    What compat mode is the database that you tested in?
    Compat mode 2012 or before, and you get the old CE.

    That would explain it. Thanks 🙂

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thanks, friends, for the good clarification. Many of our procs use table variables; since the result sets are very small they work fine, but they may still slow down later on, so I'm converting almost all of them to temp tables.
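
    For anyone making the same change, the conversion is mostly mechanical; a hypothetical example (the column list is made up):

    -- Before: table variable, no statistics
    -- DECLARE @Results TABLE (OrderID INT PRIMARY KEY, Total MONEY);

    -- After: temp table, which gets statistics and better estimates
    CREATE TABLE #Results (OrderID INT PRIMARY KEY, Total MONEY);

    -- ...same INSERT/SELECT logic as before, with @Results replaced by #Results...

    DROP TABLE #Results; -- dropped automatically at proc end, but explicit is tidier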

  • John Mitchell-245523 - Friday, March 3, 2017 3:00 AM

    Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices.  Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.

    John

    Funnily enough, on 2014 Express in 120 compat mode I only get 1 from the CE for the table variable. I don't have any other instances of 2014 running, as I've migrated everything to 2016 and vNext.
    😎

  • Yeah, I get the same on 2014 Developer.  And I can't find any authoritative reference for the 100 row thing, either.  Hmmmm.

    John

  • John Mitchell-245523 - Friday, March 3, 2017 5:59 AM

    Yeah, I get the same on 2014 Developer.  And I can't find any authoritative reference for the 100 row thing, either.  Hmmmm.

    John

    Mmm, I created a compat level 130 database on the dev server, made a bunch of trash data, and I also still get an estimate of 1.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • John Mitchell-245523 - Friday, March 3, 2017 5:59 AM

    Yeah, I get the same on 2014 Developer.  And I can't find any authoritative reference for the 100 row thing, either.  Hmmmm.

    John

    No server-wide trace flags in the mix?
    😎
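
    For anyone wanting to check their own instance, globally enabled trace flags can be listed with:

    -- -1 returns the status of all globally enabled trace flags
    DBCC TRACESTATUS (-1);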

  • John Mitchell-245523 - Friday, March 3, 2017 3:00 AM

    Yes, lack of statistics on table variables is likely to lead to inappropriate memory grants and inefficient plan choices.  Since SQL Server 2014, the query optimizer estimates the number of rows in a table variable as 100 instead of 1.

    John

    The advantage of table variables over temp tables is that (because they don't use statistics) they help maintain a reusable execution plan. As a general rule, table variables are better for OLTP stored procedures where only a small number of rows (maybe 100 or fewer) are returned. However, for more robust stored procedures, something like the data source for a report that will process thousands or millions of rows (which sounds more like what you're working with at the moment), temp tables are better, because SQL Server calculates actual statistics, which improves the performance of joins.

    Also, it could be useful in some situations if the table variable row count estimate were a server-level setting, rather than SQL Server assigning an arbitrary value like 1 or 100. That way the DBA could throttle it up or down as appropriate, in a manner similar to the cost threshold for parallelism setting. However, as far as I know, there is no server setting or trace flag for this currently.
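
    There is a per-query workaround, though: OPTION (RECOMPILE) compiles the statement after the table variable has been populated, so the optimizer sees the actual row count (it still gets no column statistics). A minimal sketch:

    DECLARE @Orders TABLE (OrderID INT PRIMARY KEY);

    INSERT INTO @Orders (OrderID)
    SELECT TOP (5000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects a CROSS JOIN sys.all_objects b;

    -- Compiled at execution time, so the estimate reflects the real 5000 rows
    -- instead of the fixed guess
    SELECT COUNT(*)
    FROM @Orders
    OPTION (RECOMPILE);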

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eirikur Eiriksson - Friday, March 3, 2017 6:19 AM

    John Mitchell-245523 - Friday, March 3, 2017 5:59 AM

    Yeah, I get the same on 2014 Developer.  And I can't find any authoritative reference for the 100 row thing, either.  Hmmmm.

    John

    No server-wide trace flags in the mix?
    😎

    No, none.

  • John Mitchell-245523 - Friday, March 3, 2017 7:07 AM

    Eirikur Eiriksson - Friday, March 3, 2017 6:19 AM

    John Mitchell-245523 - Friday, March 3, 2017 5:59 AM

    Yeah, I get the same on 2014 Developer.  And I can't find any authoritative reference for the 100 row thing, either.  Hmmmm.

    John

    No server-wide trace flags in the mix?
    😎

    No, none.

    Had some time to kill, so I tried every possible way of using a table variable, and the CE on 120 compat always returns 1 for the estimate.
    😎
