Local Temporary Tables and Table Variables

  • GilaMonster (9/13/2008)


    Table variables have the same scope as normal variables, which is much narrower than a temp table's. Their lifetime is limited to one set of commands (batch) sent to the server.

    So, if you send a bunch of ad-hoc SQL to the server (using Execute or ExecuteNonQuery or the like) and that ad-hoc SQL declares and uses a table variable, the table variable only exists for the duration of that piece of ad-hoc SQL.
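
    A minimal sketch of that scope difference, assuming both batches run in the same session with GO as the batch separator:

    DECLARE @tv TABLE (id int);
    CREATE TABLE #tt (id int);
    GO -- end of the batch: @tv is gone, #tt survives

    SELECT * FROM #tt; -- works: a local temp table lives until the session ends or it is dropped
    SELECT * FROM @tv; -- fails: the table variable went out of scope with the previous batch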

    Are you running SQL Enterprise or Standard?

    Have you checked to see if the page file usage is coming from SQL or from something else on the box?

    Lastly (not a perf issue) RC0 isn't licensed for production usage (unless you're a member of the early-adopter program) and has a built-in expiry date. You should replace that with SQL 2008 RTM as soon as you can.

    Thank you Gila

    Good, that's a smaller scope. After reading the previous posts, we also observe high IO and CPU.

    So my next step is perhaps to change all TABLE VARIABLEs -> TEMP TABLEs and see if a similar pattern exists.

    I checked (sp_configure, Process Explorer). SQL has a max memory of 3GB, yes, but the VirtualSize is 10GB, so it is coming from sqlservr.exe.

    It's a dedicated SQL Server; nothing else runs on it.

    It's SQL 2008 Standard (yes, the RC0 evaluation). I wish we could upgrade already 🙁

    Added:

    Actually, now I recall we are in the early-adopter program, so we may be safe there.

    And pagefile usage was 8GB last night -> 5GB near midnight -> 10GB this morning -> 8.7GB at noon -> 12GB now.

    BTW, we use a spatial index and the Full-Text Daemon on the server.

    SQLServerNewbieMCITP: Database Administrator SQL Server 2005
  • Ummm... high IO and CPU isn't necessarily caused by either Table Variables or Temp Tables. It's HOW they are used. I think what you're going to find is that the shift to Temp Tables may help a bit, but the real key is how much RBAR and how many aberrant joins you're working with in the code.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • jerryhung (9/13/2008)


    Good, that's a smaller scope. After reading the previous posts, we also observe high IO and CPU.

    So my next step is perhaps to change all TABLE VARIABLEs -> TEMP TABLEs and see if a similar pattern exists.

    You can try. It may help, it may not. Depends on what's causing the problem in the first place.

    Can you post one of the queries that seem to be giving problems (perhaps start a separate thread for that)?

    I checked (sp_configure, Process Explorer). SQL has a max memory of 3GB, yes, but the VirtualSize is 10GB, so it is coming from sqlservr.exe.

    It's a dedicated SQL Server; nothing else runs on it.

    SQL's max memory is 3GB, but it's using 10? You may want to raise that with MS through your early-adopter contacts. SQL does and will use a bit more memory than max server memory specifies, but not 7GB more.

    What do the Target Server Memory and Total Server Memory counters return (PerfMon, SQL Server: Memory Manager)?
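
    If it's easier than flipping over to PerfMon, the same two counters can also be read with T-SQL; a rough sketch (the LIKE filters are deliberately loose because the exact counter names vary slightly between builds):

    SELECT counter_name, cntr_value -- values are reported in KB
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Memory Manager%'
      AND (counter_name LIKE 'Target Server Memory%'
           OR counter_name LIKE 'Total Server Memory%');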

    What does the following return?

    SELECT SUM(single_pages_kb + multi_pages_kb) / 1024.0
    FROM sys.dm_os_memory_clerks;

    It's SQL 2008 Standard (yes, the RC0 evaluation). I wish we could upgrade already 🙁

    You should look into upgrading to RTM. I don't know when RC0 is going to expire and I don't think you want to find out when your prod server refuses to start up.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (9/14/2008)


    jerryhung (9/13/2008)


    Good, that's a smaller scope. After reading the previous posts, we also observe high IO and CPU.

    So my next step is perhaps to change all TABLE VARIABLEs -> TEMP TABLEs and see if a similar pattern exists.

    You can try. It may help, it may not. Depends on what's causing the problem in the first place.

    Can you post one of the queries that seem to be giving problems (perhaps start a separate thread for that)?

    I checked (sp_configure, Process Explorer). SQL has a max memory of 3GB, yes, but the VirtualSize is 10GB, so it is coming from sqlservr.exe.

    It's a dedicated SQL Server; nothing else runs on it.

    SQL's max memory is 3GB, but it's using 10? You may want to raise that with MS through your early-adopter contacts. SQL does and will use a bit more memory than max server memory specifies, but not 7GB more.

    What do the Target Server Memory and Total Server Memory counters return (PerfMon, SQL Server: Memory Manager)?

    What does the following return?

    SELECT SUM(single_pages_kb + multi_pages_kb) / 1024.0
    FROM sys.dm_os_memory_clerks;

    It's SQL 2008 Standard (yes, the RC0 evaluation). I wish we could upgrade already 🙁

    You should look into upgrading to RTM. I don't know when RC0 is going to expire and I don't think you want to find out when your prod server refuses to start up.

    I agree. I had the fear that one day SQL Server would just stop working (in-house and in production).

    But at the moment, I *think* we may have found the issue: the page file going nuts with an iFTS catalog. This is RC0 x64 on Windows 2003 x64 as well.

    So I disabled the Full-Text Automatic Change Tracking and, guess what, the pagefile has stayed at 3.5-3.7GB for 12 hours.
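
    For reference, switching a full-text index from automatic to manual change tracking looks roughly like this (dbo.Documents is a made-up table name; MANUAL keeps tracking changes but leaves it to you to decide when to apply them, OFF stops tracking entirely):

    ALTER FULLTEXT INDEX ON dbo.Documents SET CHANGE_TRACKING MANUAL;

    -- later, apply the accumulated changes on your own schedule
    ALTER FULLTEXT INDEX ON dbo.Documents START UPDATE POPULATION;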

    Now we just need to prove the theory

    SQLServerNewbieMCITP: Database Administrator SQL Server 2005
  • Very good article.

    Pingback from

    http://madebysql.blogspot.com/

    Thanks

  • The temp table is created with nvarchar(50) in both screenshots; is this correct?

    "

    CREATE TABLE #TestUDTs (AddressTypeID int NOT NULL,
        [Name] nvarchar(50) NOT NULL PRIMARY KEY CLUSTERED(AddressTypeID))

    It will fail with the following error:

    Msg 2715, Level 16, State 7, Line 1

    Column, parameter, or variable #2: Cannot find data type dbo.Name.

    The work around is to use the native data type of the user defined data type:

    CREATE TABLE #TestUDTs (AddressTypeID int NOT NULL,
        [Name] nvarchar(50) NOT NULL PRIMARY KEY CLUSTERED(AddressTypeID))

    "

  • Great article!

    I stumbled across another difference some years ago now, while writing a Crystal Report whose data source was a stored procedure that used a temporary table: Crystal totally fails in this scenario.

    The underlying issue here is that if you "SET FMTONLY ON" and call a stored procedure that uses a temporary table, the call will fail (Invalid object name '#temp1234' under SQL 2008; the message may vary on earlier versions). This is what Crystal does in order to determine the expected schema of the output.

    Using a table variable works around this issue nicely.
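
    A small repro sketch of that behaviour on SQL 2008 (the procedure names here are made up):

    CREATE PROCEDURE dbo.usp_WithTempTable AS
    BEGIN
        CREATE TABLE #t (id int);
        SELECT id FROM #t; -- under FMTONLY the CREATE TABLE is never executed, so this reference fails
    END
    GO
    CREATE PROCEDURE dbo.usp_WithTableVariable AS
    BEGIN
        DECLARE @t TABLE (id int);
        SELECT id FROM @t; -- fine: the DECLARE is resolved at compile time
    END
    GO

    SET FMTONLY ON; -- this is what Crystal does to sniff the result-set schema
    EXEC dbo.usp_WithTempTable; -- fails: Invalid object name '#t'
    EXEC dbo.usp_WithTableVariable; -- returns an empty, metadata-only result set
    SET FMTONLY OFF;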

    Cheers,

    Nick

  • Thank you very much, Peter! No matter how many times someone discusses table variables vs. temp tables, it seems as though there are some more wrinkles to explore.

    Two things you might want to address (only the first one is really germane to the topic):

    1) Talk about indexing. With a temp table, you can add any kind of index you want. With a table variable, you are limited to the index on the PK that you declare for the table.

    2) In IE 6, the "Appendix" HTML table is not shown completely unless the browser window is enlarged to something in excess of about 1280 pixels.

    Curt

  • Nice article. I was quite surprised that performance was not addressed.

  • Thanks everyone for your comments.

    To david.hultin 15509:

    Yes, it was a copy/paste problem. It was reported by, interestingly, another David (David Quéméré) when it was first published in 2008 (check the comments on Page 1).

    To ccoker-1050064:

    Thanks for mentioning the IE 6 issue.

    Indexes on temp tables are well discussed elsewhere. Since the article is intended to focus on something not covered well by other writers, they are only mentioned briefly in the appendix DDL part.

    To Ken Shapley:

    Again, the article is intended to focus on something not covered well by other writers. The performance aspect is well discussed elsewhere, and it is mostly related to, and caused by, the lack of indexes/statistics on table variables.

  • I've read some articles on table variable vs. temp table performance and I have found them lacking a good explanation. This one left me hanging at the end of the article: http://www.sql-server-performance.com/articles/per/temp_tables_vs_variables_p1.aspx

    Table 2: Using SQL Server 2005 (time in ms).

    In some cases, SQL 2005 was much faster than SQL 2000 (marked with green). But in many cases, especially with huge amounts of data, procedures that used table variables took much longer (highlighted with red). In four cases, I even gave up waiting.

    Why?

  • Peter, Nice article.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Ken Shapley (1/15/2010)


    I've read some articles on table variable vs. temp table performance and I have found them lacking a good explanation. This one left me hanging at the end of the article: http://www.sql-server-performance.com/articles/per/temp_tables_vs_variables_p1.aspx

    Table 2: Using SQL Server 2005 (time in ms).

    In some cases, SQL 2005 was much faster than SQL 2000 (marked with green). But in many cases, especially with huge amounts of data, procedures that used table variables took much longer (highlighted with red). In four cases, I even gave up waiting.

    Why?

    IMHO, two reasons... 1) Stats will not and cannot be created against a table variable by StatMan. 2) Since the content of the table variable is not realized until run time (like a Temp Table) and they don't cause a recompile (which a temp table might), table variables are always evaluated as having only one row by the optimizer, which also means that the optimizer may not (and frequently does not) make the best choice when many rows are present in the table variable.
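
    A rough way to see both points for yourself (the row count is arbitrary; compare the actual execution plans of the two final SELECTs):

    DECLARE @tv TABLE (n int PRIMARY KEY);
    CREATE TABLE #tt (n int PRIMARY KEY);

    INSERT INTO @tv (n)
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns a CROSS JOIN sys.all_columns b;

    INSERT INTO #tt (n) SELECT n FROM @tv;

    -- The temp table gets statistics (and can trigger a recompile), so its estimated
    -- row count is close to the truth; the table variable is typically estimated at
    -- one row, which often pushes the optimizer into plans that crawl at this size.
    SELECT COUNT(*) FROM @tv t JOIN @tv t2 ON t2.n = t.n WHERE t.n > 500;
    SELECT COUNT(*) FROM #tt t JOIN #tt t2 ON t2.n = t.n WHERE t.n > 500;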

    As a side bar, they also make life a bit difficult when troubleshooting code because their scope is not only session sensitive but also run sensitive, just like any variable. With temp tables, you can run the code that populates the table once and do continuous troubleshooting against it. With a table variable, you have to rerun the code that populates it every time, which may (usually :-D) require a bit of juggling with commented-out code.

    There are places where table variables excel compared to temp tables (like where you don't want a rollback to affect the content) but, for the most part, the only reason why I'd ever use a table variable is because you can't use/reference temp tables in a function.
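
    A quick illustration of the rollback point (just a sketch):

    DECLARE @tv TABLE (id int);
    CREATE TABLE #tt (id int);

    BEGIN TRAN;
        INSERT INTO @tv (id) VALUES (1);
        INSERT INTO #tt (id) VALUES (1);
    ROLLBACK;

    SELECT COUNT(*) FROM @tv; -- 1: table variables are not affected by the rollback
    SELECT COUNT(*) FROM #tt; -- 0: the temp table insert is rolled back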

    I also avoid ROLLBACKs like the plague (overcoming the "best" reason to use a table variable) because ROLLBACKs are comparatively VERY expensive... that's why you'll also not see much in the line of TRY/CATCH in my code... I make sure the "answer" is known and won't even start a transaction unless I'm absolutely sure (read that as "the code has prevalidated all data") that everything will successfully fly in the transaction.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • 3) The only indexes that can be created against table variables are primary key and unique constraints (and even then, people don't usually bother)

    It's not a problem with 10 rows in the table (but then neither's the lack of stats). When you start putting several thousand rows in, joining the table variable, filtering on the table variable, etc, it becomes rather noticeable rather quickly
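
    For anyone who hasn't bumped into it, the limitation looks like this (SQL 2008-era syntax; object names are made up):

    DECLARE @tv TABLE
    (
        id   int         NOT NULL PRIMARY KEY, -- allowed: index via the PK constraint
        code varchar(10) NOT NULL UNIQUE       -- allowed: index via a unique constraint
    );
    -- CREATE INDEX IX_tv_code ON @tv (code);  -- not possible: a plain nonclustered index
                                               -- can't be added to a table variable

    CREATE TABLE #tt (id int NOT NULL PRIMARY KEY, code varchar(10) NOT NULL);
    CREATE INDEX IX_tt_code ON #tt (code);     -- fine on a temp table, at any time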

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (1/16/2010)


    3) The only indexes that can be created against table variables are primary key and unique constraints (and even then, people don't usually bother)

    It's not a problem with 10 rows in the temp table (but then neither's the lack of stats). ...

    Gail did you mean table variable here?

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
