September 30, 2008 at 12:33 pm
Article in question: http://www.sql-server-performance.com/articles/per/temp_tables_vs_variables_p1.aspx
I ask because I'm coming to a similar conclusion but my gut tells me that isn't correct. Is this a common issue with a sensible resolution?
Thanks for any thoughts...this is pretty frustrating.
September 30, 2008 at 12:59 pm
As far as I know, this article is correct (not entirely sure about #2; I think that may only be partially true). It also misses another point: temp tables can have indexes and keys added to them, while table variables only get the indexes associated with their PRIMARY KEY (and UNIQUE) constraints.
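A minimal T-SQL sketch of that indexing difference (table and column names are made up for illustration; this reflects SQL 2005-era behavior, before SQL 2014 added inline INDEX syntax for table variables):

```sql
-- Temp table: additional indexes can be created after the fact.
CREATE TABLE #Orders (OrderID int PRIMARY KEY, CustomerID int, OrderDate datetime);
CREATE NONCLUSTERED INDEX IX_Orders_Customer ON #Orders (CustomerID);

-- Table variable: only constraint-backed indexes, declared inline.
DECLARE @Orders TABLE (
    OrderID    int PRIMARY KEY,  -- gets a clustered index via the PK
    CustomerID int UNIQUE,       -- a UNIQUE constraint also gets an index
    OrderDate  datetime
);
-- CREATE INDEX ON @Orders ... is not allowed; there is no way to add
-- an index to a table variable after it is declared.

DROP TABLE #Orders;
```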
What part are you finding frustrating?
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
September 30, 2008 at 1:06 pm
Agreed - it looks to be fairly accurate. Although he does run the tests on 2005 at the end, he didn't mention that SQL 2005 can do statement-level recompiles, so some of those recompile penalties aren't nearly as bad as they were in 2000.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
September 30, 2008 at 1:11 pm
Thank you for the responses.
The frustration point is that we apparently need to re-write ~100 stored procs to no longer make use of the UDFs that ran perfectly fine under SQL Server 2000. The act of re-writing is only a few hours of time, but it could cause our current (and some previous) releases to be entirely re-validated (not a few hours of time). Thanks again for the confirmations.
September 30, 2008 at 1:14 pm
The article is basically correct, but misses a few key points.
First, the recompile cost in SQL 2005 is vastly reduced compared to SQL 2000. SQL 2005 can do statement-level recompiles, which makes them much less expensive.
Second, the "precompilation" of table variables is correct, but assumes they will always have exactly 1 row in them. So, yes, they are precompiled (which merely means they have a relatively static execution plan), but in this case, that's more often a bad thing than a good thing. A join on a table variable will almost always end up as a nested loop. Just try that and see how well it goes on a million rows!
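A quick way to see that 1-row assumption for yourself (hypothetical table names; compare the estimated vs. actual row counts in the execution plan):

```sql
DECLARE @Big TABLE (ID int PRIMARY KEY);

INSERT INTO @Big (ID)
SELECT OrderID FROM dbo.Orders;   -- imagine this loads a million rows

-- The optimizer estimates ~1 row for @Big regardless of its contents,
-- so this join will almost always come out as a nested loop.
SELECT o.OrderDate
FROM @Big b
JOIN dbo.Orders o ON o.OrderID = b.ID;

-- The same query shape against a temp table (#Big) would get real
-- statistics, and the optimizer could choose a hash or merge join.
```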
Third, it mentions transactions, but doesn't give any data on why this is a good/bad thing. For example, an advantage to the lack of rollback on table variables is that you can insert into them, update them, get an error, skip to a Catch block with a rollback command, and still capture the data in the table variable into an error log. The rollback would, in most scenarios, dump the data out of a temp table. On the other hand, I've seen error-handling that relied on rollback end up with procs returning completely incorrect data, because the contents of variables, including table variables, wasn't cleaned up by the rollback. Managed correctly, this can be a good thing, but you have to be really, really certain of your code.
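The rollback behavior described above can be demonstrated in a few lines (error-log scenario sketched with made-up names):

```sql
CREATE TABLE #ErrLog (Msg varchar(100));
DECLARE @ErrLog TABLE (Msg varchar(100));

BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO #ErrLog VALUES ('captured in temp table');
    INSERT INTO @ErrLog VALUES ('captured in table variable');
    RAISERROR('boom', 16, 1);        -- force an error mid-transaction
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    SELECT Msg FROM #ErrLog;   -- empty: the rollback undid the insert
    SELECT Msg FROM @ErrLog;   -- still holds its row: table variables
                               -- are not affected by the rollback
END CATCH;

DROP TABLE #ErrLog;
```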
I routinely test code with both table variables and temp tables. I've found that, any time the number of rows will go over a few hundred, temp tables are pretty much a better bet. Not absolute, but pretty close to it.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
September 30, 2008 at 1:15 pm
Why do you need to rewrite?
Are they suddenly performing badly? Are they giving incorrect results?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 30, 2008 at 1:27 pm
Generally speaking, UDFs work just about the same in 2005 as they did in 2000. I have several that I didn't change a line of code in, and they work just fine.
Are you running into a specific problem with those UDFs, or are you just doing "shotgun refactoring" on them?
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
September 30, 2008 at 1:32 pm
Clay.Angelly (9/30/2008)
@GSquared: Thank you, great info there.
@GilaMonster: Yes, the particular example we're focusing on anyway... has gone from ~50ms execution times to ~1.5 minutes since the migration to SS2005.
You might care to check your indexing before trying to blow through a lot of rewriting. I did find that the new optimizer tends to use different indexes than the "old" optimizer, and sometimes likes them laid out a little differently.
You may find that a little TLC on indexing (like - rebuilding them all or taking a look at the missing indexes DMV) may save you a lot of rewriting.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
September 30, 2008 at 1:40 pm
Clay.Angelly (9/30/2008)
@GilaMonster
: Yes, the particular example we're focusing on anyway...has gone from ~50ms execution times to ~1.5 minutes since the migration to SS2005.
Ah. Could you post that example here, along with details of the tables involved and the indexes on those tables? There are a couple of people around here who enjoy rewriting queries who might be willing to help.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 30, 2008 at 1:46 pm
Yes, it's a specific problem with some of the UDFs that return a few hundred rows or more. We have rebuilt indexes, run perfmon, and even let the tuning wizard have its say. 🙂 I think we'll shelve SQL Server 2005 for now and see if SQL Server 2008 handles this any differently.
Here is my equivalent post at the ASP.NET forums which gives some examples if anyone is still interested: http://forums.asp.net/t/1327651.aspx
Thanks all, I appreciate your time and info.
September 30, 2008 at 2:00 pm
I looked at the example that you have posted there. It occurs to me that if @Group_ID is unique in the table that your function returns, then you can add a Primary Key to the table in the function definition, which may solve your problem.
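A sketch of what that change would look like (the function and column names here are hypothetical stand-ins for the actual code in the linked thread; the only relevant change is the PRIMARY KEY in the return-table definition):

```sql
CREATE FUNCTION dbo.GetGroups (@Parent_ID int)
RETURNS @Result TABLE
(
    Group_ID   int PRIMARY KEY,   -- gives the returned table a unique
                                  -- clustered index the optimizer can use
    Group_Name varchar(50)
)
AS
BEGIN
    INSERT INTO @Result (Group_ID, Group_Name)
    SELECT Group_ID, Group_Name
    FROM dbo.Groups
    WHERE Parent_ID = @Parent_ID;
    RETURN;
END;
```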
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
September 30, 2008 at 2:07 pm
Clay.Angelly (9/30/2008)
I think we'll shelve SQL Server 2005 for now and see if SQL Server 2008 handles this any differently.
Looking at the code you have, not likely. 2008, in my experience, performs closer to 2005 than to 2000 in this area.
Multi-statement table-valued functions do not perform well, especially on larger row sets. There's no real way around that. I would recommend that you change the functions into procedures, then INSERT INTO ... EXEC the results into a temp table and use that.
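A sketch of that rewrite, using the same hypothetical names as before (the function body moves into a procedure, and callers materialize the result into a temp table, which has real statistics for subsequent joins):

```sql
CREATE PROCEDURE dbo.GetGroups_Proc
    @Parent_ID int
AS
SELECT Group_ID, Group_Name
FROM dbo.Groups
WHERE Parent_ID = @Parent_ID;
GO

-- Caller side: capture the proc's result set into a temp table.
CREATE TABLE #Groups (Group_ID int PRIMARY KEY, Group_Name varchar(50));

INSERT INTO #Groups (Group_ID, Group_Name)
EXEC dbo.GetGroups_Proc @Parent_ID = 42;

-- Subsequent joins run against #Groups, which has statistics.
SELECT g.Group_Name FROM #Groups g;

DROP TABLE #Groups;
```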
See some tests I did - http://sqlinthewild.co.za/index.php/2008/08/12/views-or-functions/
Are all of those distincts necessary?
Regarding the count at the end, since all you're doing is seeing if a row exists, use the EXISTS statement. That way only 1 row will have to be read, not the entire table.
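The two patterns side by side (hypothetical table and parameter names):

```sql
DECLARE @CustomerID int;
SET @CustomerID = 42;

-- Counting the whole table just to test for presence:
IF (SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = @CustomerID) > 0
    PRINT 'found';

-- EXISTS can stop as soon as the first matching row is found:
IF EXISTS (SELECT 1 FROM dbo.Orders WHERE CustomerID = @CustomerID)
    PRINT 'found';
```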
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 30, 2008 at 2:30 pm
Adding a PK on @Group_ID didn't help. I'm not sure about the distincts; it's not my code. Thanks, I may give your other idea a shot.
September 30, 2008 at 9:21 pm
Heh... I find it very difficult to believe an article on performance where the author says "At first, we prepare a test table with 1 million records:" and then builds the damned thing with a bloody loop. 😉
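For contrast, a set-based way to build a million-row test table in a single statement, no loop (table name is made up; the cross join of the catalog views just supplies enough rows to draw from):

```sql
SELECT TOP (1000000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N,
       ABS(CHECKSUM(NEWID())) % 50000            AS SomeValue
INTO   dbo.TestMillion
FROM   sys.all_columns ac1
CROSS JOIN sys.all_columns ac2;
```

On most servers this runs in seconds, where a row-by-row loop over a million inserts can take minutes.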
He did get one thing absolutely right, though...
"There is no universal rule for when and where to use temporary tables or table variables. Try them both and experiment. "
--Jeff Moden
Change is inevitable... Change for the better is not.