May 20, 2010 at 9:54 am
tigriswoods (5/20/2010)
It's true I'm just past novice with SQL Server, and this article interests me a great deal, because we have many queries that are used to filter results back to the user. My question is a simple one, and I noticed it because our test for a NULL parameter comes first, whereas yours comes second.
i.e. we have:
select * from tableX X where
(@demo is NULL or X.demo = @demo)
AND (@enabled is NULL or X.enabled = @enabled)
whereas you have the reverse:
select * from tableX X where
(X.demo = @demo or @demo is NULL)
AND (X.enabled = @enabled or @enabled is NULL)
My question is this: does the order within the query matter? i.e. does SQL Server work from left to right? And if so, would putting the parameter test first in the query matter, i.e. avoiding the second half of the condition entirely?
Thanks!
Nope. Conditions are evaluated in whatever order the query processor finds appropriate. There's no first and no second: every condition must be considered as if the whole expression were evaluated "all at once".
-- Gianluca Sartori
May 20, 2010 at 9:55 am
tigriswoods (5/20/2010)
My question is this: does the order within the query matter? i.e. does SQL Server work from left to right? And if so, would putting the parameter test first in the query matter, i.e. avoiding the second half of the condition entirely?
No, the written order does not generally matter (there are exceptions such as the order of CASE expression evaluation). The general rule is that SQL Server is free to change the order of evaluation (and many other things too) while optimising a query for execution.
The second half of your question is known as 'short-circuiting' and SQL Server does not do this as a rule.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
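Since evaluation order is not guaranteed, the catch-all pattern has to be written so every branch is safe regardless of which side is evaluated first. A minimal sketch of the pattern under discussion (table and column names taken from the example above, otherwise illustrative):

```sql
-- Catch-all query: correct whichever order the optimizer evaluates the
-- predicates in. Neither branch relies on the other "short-circuiting".
SELECT X.demo, X.enabled
FROM dbo.tableX AS X
WHERE (@demo IS NULL OR X.demo = @demo)
  AND (@enabled IS NULL OR X.enabled = @enabled);
```

Both orderings of each OR are logically equivalent; the optimizer is free to rearrange them either way.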
May 20, 2010 at 10:23 am
I have had a situation with a stored procedure that had a lot of search parameters, many table joins, and min/max date ranges for date fields; it became a nightmare trying to stay away from dynamic SQL. I did not want to give the users access to the underlying tables, so I decided to use WITH EXECUTE AS OWNER for the procedure so I could still control permissions through the procedure itself. This worked very well for my situation.
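A minimal sketch of that permission pattern (procedure, table, and role names invented for illustration): users get EXECUTE on the procedure only, and the procedure runs under the owner's permissions.

```sql
-- Callers need no direct table permissions; the procedure runs as its owner.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerId int  = NULL,
    @MinDate    date = NULL,
    @MaxDate    date = NULL
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    SELECT o.OrderId, o.CustomerId, o.OrderDate
    FROM dbo.Orders AS o
    WHERE (@CustomerId IS NULL OR o.CustomerId = @CustomerId)
      AND (@MinDate IS NULL OR o.OrderDate >= @MinDate)
      AND (@MaxDate IS NULL OR o.OrderDate <= @MaxDate);
END;
GO
GRANT EXECUTE ON dbo.SearchOrders TO SearchUsers;
```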
May 20, 2010 at 10:31 am
Atif Sheikh (5/20/2010)
Dynamic SQL is the last thing to do.
Sometimes dynamic SQL is quite useful. There are also alternatives to using CASE expressions or dynamic SQL (such as child procs containing the different variations of the code to execute).
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
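The "child procs" alternative mentioned above might look like the following sketch: a thin dispatcher that calls a dedicated, separately compiled procedure per parameter combination (all names hypothetical):

```sql
CREATE PROCEDURE dbo.GetProducts
    @CategoryId int = NULL
AS
BEGIN
    -- Each child procedure gets its own cached plan, so the unfiltered
    -- case never shares a plan with the filtered case.
    IF @CategoryId IS NULL
        EXEC dbo.GetProducts_All;
    ELSE
        EXEC dbo.GetProducts_ByCategory @CategoryId = @CategoryId;
END;
```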
May 20, 2010 at 10:34 am
Gianluca Sartori (5/20/2010)
Brett Robson (5/20/2010)
I don't see why you guys are promoting the use of dynamic SQL. I'm often asked in job interviews when it's appropriate to use dynamic SQL - the correct answer is never. I have never (and will never) see dynamic SQL outperform a properly coded & compiled stored procedure. It's impossible - NOTHING gets cached.
"Never" and "Always" hardly apply to the database world. "It depends" fits better.
Late to the conversation, but I find this to be sound advice when dealing with the database world. I too used to be adamant that dynamic SQL had no place. There are cases for it, though - so long as it is properly implemented and used.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
May 20, 2010 at 10:41 am
Jeff Gray (5/20/2010)
The author did a fine job discussing the different approaches. The article shows once again that never and always are both almost always never the correct answer.
I concur.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
May 20, 2010 at 1:49 pm
It appears that two problems are being solved here:
#1 - How to deal with dynamic SQL.
#2 - How to apply the proper protection from SQL injection.
SPs are useful (efficient) for a combination of static SQL and solving #2.
Enter the table-value-function:
Wrap your table SELECTs in TVFs.
Remove all external CRUD permissions from table objects.
Create dynamic, parameterized SQL in your DAL against the TVFs, and SQL Server (2005+) will create optimized plans, reuse cached plans, and give you the best performance (especially when parameterized).
Unless you have defined your SP WITH RECOMPILE, you will run into a performance wall with large-table/multi-join queries, because the first "dynamic" SQL plan will be reused from cache (speaking from experience). Please try it.
The key here is solving the "dynamic problem" in the data access layer and not in the database code.
Jack
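As a sketch of the approach described (object and role names invented): wrap the SELECT in an inline table-valued function, grant SELECT only on the function, and let the DAL send parameterized statements against it.

```sql
-- Inline TVF over the base table; callers never touch dbo.Orders directly.
CREATE FUNCTION dbo.fn_Orders()
RETURNS TABLE
AS
RETURN
    SELECT o.OrderId, o.CustomerId, o.OrderDate
    FROM dbo.Orders AS o;
GO
GRANT SELECT ON dbo.fn_Orders TO AppRole;
-- The DAL then issues parameterized statements such as:
--   SELECT * FROM dbo.fn_Orders() WHERE CustomerId = @CustomerId;
-- which SQL Server can cache and reuse as a parameterized plan.
```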
May 20, 2010 at 4:59 pm
jack.erdey (5/20/2010)
Unless you have defined your SP WITH RECOMPILE, you will run into a performance wall with large-table/multi-join queries, because the first "dynamic" SQL plan will be reused from cache (speaking from experience). Please try it.
And how exactly is this different from plan caching in your TVF approach? A plan is a plan, and sometimes the plan is sub-optimal, no matter if you use dynamic SQL, stored procedures or your TVF approach.
May 20, 2010 at 5:07 pm
jack.erdey (5/20/2010)
Enter the table-value-function:
Wrap your table SELECTs in TVF's.
Remove all external CRUD permissions from table objects.
Create dynamic, parameterized SQL in your DAL against the TVF's, and SQL Server (2005+) will create optimize planes and reuse cached plans, and give you the best performance (specially when parameterized).
Do you really wrap basic SELECTs against a single table into a TVF, meaning there is no row-level permission, input parameter, join, column exclusion, calculation, or other logic behind it? If so, what's the point? Why not just grant SELECT permission on the table itself?
May 20, 2010 at 7:30 pm
Just wanted to say thanks! I spent the day breaking a big SP that handled 8 different filters into 8 SPs, eliminating all need to have NULL parameters. We have about 1.5 million records, and the sum total time to run is now about half of what it was previously. I will definitely keep this (don't use NULL parameters if at all possible) in mind for future SPs.
May 20, 2010 at 10:46 pm
CirquedeSQLeil (5/20/2010)
Atif Sheikh (5/20/2010)
Dynamic SQL is the last thing to do. Sometimes Dynamic SQL is quite useful. There are alternatives to using case statements or dynamic sql (such as child procs with the different variations of the code to execute).
100% agree. But I think there are different execution plans when using the LIKE operator versus the = operator. If we use the LIKE operator, it always goes for an Index Scan (as in the dynamic query in the article). But if we use the same dynamic query with =, it gives a different execution plan (at least on my PC with SQL Server 2005 Standard Edition).
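The difference described can be reproduced with two variants of the same filter (table, column, and index assumed for illustration). A LIKE pattern with a leading wildcard is not sargable, whereas equality on an indexed column can seek:

```sql
-- Leading wildcard: no seek possible on an index over LastName, so a
-- scan is expected:
SELECT c.CustomerId FROM dbo.Customers AS c
WHERE c.LastName LIKE '%' + @Name + '%';

-- Equality on the same indexed column can use an index seek:
SELECT c.CustomerId FROM dbo.Customers AS c
WHERE c.LastName = @Name;
```

Note that LIKE with only a trailing wildcard (`@Name + '%'`) can still seek, so the scan-versus-seek outcome depends on where the wildcard sits, not on LIKE itself.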
May 21, 2010 at 12:47 am
Jonathan,
I must say that I found your article to have some really good information in it. I guess there were only a couple of comments on it that would concern me.
1) If I understand what you are saying, it sounds like you had a task that you needed to complete. You tried a number of ways to do that; however, you found that dynamic SQL was working better. This was not your first option, but you found that the dynamic SQL worked well. Is it safe to say you recommend that it be used when static stored procedures are not cutting it?
Chris Shaw
May 21, 2010 at 5:31 am
chris.shaw (5/21/2010)
Jonathan, I must say that I found your article to have some really good information in it. I guess there were only a couple of comments on it that would concern me.
1) If I understand what you are saying, it sounds like you had a task that you needed to complete. You tried a number of ways to do that; however, you found that dynamic SQL was working better. This was not your first option, but you found that the dynamic SQL worked well. Is it safe to say you recommend that it be used when static stored procedures are not cutting it?
Chris Shaw
Chris,
The particular query was timing out as the table it was querying had gradually grown to about 20 million rows. I found that this was a potential solution by taking the static SQL that was timing out and rewriting it to remove the unnecessary predicates that tested whether the parameters were passed in as NULL. There are several things to consider when using dynamic SQL. Procedures using it carry greater overhead: they concatenate strings when building the SQL and make more internal procedure calls. But that overhead is microseconds, so compared to the execution time of the query (several seconds in this case) it becomes insignificant. It is also only useful if the SQL you are executing needs to change depending on the parameters, but it can be very effective for making certain types of procedure perform better and for increasing the versatility of procedures.
Jonathan
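The approach Jonathan describes - appending only the predicates whose parameters were actually supplied, then executing the string parameterized - can be sketched like this (procedure and object names reuse the earlier example; otherwise invented):

```sql
CREATE PROCEDURE dbo.SearchTableX
    @demo    varchar(50) = NULL,
    @enabled bit         = NULL
AS
BEGIN
    DECLARE @sql nvarchar(max) = N'SELECT X.demo, X.enabled
FROM dbo.tableX AS X
WHERE 1 = 1';

    -- Append only the predicates that are needed, so the optimizer
    -- never sees the unused "@x IS NULL OR ..." branches.
    IF @demo IS NOT NULL
        SET @sql += N' AND X.demo = @demo';
    IF @enabled IS NOT NULL
        SET @sql += N' AND X.enabled = @enabled';

    -- sp_executesql keeps the statement parameterized: each distinct
    -- shape gets its own cached plan, and values cannot inject SQL.
    EXEC sys.sp_executesql
        @sql,
        N'@demo varchar(50), @enabled bit',
        @demo = @demo, @enabled = @enabled;
END;
```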
May 21, 2010 at 11:44 am
I feel the article is actually pretty poor. All DBAs know this kind of dynamic SQL.
Someone suggested RECOMPILE and the OPTIMIZE FOR hint, and he is spot on!
I was able to improve performance on a parameterized query with the OPTIMIZE FOR option.
Best
Clement
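The hints being referred to are presumably OPTION (RECOMPILE) and OPTION (OPTIMIZE FOR ...). With OPTION (RECOMPILE), the statement is recompiled on every execution using the actual parameter values, which lets the optimizer discard the unused branches of a catch-all predicate (reusing the earlier example's names):

```sql
SELECT X.demo, X.enabled
FROM dbo.tableX AS X
WHERE (@demo IS NULL OR X.demo = @demo)
  AND (@enabled IS NULL OR X.enabled = @enabled)
OPTION (RECOMPILE);  -- plan rebuilt per call from the actual values
```

The trade-off is a compilation on every execution, so this suits queries whose run time dwarfs their compile time.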
May 21, 2010 at 12:06 pm
shaytoder (5/20/2010)
My mistake; for some reason I thought this approach would use seeks and not scans,
but now I tried it again, and I see scans.
Sorry... :blush:
That's been the common problem throughout this thread... lots of claims... no coded proof of claims. I'm not sure who first said it, but "One test is worth a thousand expert opinions." 🙂
Thanks for the test you did and the feedback. It's a good person who can admit that a mistake was made. Well done.
--Jeff Moden
Change is inevitable... Change for the better is not.
Viewing 15 posts - 61 through 75 (of 106 total)