Use Dynamic SQL to Improve Query Performance

  • James,

    At the risk of sounding snotty, you handicapped Case 2 intentionally to remove it from consideration. Neither of the other cases will return the same RESULTS as case 2 will. Try it. Case 3 will return nothing because LastName and Email are Null. Case 4 will return nothing as @Email is NULL. Only Case 2 properly represents a FirstName search. Therefore it is the only correct case.

    Look at the number of coded cases you will need for the approach in case 3...

    1 input parameter = 2 cases (with/without null)

    2 input parameters = 4 cases

    3 input parameters = 8 cases

    4 input parameters = 16 cases

    5 input parameters = 32 cases

    6 input parameters = 64 cases

    7 input parameters = 128 cases

    8 input parameters = 256 cases

    Now, if you need a truly dynamic query, the ONLY option that performs well across every combination of input parameters on a million-row test table is Dynamic SQL. Otherwise, at 8 input parameters you are trying to maintain 256 cases, which is not only impractical, it's virtually impossible.
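    To make the combinatorics concrete, here is a minimal sketch of the dynamic approach, in Python with sqlite3 standing in for T-SQL and sp_executesql (the table and column names follow the thread's example; the data is invented). Each supplied parameter contributes one predicate fragment, so n parameters need n fragments rather than 2^n hand-coded cases, and values travel as bind parameters so the statement text stays injection-safe and cacheable:

    ```python
    import sqlite3

    def get_users(conn, last_name=None, first_name=None, email=None):
        # Build the WHERE clause from only the parameters actually supplied:
        # n inputs -> n optional fragments, not 2**n hand-coded cases.
        clauses, params = [], []
        for column, value in (("LastName", last_name),
                              ("FirstName", first_name),
                              ("Email", email)):
            if value is not None:
                clauses.append(column + " = ?")  # bind parameter, never concatenated text
                params.append(value)
        sql = "SELECT UserId FROM [User]"
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return conn.execute(sql, params).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE [User] "
                 "(UserId INTEGER, LastName TEXT, FirstName TEXT, Email TEXT)")
    conn.executemany("INSERT INTO [User] VALUES (?, ?, ?, ?)",
                     [(1, "Dingle", "James", "james@example.com"),
                      (2, "Robson", "Brett", "brett@example.com")])
    ```

    Note that a first-name-only search works here too, because an unsupplied parameter's predicate is simply omitted rather than left in the statement as a NULL comparison.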

    Edit: I didn't realize this article/thread was over 2 years old... I don't expect a response given how old it is.

    James Dingle-651585 (5/26/2010)

    If OBJECT_ID('Case3GetUser', 'P') Is Not Null Drop Procedure Case3GetUser;
    GO

    Create Procedure Case3GetUser (
        @LastName VarChar(50) = Null,
        @FirstName VarChar(50) = Null,
        @Email VarChar(255) = Null
    )
    As
    Begin
        RaisError('Case3GetUser', 10, 1) With NoWait;

        Select * From [User] Where
        -- Last name only (omitted but implicit: "And @LastName Is Not Null")
        UserId In (Select UserId From [User] Where @LastName = LastName And @FirstName Is Null)
        -- Last name and first name together (omitted but implicit "And @LastName Is Not Null And @FirstName Is Not Null")
        Or UserId In (Select UserId From [User] Where @LastName = LastName And @FirstName = FirstName)
        -- by email (omitted but implicit "And @Email Is Not Null")
        Or UserId In (Select UserId From [User] Where @Email = Email)
    End;



    Looking at the execution plans, we observe that:

    1. Method 2 has a unique execution plan per combination of parameters, while methods 3 and 4 always have the same one.

    2. Because methods 3 and 4 each have only ONE execution plan, it's easy to review. For method 2, you have to test every combination of parameters and look at each execution plan generated.

    3. The only terrible SQL statement and execution plan come from method 2. It is an illustration of an unexpected code path leading to the execution of something that was neither reviewed nor predicted.

    Looking at the I/O statistics you will observe that:

    1. All methods give the same performance when running in the use case they were designed for.

    2. In the bogus situation, method 2 is the only one that turns into a full scan, while the others do not put the server in danger.
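    The behavior described above can be reproduced in miniature (Python/sqlite3 as a stand-in for T-SQL; the sample data is invented): the Case 3 style is one fixed statement whose OR'd subqueries each guard themselves with the parameters, so the statement text, and therefore the plan, never changes, and a first-name-only call falls through every guard and returns nothing, exactly as noted earlier in the thread:

    ```python
    import sqlite3

    # One fixed statement in the style of Case3GetUser: the text never varies,
    # so the server compiles and caches exactly one plan for it.
    CASE3_SQL = """
    SELECT UserId FROM [User] WHERE
          UserId IN (SELECT UserId FROM [User]
                     WHERE :last = LastName AND :first IS NULL)
       OR UserId IN (SELECT UserId FROM [User]
                     WHERE :last = LastName AND :first = FirstName)
       OR UserId IN (SELECT UserId FROM [User] WHERE :email = Email)
    """

    def case3(conn, last=None, first=None, email=None):
        return conn.execute(CASE3_SQL,
                            {"last": last, "first": first, "email": email}).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE [User] "
                 "(UserId INTEGER, LastName TEXT, FirstName TEXT, Email TEXT)")
    conn.executemany("INSERT INTO [User] VALUES (?, ?, ?, ?)",
                     [(1, "Dingle", "James", "james@example.com"),
                      (2, "Robson", "Brett", "brett@example.com")])
    ```

    Each designed-for use case (last name, last + first, email) returns the right rows from the same statement text; only the unplanned first-name-only call comes back empty.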

  • Brett Robson (5/20/2010)

    I don't see why you guys are promoting the use of dynamic SQL. I'm often asked in job interviews when it's appropriate to use dynamic SQL - the correct answer is never.

    I have never seen (and will never see) dynamic SQL outperform a properly coded & compiled stored procedure. It's impossible - NOTHING gets cached.

    Sorry, Brett, the correct answer is "it depends", like Gianluca and others rightly pointed out.

    If it weren't for company confidentiality, I would love to post the SP that generates dynamic SQL comprising nested CTEs to solve formulae stored as SQL strings in the database, formulae that consume further formulae and/or aggregated raw data and so on. It involves differing aggregate operations (AVG()/SUM()/COUNT(), etc.) at each level, accommodates differing periodicity of inputs, requires pre-aggregation or not, converts signal counts to digital readings or not, draws from cached results or raw data, and more - 'nuff said as to complexity? And then issue a T-SQL Challenge to come up with a solution that completely obviates dynamic SQL... Under SQL 2005 one of our formulae expands to the point where over 256 tables are involved in the query and it gets rejected upon exec... NEVER is a word that is blown out the door by this core process... And the typical dynamic SQL runs in around 0.5 seconds on multiple tens of thousands of rows drawn from a base table comprising 2,104,604,629 rows - GO DYNAMIC SQL!

    If you run Kimberly Tripp's revealing cache query at Plan cache and optimizing for adhoc workloads, you'll see that IF you run dynamic SQL, you CAN see plan reuse. Even the monster described above achieved plan reuse.
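    Brett's "nothing gets cached" claim and this plan-reuse observation hinge on one detail: the plan cache is keyed on the statement text. A parameterized dynamic statement (as sp_executesql submits it) has identical text no matter which values are searched for, so it hits the same cache entry. A quick sketch of that invariant (Python; `build_query` is a hypothetical helper, not part of any library):

    ```python
    def build_query(last_name=None, email=None):
        # Values go into the parameter list, never into the SQL text, so two
        # searches with the same *shape* produce byte-identical statements --
        # the property that lets a plan cache keyed on text reuse one plan.
        clauses, params = [], []
        if last_name is not None:
            clauses.append("LastName = @LastName")
            params.append(("@LastName", last_name))
        if email is not None:
            clauses.append("Email = @Email")
            params.append(("@Email", email))
        sql = "SELECT UserId FROM [User]"
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return sql, params

    sql_a, _ = build_query(last_name="Tripp")
    sql_b, _ = build_query(last_name="Dingle")
    ```

    Different search values yield the same statement text (one cached plan), while a different shape (say, an email search) yields a different text and, rightly, its own plan.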

    Be careful that you look at every problem with an open mind...;-)

