• Jonathan AC Roberts (5/23/2010)


    James Dingle,

    In fact your query performs nearly twice as badly as the original non-dynamic SQL that would have been in the initial stored procedure.

    [...]

    See SQL and execution plan below.

    Execution plans:

    Query 1 uses 35% of total cost and query 2 uses 65% of total cost.

    Hi Jon,

    Quick remarks:

    - There is a clustered index scan on the ID, so obviously it won't work that way. The snippet of code I wrote assumed that ProductID is the clustered unique index (usually the PK) of the table.

    - You have to add indexes on the columns used for filtering. In the execution plan above there is obviously no index on "Name" and so on, so it is not possible to see how and why the approach works (a sketch of the indexes I assumed follows after this list).

    - Don't put too much trust in percentages when deciding whether a query is good or bad; trust CPU time and the number of reads. I've never known exactly what those percentages represent; in my experience they help focus attention on bottlenecks, but what really matters is bottom-line CPU and IO (see the measurement snippet after this list).

    - My point was not really about performance, but more about what dynamic queries built from concatenated SQL give you in a real environment, over the lifetime of a project. I used the approach once and, with that experience and other techniques, moved away from dynamic SQL. As usual in the database world, "it depends", so if some people feel comfortable with this approach, let them use it. My point was to show that it is possible to rely on scalar variable resolution to make parts of the query apply or not, and thus get dynamic behaviour without putting any code inside quotes (a sketch of that pattern is below).
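
    For the first two points, here is a minimal sketch of the indexes my snippet assumed (the table and column names are only guesses based on this thread; adapt them to the real schema):

        -- Assumed: a dbo.Products table where ProductID is the clustered unique key
        -- and Name is one of the optional filter columns.
        CREATE UNIQUE CLUSTERED INDEX IX_Products_ProductID
            ON dbo.Products (ProductID);

        CREATE NONCLUSTERED INDEX IX_Products_Name
            ON dbo.Products (Name);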
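
    To compare the two queries on CPU time and reads rather than on plan-cost percentages, the standard session settings are enough:

        SET STATISTICS IO ON;    -- reports logical/physical reads per table
        SET STATISTICS TIME ON;  -- reports CPU time and elapsed time

        -- run query 1 and query 2 here, then compare the figures in the Messages tab

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;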
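
    And for the last point, a sketch of the pattern I mean (illustrative procedure, table and parameter names, not the exact snippet from my earlier post): each optional filter is written as (@param IS NULL OR column = @param), so a filter is effectively switched off when its parameter is NULL, and no SQL ever goes inside quotes. Whether to add the OPTION (RECOMPILE) hint depends on how sensitive the plan is to the parameter values supplied.

        -- Optional filters resolved through scalar variables; no string concatenation.
        CREATE PROCEDURE dbo.SearchProducts
            @ProductID int           = NULL,
            @Name      nvarchar(100) = NULL
        AS
        BEGIN
            SELECT p.ProductID, p.Name
            FROM dbo.Products AS p
            WHERE (@ProductID IS NULL OR p.ProductID = @ProductID)
              AND (@Name      IS NULL OR p.Name      = @Name)
            OPTION (RECOMPILE); -- optional: build a plan for the values actually passed
        END;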