Dynamic SQL executes faster than the same query in a Stored Procedure

  • I have a view (a join on a bunch of tables). This view is wrapped inside a stored procedure, with parameters passed in to filter the view.

    The interesting thing is that the stored procedure takes longer to run than querying the view directly.

    I gathered statistics on both runs: the stored proc (the slower one) shows 98K logical reads on one table, whereas when the view is run by itself the same table shows only 509 logical reads.

    My understanding is that the sproc (since it is compiled) should run faster than the view.

    Why is this not so?
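    Roughly, the setup looks like this (the object and column names below are made up just to show the pattern, not my actual code):

    CREATE VIEW dbo.vwInvoiceDetail
    AS
    SELECT ci.InvoiceCode, ci.[Type], ci.BillToCode, c.CustomerName, il.Amount
    FROM dbo.CustomerInvoice ci
    JOIN dbo.Customer c ON c.CustomerCode = ci.BillToCode
    JOIN dbo.InvoiceLine il ON il.InvoiceCode = ci.InvoiceCode;
    GO

    -- the proc just selects from the view, filtering on its parameters
    CREATE PROCEDURE dbo.GetInvoices
        @BillToCode varchar(20) = NULL,
        @Type       varchar(10) = NULL
    AS
    SELECT *
    FROM dbo.vwInvoiceDetail
    WHERE BillToCode = ISNULL(@BillToCode, BillToCode)
      AND [Type]     = ISNULL(@Type, [Type]);
    GO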

    Thanks.

  • mymail.default (3/14/2011)


    I have a view (a join on a bunch of tables). This view is wrapped inside a stored procedure, with parameters passed in to filter the view.

    The interesting thing is that the stored procedure takes longer to run than querying the view directly.

    I gathered statistics on both runs: the stored proc (the slower one) shows 98K logical reads on one table, whereas when the view is run by itself the same table shows only 509 logical reads.

    My understanding is that the sproc (since it is compiled) should run faster than the view.

    Why is this not so?

    Please check both execution plans.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • I did, and that's why I posted the IO statistics from the exec plans.

    My question is why the exec plan for the view is better than the one in the stored proc.

    Shouldn't the system generate a better exec plan for the sproc than for the view run on its own?

  • As Paul mentioned, check the execution plans. If you could post the view and proc definitions, that would help as well.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • mymail.default (3/14/2011)


    I did, and that's why I posted the IO statistics from the exec plans.

    My question is why the exec plan for the view is better than the one in the stored proc.

    Shouldn't the system generate a better exec plan for the sproc than for the view run on its own?

    It depends on the complexity and on what the view is doing. Also, are you on Standard or Enterprise edition? I'm wondering whether the view is being expanded on its own or not.
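    If you're not sure which edition you're on, a quick check (just server properties, nothing fancy):

    SELECT SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion;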


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • mymail.default (3/14/2011)


    I did, and that's why I posted the IO statistics from the exec plans.

    The same execution plan against the same tables in the same database can't show such different I/O behavior; please check the execution plans.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • I've attached the plans for both: View and Sproc

    Thanks.

  • Based on the attached plans, you can see the two are different.

    The question, though, is still: how come the view gets a better plan than the stored proc? Shouldn't it be the other way around?

    thanks...

  • I tried to open those files, but couldn't get either to open.

    Based on the description, it sounds like parameter sniffing. For identical parameter values, are the plans different? I assume you're running the query against the view directly within T-SQL and then calling the procedure? If all that's correct, it sounds like a classic case.
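    Something like this is what I mean by comparing like for like; the object names and the parameter value below are placeholders for yours:

    SET STATISTICS IO ON;
    SET STATISTICS XML ON;  -- returns the actual execution plan as XML alongside each result set

    -- the view queried directly
    SELECT * FROM dbo.YourView WHERE BillToCode = 'SOME-CODE';

    -- the procedure called with the same value
    EXEC dbo.YourProc @BillToCode = 'SOME-CODE';

    SET STATISTICS XML OFF;
    SET STATISTICS IO OFF;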

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • The two plans have different predicates in the index scan on TurningPointSP3.dbo.CustomerInvoice.

    View: filters on InvoiceCode, Type, and BillToCode, with an implicit conversion between varchar() and nvarchar().

    This restricts the actual # of rows here to 11.

    The sproc only uses InvoiceCode and Type here, giving you an actual row count of around 32K.

    This is due to the parameter in the proc versus the hard-coded value when you run the view directly, and probably the selectivity of CUST-042319. Classic parameter sniffing problem.

    The key to noticing the difference quickly here is 1) the larger arrows in the proc plan than in the view plan, and 2) the filter where the two plans converge again, which includes counter and billto in the 'bad' plan (the sproc).

    Side note, the whole ISNULL thing really beats up performance. You'll feel some pain there regardless.
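    By "the whole ISNULL thing" I mean a catch-all predicate along these lines (column names taken from your plan, the rest is guesswork at the shape of your query):

    DECLARE @InvoiceCode varchar(20), @Type varchar(10), @BillToCode varchar(20);
    SET @BillToCode = 'CUST-042319';

    -- one cached plan has to cover every combination of NULL / non-NULL inputs,
    -- and wrapping the column in ISNULL kills any chance of a clean seek
    SELECT ci.InvoiceCode, ci.[Type], ci.BillToCode
    FROM dbo.CustomerInvoice ci
    WHERE ci.InvoiceCode = ISNULL(@InvoiceCode, ci.InvoiceCode)
      AND ci.[Type]      = ISNULL(@Type, ci.[Type])
      AND ci.BillToCode  = ISNULL(@BillToCode, ci.BillToCode);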

    Check out this blog by Gail Shaw on catchall queries:

    http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/

    Regarding parameter sniffing, check out this short series also by Gail Shaw:

    http://sqlinthewild.co.za/index.php/2007/11/27/parameter-sniffing/

    http://sqlinthewild.co.za/index.php/2008/02/25/parameter-sniffing-pt-2/


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • I was giving some time for the experts to chime in since I do not have a definitive answer for you...but since you reposted I'll jump in...

    Your WHERE clause tolerates NULLs in such a way that you can get seriously different execution plans depending on your inputs. This sounds like parameter sniffing might be at play here. The first time you execute the proc after it is compiled, SQL Server generates and caches a query plan, and that plan will likely be re-used for different inputs. In most cases this is a good thing, and it is why procs generally provide better performance than ad hoc SQL. That said, if the first set of inputs generated a query plan that is not suitable for the next set, you can see some poor performance.

    In a previous life, on SQL 2000, I solved an issue like this by splitting the query into multiple queries, all with the same SELECT column list but with each permutation of the WHERE clause represented as a separate query, selected using a very large IF...ELSE structure.

    SQL Server has gotten better at dealing with this type of WHERE clause since SQL 2000, but it can still be fooled. Try adding WITH RECOMPILE to the stored proc to see if you get better performance over the course of multiple calls with different sets of parameters. If not, or if that option is not attractive, try using IF...ELSE to control the execution flow and split your query into multiple queries so your WHERE clauses do not need to contain ISNULL. A rough sketch of both options follows.
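    (Proc and object names here are assumed for illustration, not your actual code, and only one parameter is shown.)

    -- Option 1: recompile on every call so each parameter set gets a fresh plan
    CREATE PROCEDURE dbo.GetInvoices_Recompile
        @BillToCode varchar(20) = NULL
    WITH RECOMPILE
    AS
    SELECT * FROM dbo.vwInvoiceDetail
    WHERE BillToCode = ISNULL(@BillToCode, BillToCode);
    GO

    -- Option 2: split the permutations so each branch has a simple, sargable WHERE
    CREATE PROCEDURE dbo.GetInvoices_Split
        @BillToCode varchar(20) = NULL
    AS
    IF @BillToCode IS NULL
        SELECT * FROM dbo.vwInvoiceDetail;
    ELSE
        SELECT * FROM dbo.vwInvoiceDetail WHERE BillToCode = @BillToCode;
    GO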

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • To add to OPC's recommendations, a mediocre workaround that requires no dynamic SQL and no duplicated code or logic is the OPTIMIZE FOR query hint. Be warned, it's fraught with peril, but it can get the job done for cases like this.
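    Roughly like this (the proc, parameter, and value are assumed for illustration; OPTIMIZE FOR UNKNOWN needs SQL 2008 or later):

    CREATE PROCEDURE dbo.GetInvoices_OptimizeFor
        @BillToCode varchar(20) = NULL
    AS
    SELECT *
    FROM dbo.vwInvoiceDetail
    WHERE BillToCode = ISNULL(@BillToCode, BillToCode)
    OPTION (OPTIMIZE FOR (@BillToCode UNKNOWN));
    -- or, on 2005: OPTION (OPTIMIZE FOR (@BillToCode = 'CUST-042319'))
    GO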


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA
