Best way to get execution plans?

  • Hello,

    I have a database used for reports that has a bunch of scalar functions on it that generate assorted record counts etc. We know scalar functions are inefficient, but they suit our purposes, as speed is not the most important thing on this particular server. Scalar functions allow for easy code re-use, so we can guarantee that whatever report calls a function will get the same counts/results without faffing about too much.

    That being said, I think that when the function is called from within a stored procedure it uses a different execution plan from the one used when we call the function directly from SSMS. By way of an example, if I call funGWPLexp (gross written premium with legal expenses - it's an insurance report) it takes 251 seconds from within a humongous stored procedure (at least Activity Monitor in SSMS Express 2008 reckons the average is 251000ms) but takes less than 2 seconds to execute when called manually.

    I can get the execution plan from the sproc easily because it comes up as a recent expensive query but that's not the case when it only takes 2 seconds to run.

    What I'm hoping to do is find out what execution plan runs when I call the function from SSMS directly, and manually hint that index in the function - unless anyone has a less drastic suggestion...

    Is there an easy way to grab the execution plan? If I click 'display actual execution plan' I get the plan for calling the function and not the plan the function itself is using:

    SELECT 0% <--- Compute Scalar 8% <--- Constant Scan 92%

    I'm sure there's got to be a simple way to do it - do I need to turn to profiler for this?

    Thanks

    Ben

    ^ Thats me!

    ----------------------------------------
    01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
    ----------------------------------------

  • Profiler.

    And index hints should be your last resort, not your first idea.
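    (Not from the post above - a rough, untested sketch of scripting that trace server-side rather than clicking through the Profiler GUI, where the equivalent is the "Showplan XML" event under the Performance category. The file path is made up; the SQL Server service account needs write access to it, and .trc is appended automatically. Capturing every plan on the instance is heavy, so keep the trace short or add a filter with sp_trace_setfilter.)

    DECLARE @traceid int, @maxsize bigint, @on bit,
            @ev int, @textdata int, @binarydata int, @spid int, @starttime int;
    SET @maxsize = 50;   -- MB per rollover file
    SET @on = 1;

    -- Look the event/column ids up by name instead of hard-coding the numbers
    SELECT @ev         = trace_event_id  FROM sys.trace_events  WHERE name = N'Showplan XML';
    SELECT @textdata   = trace_column_id FROM sys.trace_columns WHERE name = N'TextData';
    SELECT @binarydata = trace_column_id FROM sys.trace_columns WHERE name = N'BinaryData';
    SELECT @spid       = trace_column_id FROM sys.trace_columns WHERE name = N'SPID';
    SELECT @starttime  = trace_column_id FROM sys.trace_columns WHERE name = N'StartTime';

    EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Temp\udf_showplan', @maxsize;   -- 2 = file rollover

    EXEC sp_trace_setevent @traceid, @ev, @textdata,   @on;
    EXEC sp_trace_setevent @traceid, @ev, @binarydata, @on;
    EXEC sp_trace_setevent @traceid, @ev, @spid,       @on;
    EXEC sp_trace_setevent @traceid, @ev, @starttime,  @on;

    EXEC sp_trace_setstatus @traceid, 1;   -- start the trace

    -- Run the slow sproc and the fast ad-hoc call, then stop and close the trace:
    -- EXEC sp_trace_setstatus @traceid, 0; EXEC sp_trace_setstatus @traceid, 2;
    -- and read the plans back (in my experience the XML shows up in TextData):
    -- SELECT StartTime, SPID, CAST(TextData AS nvarchar(max)) AS plan_xml
    -- FROM fn_trace_gettable(N'C:\Temp\udf_showplan.trc', DEFAULT);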

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Lol thanks. Once I've found an execution plan that works nicely I'll do some research on convincing the sprocs to use the right one and will probably end up asking some more questions 🙂

    Cheers

    Ben

    ^ Thats me!

    ----------------------------------------
    01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
    ----------------------------------------

  • BenWard (12/17/2012)


    Hello,

    I have a database used for reports that has a bunch of scalar functions on it that generate assorted record counts etc. We know scalar functions are inefficient, but they suit our purposes, as speed is not the most important thing on this particular server. Scalar functions allow for easy code re-use, so we can guarantee that whatever report calls a function will get the same counts/results without faffing about too much.

    Not trying to beat you up, but that is still no excuse to use a scalar function. Even if not on this database, since you said performance is not important (how rare is that :-D), this technique for using an inline table-valued function instead of a logically equivalent scalar-valued function may come in handy for you down the line. All it requires is a little shift in thinking about how you write your functions and the queries that use them (a rough sketch of the idea follows the link):

    Inline Scalar Functions by Itzik Ben-Gan
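    (Purely for illustration, not lifted from the article - a bare-bones sketch of the shift, with made-up table, column and function names. The scalar version computes one value per call; the inline TVF returns the same value as a single-row table that the optimizer can expand into the calling query.)

    -- Hypothetical scalar version: opaque to the optimizer, invoked once per call
    CREATE FUNCTION dbo.funSalesVolume_scalar (@d date, @cProduct char(3), @agent varchar(20))
    RETURNS float
    AS
    BEGIN
        RETURN ( SELECT SUM(s.Volume)
                 FROM dbo.tblSales AS s
                 WHERE s.TransactionDate = @d AND s.Product = @cProduct AND s.Agent = @agent );
    END;
    GO

    -- Logically equivalent inline table-valued function: the same query, but the
    -- optimizer can fold it into whatever statement calls it
    CREATE FUNCTION dbo.funSalesVolume_itvf (@d date, @cProduct char(3), @agent varchar(20))
    RETURNS TABLE
    AS
    RETURN ( SELECT SUM(s.Volume) AS SalesVolume
             FROM dbo.tblSales AS s
             WHERE s.TransactionDate = @d AND s.Product = @cProduct AND s.Agent = @agent );
    GO

    -- Called as a scalar subquery (or with CROSS APPLY against a whole table of inputs)
    SELECT (SELECT SalesVolume FROM dbo.funSalesVolume_itvf('20121201', 'MOT', 'Dave'));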

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

    Thanks for the reference - it'll be useful in the future.

    I don't think scalar vs ITVF is really a big issue for me, though, as I'm not querying the scalar functions inline with tables.

    It's more along the lines of:

    ....

    DECLARE @fSomeSensiblyNameVariable1 AS FLOAT
    DECLARE @fSomeSensiblyNameVariable2 AS FLOAT
    DECLARE @fSomeSensiblyNameVariable3 AS FLOAT
    DECLARE @fSomeSensiblyNameVariable4 AS FLOAT
    DECLARE @fSomeSensiblyNameVariableTotal AS FLOAT

    SELECT @fSomeSensiblyNameVariable1 = dbo.funSalesVolume(@dTransactionDate, @cProduct, 'Dave')
    SELECT @fSomeSensiblyNameVariable2 = dbo.funSalesVolume(@dTransactionDate, @cProduct, 'Steve')
    SELECT @fSomeSensiblyNameVariable3 = dbo.funSalesVolume(@dTransactionDate, @cProduct, 'Nigel')
    SELECT @fSomeSensiblyNameVariable4 = dbo.funSalesVolume(@dTransactionDate, @cProduct, 'Sammie')

    SELECT @fSomeSensiblyNameVariableTotal = @fSomeSensiblyNameVariable1 + @fSomeSensiblyNameVariable2 + @fSomeSensiblyNameVariable3 + @fSomeSensiblyNameVariable4

    INSERT INTO tblResults
    SELECT @fSomeSensiblyNameVariable1, @fSomeSensiblyNameVariable2, @fSomeSensiblyNameVariable3, @fSomeSensiblyNameVariable4, @fSomeSensiblyNameVariableTotal

    I know this looks convoluted, but when one report is a PDF 50 pages long with an average of 50 values per page, you're looking at 2000 or so distinct metrics (provided by some 30 scalar functions), so this logical code structure allows for uncomplicated maintenance. Although I admit 14,000 lines is a lot for any sproc! The metrics that need to go in this report seem to change quarterly, so the more clearly segmented and structured the code, the better.

    The PDF is then generated by sticking headings down the left of the page and then looping through the tblResults table.

    As with anything like this it's down to developer preference, and I'm sure if I presented my requirements to 10 of my peers I would get 10 different solutions, but this appeals to my ploddy, methodical brain.

    On a database with several million rows in each table queried, this report usually completes (all 2000 function calls x 13 months of data per report) in about 6 minutes, so I know it can work efficiently.

    The next step, now that I have all the execution plans stored from my profiler trace, is to find out how to make the stored procedure use the right plan!

    Ben

    ^ Thats me!

    ----------------------------------------
    01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
    ----------------------------------------

    No worries. You're doing fine with the use of scalar UDFs. The way you are using them is actually the only way I support their use. I just do not see them implemented that way all that often, so I assumed the worst - sorry about that. I usually see them being abused in SELECT column lists, JOIN predicates and WHERE clauses wrapping columns. Your team seems to have stayed away from that mess but has really taken the ball and run away with it, albeit in a different direction. I strongly suspect you could best that 6m mark if you went set-based (roughly sketched below), but I hear what you're saying about maintainability.
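    (Purely as an illustration of the set-based shape - not code from the thread, and the tblResults column names and tblSales table are invented - the four per-agent UDF calls in the snippet further up could collapse into a single pass over the data:)

    -- One scan of the sales table produces all four per-agent figures plus the total,
    -- replacing four separate scalar UDF calls; assumes the same @dTransactionDate
    -- and @cProduct variables as the snippet earlier in the thread.
    INSERT INTO tblResults (Dave, Steve, Nigel, Sammie, Total)   -- column names invented
    SELECT SUM(CASE WHEN s.Agent = 'Dave'   THEN s.Volume ELSE 0 END),
           SUM(CASE WHEN s.Agent = 'Steve'  THEN s.Volume ELSE 0 END),
           SUM(CASE WHEN s.Agent = 'Nigel'  THEN s.Volume ELSE 0 END),
           SUM(CASE WHEN s.Agent = 'Sammie' THEN s.Volume ELSE 0 END),
           SUM(s.Volume)
    FROM dbo.tblSales AS s
    WHERE s.TransactionDate = @dTransactionDate
      AND s.Product = @cProduct
      AND s.Agent IN ('Dave', 'Steve', 'Nigel', 'Sammie');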

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • ah cool. Good to know I'm not abusing the udf 🙂

    Although I will admit to using them in one or two INSERT INTO ... SELECT constructs, with no more than a couple of hundred rows being bashed at a time.

    I was brought up on QBASIC and ASM with more than a little C++ thrown into the mix (my dad gave me my first programming lesson when I was about 5 years old lol) so for every line of code I write in any language I make it my business to find out what the cpu is actually doing when I give it a command!

    Now I just need to get my head around OPTION (USE PLAN '...')
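    (For anyone else researching it, the rough shape of that hint - the plan XML is whatever gets captured from the trace or the plan cache, pasted in as one big literal; the parameter list here is invented:)

    -- Minimal sketch of OPTION (USE PLAN): the N'...' literal is the full <ShowPlanXML>
    -- document captured earlier, pasted in verbatim (elided here). The hint goes on the
    -- individual statement whose plan you want to pin, not on the sproc as a whole.
    SELECT dbo.funGWPLexp(@dTransactionDate, @cProduct)   -- hypothetical parameter list
    OPTION (USE PLAN N'<ShowPlanXML ...> ... </ShowPlanXML>');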

    Ben

    ^ Thats me!

    ----------------------------------------
    01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
    ----------------------------------------

  • BenWard (12/21/2012)


    ah cool. Good to know I'm not abusing the udf 🙂

    Well... 2000 function calls x 13 months of data per report, while not the typical abuse pattern, would certainly qualify in some circles. I am not saying whether I am in that circle or not :Whistling: Like I said, if 6 minutes satisfies the biz req and the SQL Server otherwise meets its SLA for all other customers, then the rest are inane details.

    I would be interested to hear how the USE PLAN research goes. I promise I will not beat you up about UDFs any more than I may already have.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

    Hi Ben - I think you're after the following:

    SELECT [cx].[refcounts]
         , [cx].[usecounts]
         , [cx].[objtype]
         , [st].[dbid]
         , [st].[objectid]
         , [st].[text]
         , [qp].[query_plan]
    FROM sys.dm_exec_cached_plans cx
    CROSS APPLY sys.dm_exec_sql_text( cx.plan_handle ) st
    CROSS APPLY sys.dm_exec_query_plan( cx.plan_handle ) qp

    The statement uses a handful of DMVs to return the XML for every execution plan currently sitting in the plan cache.
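    (A possible refinement - mine rather than the poster's: assuming the function lives in the current database, the same DMVs can be narrowed down to the one object, e.g. funGWPLexp from the original post:)

    SELECT [cx].[usecounts]
         , [cx].[objtype]
         , [st].[text]
         , [qp].[query_plan]      -- click the XML in SSMS to open the graphical plan
    FROM sys.dm_exec_cached_plans cx
    CROSS APPLY sys.dm_exec_sql_text( cx.plan_handle ) st
    CROSS APPLY sys.dm_exec_query_plan( cx.plan_handle ) qp
    WHERE [st].[dbid] = DB_ID()
      AND [st].[objectid] = OBJECT_ID(N'dbo.funGWPLexp');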

    --------------------------------------------

    Laughing in the face of contention...

  • Thanks 🙂

    I'll be working on this stuff a bit more over the next couple of weeks, once I've got my current development version into live.

    Ben

    ^ Thats me!

    ----------------------------------------
    01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
    ----------------------------------------
