Query Cost Relative to the Batch vs. statistics io vs. Extended Events

  • I'm working on a performance issue, and when I compare the original logic with my proposed new logic, everything points to the new version being faster and more efficient. Everything except the "query cost relative to the batch". I have the following (obfuscated throughout):

    1) Per statistics io -

    ORIGINAL:

    Table '#CoXXXXXXX___________________________________________________________________________________________________00000017C892'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TABLE_NC'. Scan count 1, logical reads 392217, physical reads 217, read-ahead reads 261226, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TABLE_NCL'. Scan count 1, logical reads 370982, physical reads 754, read-ahead reads 247381, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TABLE_LN'. Scan count 1, logical reads 9168, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    NEW:

    Table 'TABLE_NCL'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TABLE_NC'. Scan count 3, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TABLE_NCL'. Scan count 0, logical reads 4, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TABLE_LN'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#TEMP_NN___________________________________________________________________________________________________________00000017C8A9'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
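
    (For reference, the read counts above come from the usual pattern below; the procedure call is just a placeholder for the ORIGINAL or NEW logic being compared.)

    SET STATISTICS IO ON;

    EXEC dbo.ProcedureUnderTest;   -- placeholder for the code being measured

    SET STATISTICS IO OFF;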

    2) Per a rudimentary DATEDIFF in milliseconds in the SSMS window -

    ORIGINAL: Average milliseconds = 9535
    NEW: Average milliseconds = 272
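
    (The millisecond numbers come from a rudimentary harness along these lines; again, the procedure call is a placeholder.)

    DECLARE @Start datetime2(7) = SYSDATETIME();

    EXEC dbo.ProcedureUnderTest;   -- placeholder for the code being measured

    SELECT DATEDIFF(MILLISECOND, @Start, SYSDATETIME()) AS elapsed_ms;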

    3) Per Extended Events -

    ORIGINAL:

    Average cpu_time = 8039000
    Average logical_reads = 772367
    Average physical_reads = 465899
    Average duration = 8854146

    NEW:

    Average cpu_time = 27500
    Average logical_reads = 6423
    Average physical_reads = 1
    Average duration = 59162
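
    (The Extended Events numbers were captured with a session roughly like the sketch below; the session name, file target, and database filter are placeholders. Note that cpu_time and duration are reported in microseconds by these events.)

    CREATE EVENT SESSION [QueryCompare] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed
        (ACTION (sqlserver.sql_text)
         WHERE (sqlserver.database_name = N'MyDatabase')),   -- placeholder filter
    ADD EVENT sqlserver.rpc_completed
        (ACTION (sqlserver.sql_text)
         WHERE (sqlserver.database_name = N'MyDatabase'))
    ADD TARGET package0.event_file (SET filename = N'QueryCompare.xel');
    GO
    ALTER EVENT SESSION [QueryCompare] ON SERVER STATE = START;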

    4) Per execution plan query cost relative to the batch (attached and extensively obfuscated) -

    ORIGINAL: 12%
    NEW: 87%

    Should I be taking the "query cost relative to the batch" with a grain of salt? The statement getting 87% in the new version isolates records into a temp table (4 columns: three INT and one CHAR(3), with fewer than 100 rows inserted), but it actually screams. If there is a discrepancy between duration and query cost like the one above, should one rely on duration?

    The reason I am looking at this process is that it occasionally becomes a deadlock victim. It is part of a job that runs regularly throughout the day, so if it is chosen as a deadlock victim it will catch up when it runs the next time.

    Thank you!

  • In short, yes, ignore "query cost relative to batch".  There's no question that the new code is vastly more efficient.

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • +1000  Second that!

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thank you Jeff and Scott.

    That said, is there ever a case or reason why one would put weight on the "cost relative to the batch"?  I know that's awfully broad, and likely a case of "it depends", but I'm just trying to understand if/when there is any value in it.

  • SoCal_DBD - Thursday, February 23, 2017 1:39 PM

    Thank you Jeff and Scott.

    That said, is there ever a case or reason why one would put weight on the "cost relative to the batch"?  I know that's awfully broad, and likely a case of "it depends", but I'm just trying to understand if/when there is any value in it.

    It's normally a pretty good hint of where you need to look in large stored procedures.  People forget, though, that it's an estimate based only on what SQL Server can see.  If you, for example, race a cross join of sys.all_columns to generate a million integers against an rCTE (Recursive CTE) designed to do the same, the rCTE will show a percent of batch very close to zero and the cross join will be very close to 100%.  In reality, the near opposite is true for duration, CPU usage, and resource usage.  The reason it happens is that SQL Server can really only estimate the first iteration of the rCTE.
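
    Here's a quick sketch of that race if you want to try it yourself (the million-row target and the MAXRECURSION 0 hint are just reasonable choices for the demo).  Run both statements in one batch with the actual execution plan turned on and compare the "cost relative to the batch" against what STATISTICS TIME reports.

    -- Statement 1: cross join of sys.all_columns (the plan makes this look expensive)
    SELECT TOP (1000000)
           n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns AS ac1
    CROSS JOIN sys.all_columns AS ac2;

    -- Statement 2: recursive CTE (the plan makes this look nearly free, but it runs far slower)
    WITH rCTE AS
    (
        SELECT n = 1
        UNION ALL
        SELECT n + 1 FROM rCTE WHERE n < 1000000
    )
    SELECT n
    FROM rCTE
    OPTION (MAXRECURSION 0);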

    That's the key... consider percent of batch to be a hint.  It's not an authority, any more than the missing index hints are.  In order to make the final decision, you need to do exactly what you did: run a measured test, making sure the test itself doesn't change the characteristics of what you're measuring.  Like a much wiser man said, "One good test is worth more than a thousand expert opinions", and then realize that the optimizer simply forms an opinion. 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • SoCal_DBD - Thursday, February 23, 2017 1:39 PM

    Thank you Jeff and Scott.

    That said, is there ever a case or reason why one would put weight on the "cost relative to the batch"?  I know that's awfully broad, and likely a case of "it depends", but I'm just trying to understand if/when there is any value in it.

    The cost relative to the batch is just comparing the cost estimates of each statement within the batch. That's all. It's not itself a measure as much as it's a comparison of other measures. As a comparison, it's useful. As a number, it's meaningless without then comparing the other numbers.

    The most important thing to remember when dealing with execution plans is that the cost estimates, whether from an actual plan or an estimated plan, are always estimated values. They are never real values. While they internally reflect mathematical models that simulate I/O and CPU, they are not measures of I/O and CPU. All that said, they are the numbers we get, so we're going to use them.
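
    If you want to see the raw numbers the percentage is derived from, each statement's estimated subtree cost is stored in the plan XML as StatementSubTreeCost. A sketch of pulling it from the plan cache (the LIKE filter is a placeholder for something that identifies your batch):

    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT
        x.stmt.value('@StatementText', 'nvarchar(max)') AS statement_text,
        x.stmt.value('@StatementSubTreeCost', 'float')  AS estimated_subtree_cost
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY qp.query_plan.nodes('//StmtSimple') AS x(stmt)
    WHERE st.text LIKE N'%identifying text%';   -- placeholder

    The "cost relative to the batch" is simply each statement's subtree cost expressed as a share of the total across the batch, which is why a cheap-looking estimate can dwarf everything else without telling you anything about actual runtime.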

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
