• SoCal_DBD - Thursday, February 23, 2017 1:39 PM

    Thank you Jeff and Scott.

    That said, is there ever any case or reason why one would put weight on the "cost relative to the batch"?  I know that's awfully broad, and likely a case of "it depends", but I'm just trying to understand if/when there is any value in it.

    It's normally a pretty good hint of where you need to look in large stored procedures.  People forget, though, that it's an estimate based only on what SQL Server can see.  For example, if you race a cross join of sys.all_columns to generate a million integers against an rCTE (Recursive CTE) designed to do the same, the rCTE will show a percent of batch very close to zero and the cross join will show very close to 100%.  In reality, the near opposite is true for duration, CPU usage, and resource usage.  The reason it happens is that SQL Server can really only estimate the first iteration of the rCTE.
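
    Here's a minimal sketch of that race, assuming nothing more than a connection to any reasonably current SQL Server instance (row counts and timings will vary by machine).  Run it with the actual execution plan turned on and compare the "cost relative to the batch" percentages against the SET STATISTICS TIME output.

        SET STATISTICS TIME ON;

        -- Method 1: cross join of sys.all_columns as the row source.
        -- The plan typically blames this for nearly 100% of the batch cost.
        SELECT TOP (1000000)
               N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
          INTO #CrossJoin
          FROM sys.all_columns ac1
         CROSS JOIN sys.all_columns ac2;

        -- Method 2: recursive CTE doing the same job.
        -- The plan typically shows this as ~0% of the batch cost because only
        -- the first iteration is costed, yet it's far slower in duration and CPU.
        WITH rCTE AS
        (
            SELECT N = 1
             UNION ALL
            SELECT N + 1 FROM rCTE WHERE N < 1000000
        )
        SELECT N
          INTO #Recursive
          FROM rCTE
        OPTION (MAXRECURSION 0);

        SET STATISTICS TIME OFF;

        DROP TABLE #CrossJoin, #Recursive;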

    That's the key... consider percent of batch to be a hint.  It's no more of an authority than the missing index hints are.  To make the final decision, you need to do exactly what you did: run a measured test, making sure the test itself doesn't change the characteristics of what's being measured.  As a much wiser man said, "One good test is worth more than a thousand expert opinions", and then realize that the optimizer simply forms an opinion. 😉
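
    For what it's worth, here's a minimal sketch of one way to take such a measurement (the @BitBucket variable name is just illustrative).  Assigning the result to a variable keeps the cost of returning a million rows to the client out of the measurement, which is one of the ways a test can accidentally change the characteristics of what's being measured.

        DECLARE @BitBucket INT,
                @StartTime DATETIME2(7) = SYSDATETIME();

        -- Throw the rows into a variable instead of returning them to the client.
        SELECT TOP (1000000)
               @BitBucket = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
          FROM sys.all_columns ac1
         CROSS JOIN sys.all_columns ac2;

        SELECT DurationMS = DATEDIFF(MILLISECOND, @StartTime, SYSDATETIME());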

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)