Query Analyzer 2000/Interpreting Plan

  • Hi!

    Query Analyzer of SQL Server 7 gave me quite "clear" results when using the "display execution plan" feature.

    With Query Analyzer of SQL Server 2000 this seems to have changed (at least for my limited understanding), especially considering the "subtree cost" and "cost %" values of the plan.

    Can anyone explain why "cost %" is sometimes shown as a multiple of 100% and why "subtree cost" always seems to be the same value scaled down to a multiple of 1?

    My understanding of the plans displayed by Query Analyzer of SQL Server 7 was that "subtree cost" somehow took CPU and disk time into consideration, which just does not seem to be the case now.

    Anyway, my understanding of the displayed values is quite vague right now (I could work with the values SQL Server 7 provided quite well), so could somebody shed some light on this?

    Thanks in advance,

    C.

  • In working with MS Support and tuning groups, I've found that they don't really look at the % costs, because those figures are fluid and can change. What they look at instead is the number of reads per line, and how to reduce it. The settings

    SET STATISTICS IO ON
    SET STATISTICS PROFILE ON

    are useful for looking at reads and rows returned. Be sure that you are indexing properly to keep these as low as possible.
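    As a sketch of how these session settings are typically used, you can wrap them around the query under test (the table and column names below are hypothetical placeholders, not from this thread):

```sql
-- Hypothetical example: dbo.Orders and its columns are placeholder names.
SET STATISTICS IO ON;
SET STATISTICS PROFILE ON;

SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 'ALFKI';

SET STATISTICS IO OFF;
SET STATISTICS PROFILE OFF;
```

    With STATISTICS IO on, the Messages tab reports logical and physical reads per table; STATISTICS PROFILE adds actual row counts per plan operator, which you can compare against the optimizer's estimates.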

    Steve Jones

    sjones@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/sjones

  • You really only look at the execution plan for operations that can hurt performance, such as table scans. The other information is more statistical.

  • Subtree cost is a figure mainly used by the query engine to decide when to evict ad hoc SQL plans from the cache. It is an indication of the cost of compiling and running the query.

    As the others have said, this should only be used as one input; a more important input is the amount of data read. You need to understand your data, because a query might be very cheap and do only a few reads when you specify one parameter, while changing that parameter might result in loads and loads of reads, e.g. an employee/manager hierarchy. One manager might have only 2 employees while another has 300, so you need to run the query for both extremes. This is especially the case with subqueries, where they are evaluated on a row-by-row basis.
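    Simon's "both extremes" test can be sketched like this (the dbo.Employees table and the specific ManagerID values are hypothetical, chosen only to illustrate a skewed parameter):

```sql
-- Hypothetical schema: each employee row stores the ID of its manager.
SET STATISTICS IO ON;

-- Extreme 1: a manager with only a couple of direct reports.
SELECT EmployeeID, Name
FROM dbo.Employees
WHERE ManagerID = 17;

-- Extreme 2: a manager with hundreds of direct reports.
SELECT EmployeeID, Name
FROM dbo.Employees
WHERE ManagerID = 42;

SET STATISTICS IO OFF;
```

    Comparing the reads (and the actual plans) for the two runs shows whether a plan that is cheap for the small manager degrades badly for the large one.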

    The final thing is to test against fully populated data; testing on a dev box with 1,000 records is almost worthless when the system gets to 1,000,000 records.

    A bit of a tangent but what the hell

    Simon Sabin

    Co-author of SQL Server 2000 XML Distilled

    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088

    SQL Server MVP

    http://sqlblogcasts.com/blogs/simons
