Execution plan reporting different row counts depending on MAXDOP settings

  • Hi,

    I am trying to understand the information returned through query execution plans and came across a strange situation.

    If I run my query with different OPTION (MAXDOP n) hints, the reported actual row counts for the tables I am accessing differ. I am aware that actual and estimated values can differ, due to statistics and so on.

    What I find strange is that with MAXDOP 1 the reported actual row count for a clustered index scan matches the estimate, whereas every MAXDOP setting above 1 reports a different number (much lower in this case) than MAXDOP 1 does.
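
    For reference, this is how I am comparing the plans - the query and names are simplified placeholders, not my actual schema:

    -- Capture the actual plan for each hint and compare the "Actual Number of Rows"
    -- on the clustered index scan operator (SET STATISTICS XML returns the plan).
    SET STATISTICS XML ON;

    SELECT COUNT(*)
    FROM dbo.SomeTable AS t
    WHERE t.SomeColumn > 0
    OPTION (MAXDOP 1);   -- serial plan

    SELECT COUNT(*)
    FROM dbo.SomeTable AS t
    WHERE t.SomeColumn > 0
    OPTION (MAXDOP 8);   -- parallel plan

    SET STATISTICS XML OFF;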

    Is this because a parallel index scan makes better use of the statistics and can "skip" sections of the index, whereas a serial scan gets no such benefit?

    P.S. I am looking at this on SQL Server 2008 Enterprise Edition. Is this possibly behaviour specific to that edition?

    Thanks,

    William

    Regards,

    WilliamD

  • Can you post the execution plan? It's kinda hard to talk about these things in the abstract.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Gail,

    Here are the execution plans.

    I think that I may have understood the reasoning behind the differences now.

    I was talking about the first two tables in the execution plan (from right to left).

    The parallel plan builds a bitmap from ProjectShipment before working on the BusinessObjectShipmentItems table. I then saw that the clustered index scan of the shipment items probes that bitmap to filter the rows it returns. Is this what is causing the skew in the row counts?
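
    Simplified, the join looks something like this (the ShipmentID key column is a placeholder - I have trimmed it down to the shape of the query rather than the real thing). In the parallel plan the scan of BusinessObjectShipmentItems shows a PROBE([Bitmap...]) predicate which, as far as I can tell, filters rows out during the scan itself, which would explain the lower actual row count there:

    -- Shape of the query only; ShipmentID as the join key is a placeholder.
    SELECT COUNT(*)
    FROM dbo.ProjectShipment AS ps
    INNER JOIN dbo.BusinessObjectShipmentItems AS si
        ON si.ShipmentID = ps.ShipmentID
    OPTION (MAXDOP 8);   -- with MAXDOP > 1 a Bitmap operator appears on the ProjectShipment side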

    Another point I am not sure about is how SQL Server decides on a "sane" degree of parallelism. Does it use the table statistics to pick a sensible number? If so, it seems nonsensical to throw all 16 cores at this table. Or is it more a case of "the more the merrier" in order to get the query done in time?
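
    In the meantime I have been sanity-checking the instance-wide settings and capping individual queries, along these lines (as I understand it, the runtime DOP never exceeds 'max degree of parallelism' unless a query-level MAXDOP hint overrides it, and a parallel plan is only considered once the estimated cost passes 'cost threshold for parallelism'):

    -- Both options are "advanced", so they only show up once advanced options are on.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism';
    EXEC sp_configure 'cost threshold for parallelism';

    -- Or cap a single query instead of the whole instance:
    SELECT COUNT(*)
    FROM dbo.BusinessObjectShipmentItems
    OPTION (MAXDOP 4);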

    (Just as another side note, the database resides entirely in RAM.)

    Thanks

    William

    Regards,

    WilliamD
