Execution Plan Question

  • I am troubleshooting a query, and I know exactly which table is giving me the problem. This is a one-to-many LEFT OUTER JOIN: the "one" side is in the FROM clause and the "many" side is the left-joined table. The query worked fine in SQL 2008 but now is having fits trying to pull from the "many" table. I've had it running 16+ hours in one non-prod environment with the same spec hardware / software as our new production servers have.

    When I comment out both the column and the join to the "many" table, the query works just fine. When I don't use the column but still join to the table, the table shows up in the Execution Plan with a Key Lookup whose Estimated Number of Executions is 287.0300006. That looks very high to me.

    The Estimated Number of Rows is 1 and the Actual Number of Rows is 153601. So far the only post / blog / thread I've found discussing Executions is one from Gail that explains the situation I'm seeing with a different object. I don't see any troubleshooting advice there.

    I know if I can cut down the Number of Executions, it will get the Number of Rows values closer together, but I'm unsure if the reverse is true. If I concentrate on solving the row count issue will it cut down the executions? Any advice or links on how to fix a high Number of Executions?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Whenever the difference between estimated rows and actual rows is off by a lot (an order of magnitude or more), it usually means the statistics are out of date enough to adversely affect the query.

    I would start by updating stats for each table consumed in the query, and then look at the execution plan again. It should be different, and the query faster, in my experience.
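    A minimal sketch of that, with placeholder table names standing in for the real tables in the query:

    ```sql
    -- Placeholder table names; substitute the tables the query actually touches.
    UPDATE STATISTICS dbo.OrderHeader WITH FULLSCAN;
    UPDATE STATISTICS dbo.OrderDetailHistory WITH FULLSCAN;

    -- Or refresh every out-of-date statistic in the database in one go:
    EXEC sp_updatestats;
    ```

    FULLSCAN costs more than the default sample but gives the optimizer the most accurate histogram for the comparison.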

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statements to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • It sounds like one or more of the join columns are missing from whatever index is chosen, which would go some way to explaining the poor row count estimate and the choice of a non-covering index. The estimated number of key lookups is quite modest. Check your join/filter columns for the "many" table against the seek and residual predicates of the operator used to read the index (NOT the key lookup).

    I'd also create a new covering index tuned specifically for this query, tweaking as necessary, to establish a baseline for the query, i.e. the best that it can get. Avoid propagating the new index to prod if you can: check the new index against what you already have on the table to see if one of them can be changed to functionally match it without drastically changing existing functionality (if you're lucky, that could be an INCLUDE or a trailing-edge key column).
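    A hypothetical shape for that covering index (every name here is a placeholder; the real key and INCLUDE columns depend on the join predicate and select list):

    ```sql
    -- Join column(s) as the key, the single output column as an INCLUDE,
    -- so the Key Lookup disappears. All names are placeholders.
    CREATE NONCLUSTERED INDEX IX_ManyTable_JoinColumn
        ON dbo.ManyTable (JoinColumn)
        INCLUDE (OutputColumn);
    ```

    If an existing index already leads on JoinColumn, adding OutputColumn to its INCLUDE list may achieve the same result without a new index.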

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Everything Lowell says.

    I'm assuming that you're getting a LOOPS join, hence the estimated number of rows is 1? If so, that estimate isn't necessarily off. You need to combine that estimate with the estimated number of executions, then compare the estimated number of executions with the actual number of executions. Assuming those are similar, there's a good chance the optimizer is making good choices. If, on the other hand, the actual rows you list, 153601, is also the actual number of executions, which is way off from the 287 estimated, then you possibly have a severe stats problem (see what Lowell said).

    The concern here is not necessarily the number of executions, but the fact that you have to do a key lookup. That means your non-clustered index isn't covering, or the clustered index isn't adequate. The trick would be to either make the non-clustered index covering, which will eliminate the key lookup, or figure out what you need to do to use the clustered index (not seeing the structure & queries involved, I don't know if you have a good cluster or not).

    Regarding the stats, also remember that there are things that prevent good statistics use, like functions on columns, table variables, and multi-statement, table-valued, user-defined functions. All these can affect the row estimates. You can also get bad row estimates because of unenforced foreign keys (an FK with NOCHECK option). These can also negatively impact performance because of plan choices. I suspect it's not the problem here, but it is stuff to consider.
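    As one example of the function-on-column problem from that list, a predicate can often be rewritten into a sargable form (table and column names here are hypothetical):

    ```sql
    -- A function wrapped around a column hides the column's histogram
    -- from the optimizer and blocks an index seek:
    SELECT o.OrderID
    FROM dbo.Orders AS o
    WHERE YEAR(o.OrderDate) = 2016;          -- row estimate is a guess

    -- Rewritten as a sargable range, the statistics can be used:
    SELECT o.OrderID
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= '20160101'
      AND o.OrderDate <  '20170101';
    ```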

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Chris is also offering good advice. It just wasn't there when I started typing.


  • Lowell (8/17/2016)


    Whenever the difference between estimated rows and actual rows is off by a lot (an order of magnitude or more), it usually means the statistics are out of date enough to adversely affect the query.

    I would start by updating stats for each table consumed in the query, and then look at the execution plan again. It should be different, and the query faster, in my experience.

    I've already gone the update stats route. I even did it with FULLSCAN. This is the post-updated stats version.


  • ChrisM@Work (8/17/2016)


    It sounds like one or more of the join columns are missing from whatever index is chosen, which would go some way to explaining the poor row count estimate and the choice of a non-covering index. The estimated number of key lookups is quite modest. Check your join/filter columns for the "many" table against the seek and residual predicates of the operator used to read the index (NOT the key lookup).

    I'd also create a new covering index tuned specifically for this query, tweaking as necessary, to establish a baseline for the query, i.e. the best that it can get. Avoid propagating the new index to prod if you can: check the new index against what you already have on the table to see if one of them can be changed to functionally match it without drastically changing existing functionality (if you're lucky, that could be an INCLUDE or a trailing-edge key column).

    This is a vendor database I'm dealing with. Fortunately I'm querying off the reporting copy (which means I can make changes if I absolutely have to and remember to include them in the restore jobs). Unfortunately, I've tried adding a non-clustered index. Didn't work. But I also put the key column up front, so I'll try switching it out for a trailing key column (since I'm not querying on it) and, if that doesn't work, an INCLUDE.


  • Gotta love the new CE! Is there an index that exists on the table that you think it should be using but isn't? If you can get rid of the key lookup by throwing the column(s) into an INCLUDE clause on an existing index that could dramatically help the optimizer.

  • How are rows selected from the big left-joined table - only the join, or is there also a filter in the WHERE clause?


    Low-hanging fruit picker and defender of the moggies

  • ChrisM@home (8/17/2016)


    How are rows selected from the big left-joined table - only the join, or is there also a filter in the WHERE clause?

    Only on the join. But it just occurred to me that I wrote this query several years back before any of us fully understood how the vendor databases were set up / worked. Which means it's possible that I've been joining on the wrong column or left a join column out and only noticed it when the record set got too big to play well with the new SQL.

    So that's the current rabbit I'm chasing. Think Adventure Works and tables Order and OrderDetails. The join I'm doing is Order to OrderDetailsHistory, which has a line for every change in the table and some columns that aren't in OrderDetails (hence the join). I don't see the OrderDetails identifier in OrderDetailsHistory (which would make my life soooo much easier), but I do see OrderID (the current join column) and OrderDetails.InvoiceCode in the history table.
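    Sketching that hypothesis with the AdventureWorks-style names above (every table, alias, and column name here is a guess for illustration):

    ```sql
    -- Current join: one Order row matches every history row for that order.
    SELECT o.OrderID, h.TheOneColumnINeed
    FROM dbo.[Order] AS o
    JOIN dbo.OrderDetails AS od
        ON od.OrderID = o.OrderID
    LEFT JOIN dbo.OrderDetailsHistory AS h
        ON h.OrderID = o.OrderID;            -- one-to-very-many

    -- Candidate tighter join, if InvoiceCode narrows the match:
    --     ON h.OrderID = o.OrderID
    --    AND h.InvoiceCode = od.InvoiceCode
    ```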

    I'm going to test that out real quick.


  • Josh B (8/17/2016)


    Gotta love the new CE! Is there an index that exists on the table that you think it should be using but isn't? If you can get rid of the key lookup by throwing the column(s) into an INCLUDE clause on an existing index that could dramatically help the optimizer.

    This is likely to be a change in the optimizer, not the CE. This is on 2012. The new CE came in with 2014.


  • Good point Grant....I missed that it was on 2012. :crazy:

  • If you could have posted a .sqlplan you would have done so in your first post. What about a picture of the properties sheet for the data reader operator of big left-joined table? You could smudge out proprietary stuff.

    Next question: is big left-joined table joined to any other table in the query?



  • ChrisM@home (8/17/2016)


    If you could have posted a .sqlplan you would have done so in your first post. What about a picture of the properties sheet for the data reader operator of big left-joined table? You could smudge out proprietary stuff.

    Is this what you meant? Or were you looking for the whole thing and the names smudged out below?

    Next question: is big left-joined table joined to any other table in the query?

    No. That's also killing me. I just need one freaking column from this monster and if it were in the details table, I would grab it there. But it's not. I've got a call into the vendor to figure out if I can compute this column on the fly (or find it elsewhere in a table I'm already pulling from).


  • Brandie Tarvin (8/17/2016)


    ChrisM@home (8/17/2016)


    If you could have posted a .sqlplan you would have done so in your first post. What about a picture of the properties sheet for the data reader operator of big left-joined table? You could smudge out proprietary stuff.

    Is this what you meant? Or were you looking for the whole thing and the names smudged out below?

    Next question: is big left-joined table joined to any other table in the query?

    No. That's also killing me. I just need one freaking column from this monster and if it were in the details table, I would grab it there. But it's not. I've got a call into the vendor to figure out if I can compute this column on the fly (or find it elsewhere in a table I'm already pulling from).

    Almost. That's a hover tooltip. Right-click for a more detailed view - then you'll understand what I meant by smudging. You might have to widen the properties column.


