Query Optimiser not using optimum Index

  • Running SQL 2012 SP2

    I've got this query that runs in 30 seconds and returns about 24,000 rows. The table variable returns about 145 rows (no performance issue there), and the TransactionTbl table has 14.2 million rows, a compound clustered primary key, and 6 non-clustered indexes, none of which meet the needs of the query.

    declare @CltID varchar(15) = '12345'
    declare @TranDate datetime = '2015-07-25'

    declare @Ballance table
    (Ledger_Code varchar(4),
     AssetID varchar(32),
     CurrencyID varchar(3))

    insert into @Ballance
    select * from GLSchemB.fn_GBCAA(@CltID, @TranDate)

    select G.Ledger_Code, G.AssetID, G.Amount, G.CurrencyID
    from GLSchemB.TransactionTbl G
    inner join @Ballance B
        on B.CurrencyID = G.CurrencyID and
           B.AssetID = G.AssetID and
           B.Ledger_Code = G.Ledger_Code
    where G.CltID = @CltID and
          G.Date <= @TranDate

    The actual execution plan shows SQL doing an index seek, then a nested loop join, and then fetching the remaining data from TransactionTbl using a Key Lookup.

    I designed a new index based on the query. When I force its usage via an index hint, the run time drops to sub-second, but without the hint the SQL optimiser won't use the new index, which looks like this:

    CREATE INDEX IX_Test on GLSchemB.TransactionTbl (CltID, Date) include (Ledger_Code, Amount, CurrencyID, AssetID)

    and I tried this:

    CREATE INDEX IX_Test on GLSchemB.TransactionTbl (CltID, Date, Ledger_Code, CurrencyID, AssetID) include (Amount)

    and even a full covering index!

    I did some testing, including disabling all indexes but the PK, and the optimiser tells me I've got a missing index and recommends I create one EXACTLY like the one I designed, but when I put mine back it doesn't use it.

    I thought this might be due to fragmentation and/or out-of-date stats, so I rebuilt the PK and my index, and the optimiser started using my index, doing an index seek and running sub-second. Thinking I had solved the problem, I rebuilt all the indexes, testing after each one, and my index was used, BUT as soon as I flushed the related query plan, the optimiser went back to using a less optimal index, with a seek and key lookup plan, and taking 30 seconds.
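    For illustration, a minimal sketch of that maintenance (the FULLSCAN option is an assumption, not necessarily what was actually used):

    -- Rebuilding an index also rebuilds its statistics with a full scan
    ALTER INDEX IX_Test ON GLSchemB.TransactionTbl REBUILD;

    -- Refresh the remaining statistics on the table
    UPDATE STATISTICS GLSchemB.TransactionTbl WITH FULLSCAN;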

    For now I've resorted to using OPTION (TABLE HINT(G, INDEX(IX_Test))) to force this, but it's a workaround only.
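    Applied to the query above, the hint looks like this (a sketch; same aliases as before):

    select G.Ledger_Code, G.AssetID, G.Amount, G.CurrencyID
    from GLSchemB.TransactionTbl G
    inner join @Ballance B
        on B.CurrencyID = G.CurrencyID and
           B.AssetID = G.AssetID and
           B.Ledger_Code = G.Ledger_Code
    where G.CltID = @CltID and
          G.Date <= @TranDate
    option (table hint(G, index(IX_Test)))  -- forces the seek on IX_Test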

    Does anyone have any ideas why the optimiser would select a less optimal query plan? Are there other things that could be causing this, or places I should be looking? I'm starting to think there is a bug in the optimiser that weights it too heavily in favour of the primary key.

    Leo
    Senior DBA Consultant
    SQL Services Ltd

    Nothing in life is ever so complicated that with a little work you can't make it more complicated!

  • Not based on what you have posted so far. It would help if you posted the DDL for the table(s) including indexes, and the execution plan (without the hint) as a *.sqlplan file.

  • I'll see what I can do, but I've had to change the schema information for this post as this is a client's database. I'll need to get their permission to publish anything that could be considered "identifiable". Otherwise I'll have to recreate the environment with new names etc.

    Leo
    Nothing in life is ever so complicated that with a little work it can't be made more complicated.

  • Without knowing anything more than what we have here, my first guess would be that the estimates are way off. When it picks the plan that results in a 30 second duration, are there any operators with a big discrepancy between actual and estimated rows (first place I'd check would be the seek that needs a key lookup)?

    It's likely estimating a single row for the table variable, which could throw many of the other estimates rather far off the mark.

    Then again, it might be something completely different. Based just on what we have to work with, though, that's the first thing I'd check.

    Cheers!

    EDIT: Missed a word when typing it up. Fixed my sloppy typing.

    I looked at the table variable row estimation as a possible issue; by default the table variable estimates 1 row, but it returns 145. I would expect this to affect the join decision, not the index selection, particularly since the first two columns of the index are exactly the search predicates. Add to this that all the join values and all required columns come from the covering index. The main table's row estimation was spot on.

    Note that the Nested Loop Join is really efficient; all the time is spent in the Index Seek and Key Lookup. When I force the index, it still uses a nested loop.

    I tried adding a PK to the table variable but it didn't change anything. From experience, my understanding is that this only starts to help once the number of rows gets above about 400 (really, the size of the table). Any recommendations on changing this to a #Table (something like the sketch below)? Would that improve the row estimation?
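    Something like this (a sketch only; column types copied from the table variable above):

    -- Unlike a table variable, a temp table gets column statistics,
    -- so the optimizer can estimate its cardinality
    create table #Ballance
    (Ledger_Code varchar(4),
     AssetID varchar(32),
     CurrencyID varchar(3));

    insert into #Ballance
    select * from GLSchemB.fn_GBCAA(@CltID, @TranDate);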

    Leo
    Nothing in life is ever so complicated that with a little work it can't be made more complicated.

  • It could well be down to bad estimates.

    According to this white paper, Statistics Used by the Query Optimizer in Microsoft SQL Server 2008, when you use local variables:

    If you use a local variable in a query predicate instead of a parameter or literal, the optimizer resorts to a reduced-quality estimate, or a guess for selectivity of the predicate. Use parameters or literals in the query instead of local variables, and the optimizer typically can pick a better query plan.

    I haven't found anything for 2012 which retracts that statement (and my minimal testing shows it to still be true).

    A simple way to check this is to use sp_executesql to run the query, passing in your variables as parameters, as sketched below.
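    Something like this (a sketch that just wraps the batch from the original post; the parameter values are the example values from that post):

    -- With parameters, the optimizer can sniff the actual values
    -- instead of guessing at the predicate selectivity
    exec sp_executesql
        N'declare @Ballance table
          (Ledger_Code varchar(4), AssetID varchar(32), CurrencyID varchar(3));
          insert into @Ballance
          select * from GLSchemB.fn_GBCAA(@CltID, @TranDate);

          select G.Ledger_Code, G.AssetID, G.Amount, G.CurrencyID
          from GLSchemB.TransactionTbl G
          inner join @Ballance B
              on B.CurrencyID = G.CurrencyID
             and B.AssetID = G.AssetID
             and B.Ledger_Code = G.Ledger_Code
          where G.CltID = @CltID
            and G.Date <= @TranDate;',
        N'@CltID varchar(15), @TranDate datetime',
        @CltID = '12345', @TranDate = '2015-07-25';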

    Remove the table variable and use the TVF fn_GBCAA directly in the query.

    Is that TVF multi-statement? If so, see if you can rewrite it as an inline TVF.
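    If it is (or becomes) inline, the direct use would look something like this (a sketch, assuming fn_GBCAA returns the same three columns as the table variable):

    -- An inline TVF is expanded into the main query, so the optimizer
    -- sees the underlying tables and their statistics
    select G.Ledger_Code, G.AssetID, G.Amount, G.CurrencyID
    from GLSchemB.TransactionTbl G
    inner join GLSchemB.fn_GBCAA(@CltID, @TranDate) B
        on B.CurrencyID = G.CurrencyID
       and B.AssetID = G.AssetID
       and B.Ledger_Code = G.Ledger_Code
    where G.CltID = @CltID
      and G.Date <= @TranDate;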

    MM




  • Forum Etiquette: How to post Reporting Services problems
  • Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden
  • How to Post Performance Problems - by Gail Shaw

  • Well, we can try to rule out the table variable estimate as a factor by trying the query using OPTION (RECOMPILE).

    By forcing a recompile when the query executes, the optimizer gets access to the table variable's actual cardinality (which it doesn't have when, as is typical for queries against table variables, the plan is generated at compile time, before the table variable is populated).

    If OPTION (RECOMPILE) yields an accurate estimate for the table variable and the poor plan is still chosen, then there's something else going on. At least we can try to rule out that piece; see the sketch below.
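    A minimal sketch (just the original query with the recompile hint appended):

    select G.Ledger_Code, G.AssetID, G.Amount, G.CurrencyID
    from GLSchemB.TransactionTbl G
    inner join @Ballance B
        on B.CurrencyID = G.CurrencyID and
           B.AssetID = G.AssetID and
           B.Ledger_Code = G.Ledger_Code
    where G.CltID = @CltID and
          G.Date <= @TranDate
    option (recompile);  -- plan built at execution time, when @Ballance is already populated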

    Cheers!

  • Quick thought: you aren't giving the optimizer much to go on here, using a table-valued function to populate a table variable (which, by the way, has no statistics for the optimizer) and then joining it to the table. Suggest you change the function to an iTVF (inline table-valued function) and CROSS APPLY it to the table instead of the join; see the sketch below.
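    Roughly like this (a sketch, assuming fn_GBCAA can be rewritten inline and keeps the same output columns):

    -- CROSS APPLY the inline function; its definition is expanded
    -- into the plan rather than materialised into a table variable
    select G.Ledger_Code, G.AssetID, G.Amount, G.CurrencyID
    from GLSchemB.TransactionTbl G
    cross apply GLSchemB.fn_GBCAA(@CltID, @TranDate) B
    where B.CurrencyID = G.CurrencyID
      and B.AssetID = G.AssetID
      and B.Ledger_Code = G.Ledger_Code
      and G.CltID = @CltID
      and G.Date <= @TranDate;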

    😎

  • Shooting in the dark 🙂

    - What happens when you use OPTION (RECOMPILE) for the table variable? Does it change the loop to a merge? (With SQL 2012 SP2 you should be able to use trace flag 2453 to get accurate estimates for table variables.)

    - Is the table variable the outer table in the join? If not, can you add OPTION (FORCE ORDER) after changing the physical order of the join?

    - What happens if you add an index to the table variable in the same order as the index you believe should be used on TransactionTbl? (I would expect the join to transform into a merge join.) Sketches of the first two suggestions follow below.
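    For example (a sketch only; QUERYTRACEON needs elevated permissions, and the table variable is deliberately written first so FORCE ORDER keeps it as the outer table):

    -- Trace flag 2453 (2012 SP2+): use the table variable's actual row count
    select G.Ledger_Code, G.AssetID, G.Amount, G.CurrencyID
    from @Ballance B
    inner join GLSchemB.TransactionTbl G
        on G.CurrencyID = B.CurrencyID
       and G.AssetID = B.AssetID
       and G.Ledger_Code = B.Ledger_Code
    where G.CltID = @CltID
      and G.Date <= @TranDate
    option (querytraceon 2453, force order);  -- join order as written: @Ballance outer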
