Covering index is too long, but needed ... what to do?

  • Hello,

    I have a stored procedure which is running slowly. Looking at the execution plan in the query analyzer I saw that the subtree cost is 3.5 and there is a bookmark lookup that takes up 65% of the cost.

    I added a covering index, and sure enough the subtree cost dropped to 1.75 (although logical reads increased, which is another puzzle).

    The problem is that the covering index I added includes many columns [4 varchar and a few ints] (I had no choice, since the query uses all of them and I need the index to cover them all), and they add up to ~1200 bytes. The index was allowed to be created, but a warning came up saying that if the length of the index key exceeds 900 bytes, inserts might fail. I believe they will fail if the combined index key values for a row exceed 900 bytes (since, as I understand it, an index entry cannot be split across two pages).
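    To give an idea of the shape (the table and column names below are made up purely for illustration, not my real schema), the index looks something like this:

        -- Hypothetical table: four wide varchars plus a couple of ints,
        -- adding up to roughly the ~1200 bytes described above.
        CREATE TABLE dbo.SampleOrders
        (
            OrderID   int IDENTITY(1,1) PRIMARY KEY,
            ColA      varchar(300) NOT NULL,
            ColB      varchar(300) NOT NULL,
            ColC      varchar(300) NOT NULL,
            ColD      varchar(300) NOT NULL,
            StatusID  int NOT NULL,
            Qty       int NOT NULL
        );

        -- All of the queried columns go into the index key, so the maximum
        -- possible key size is just over 1200 bytes. SQL Server still creates
        -- the index, but warns that for some combination of large values the
        -- insert/update operation will fail, because the 900-byte limit
        -- applies to the index key columns.
        CREATE NONCLUSTERED INDEX IX_SampleOrders_Covering
            ON dbo.SampleOrders (ColA, ColB, ColC, ColD, StatusID, Qty);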

    So, although I don't expect any data entered to exceed 900 bytes ... who knows - it seems unsafe to have this index.

    What do people do in this case? Just skip the index and live with the higher query cost?

    Please advise

    Thanks in advance!

  • I personally wouldn't risk adding the index, unless the documentation says the data can never exceed 900 bytes (which seems odd, given that the fields themselves are larger).

  • So... after you added the index, did you get index seeks or just scans? Also, never trust the execution plan by itself... did you turn statistics on or run Profiler against the query before and after? If not, you have no concrete proof of whether adding the index actually helped overall performance.
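    For example, something like this (the proc name and parameter here are just placeholders for whatever you're actually running):

        -- Measure actual IO and elapsed time for the call itself, before and
        -- after adding the index, instead of trusting the estimated plan cost.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        EXEC dbo.usp_MySlowProc @SomeParam = 1;   -- placeholder call

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;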

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks for all the suggestions.

    I was able to fix the problem without adding an index, just by rewriting the query.

    It was strange how a very simple change in the query made it much faster.

    Thanks again!

  • Glad to hear that the rewrite helped.

    Very much agree with Jeff in that looking at execution plans and query costs alone can be dangerous, especially when a change brings the cost down and the IO up. It's one thing when those data pages are in memory, but if you have to go to disk to get the data, that is always going to drastically impact performance. So, fewer reads with less cost is always nicer. πŸ™‚
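    For reference, with SET STATISTICS IO ON the Messages tab shows both counters, something along these lines (the numbers here are invented, just to show the difference between the two):

        -- Table 'BigTable'. Scan count 1, logical reads 110000, physical reads 4200, read-ahead reads 0, ...
        --
        -- logical reads  = pages touched, whether they came from cache or from disk
        -- physical reads = pages that actually had to be fetched from disk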

    David

    @SQLTentmaker

    β€œHe is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • sql_er (8/25/2008)


    Thanks for all the suggestions.

    I was able to fix the problem without adding an index, just by rewriting the query.

    It was strange how a very simple change in the query made it much faster.

    Thanks again!

    Heh... two-way street here. πŸ˜€ What was the "very simple change in the query" you made?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks for all the comments and suggestions.

    My approach was as follows:

    1. Run server-side tracing for 15 minutes on 5 almost identical dedicated SQL Servers (Subscribers in transactional replication). This resulted in ~35 executions of the stored procedure in question (a rough sketch of the trace definition follows these steps).

    2. Change the sp to its new (optimized) version.

    3. Run the same server-side trace for another 15 minutes.
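    The trace was roughly like this (the file path and procedure name are placeholders, and I've trimmed it to the essentials):

        -- Server-side trace capturing RPC:Completed (event 10) with TextData,
        -- Duration and Reads, so average duration and reads per execution of
        -- the proc can be compared before and after the change.
        DECLARE @traceid int;
        DECLARE @maxsize bigint;
        DECLARE @on bit;
        SET @maxsize = 50;
        SET @on = 1;

        EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\sp_before', @maxsize;

        EXEC sp_trace_setevent @traceid, 10, 1,  @on;   -- TextData
        EXEC sp_trace_setevent @traceid, 10, 13, @on;   -- Duration
        EXEC sp_trace_setevent @traceid, 10, 16, @on;   -- Reads

        EXEC sp_trace_setstatus @traceid, 1;            -- start the trace

        -- ...15 minutes later, stop and close it:
        -- EXEC sp_trace_setstatus @traceid, 0;
        -- EXEC sp_trace_setstatus @traceid, 2;

        -- Then load the .trc file and aggregate duration/reads per execution:
        -- SELECT AVG(Duration) AS AvgDuration, AVG(Reads) AS AvgReads
        -- FROM sys.fn_trace_gettable(N'C:\Traces\sp_before.trc', DEFAULT)
        -- WHERE TextData LIKE '%MyProcName%';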

    On 4 out of 5, there was a drastic improvement in both average duration and average reads. I think the 5th one had some IO issues at the time, so we ignored it.

    The change was basically the INNER JOIN order. I had 3 tables in the query, and there were 2 ways to join between them. When the query was originally written, I did not pay attention to it; I just chose one way at random. However, as I found out now, joining the other way allows the optimizer to use an INDEX on one of the biggest of the 3 tables, thereby totally changing the execution plan and bringing the sub-tree cost down from 3.5 to 0.1 and the logical reads from 110,000 to less than 10,000.
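    To show the idea with a made-up schema (these table and column names are not the real ones):

        -- Original form: the big table was joined on a column with no useful
        -- index, so the plan scanned it.
        SELECT b.Col1, m.Col2, s.Col3
        FROM dbo.BigTable    AS b
        JOIN dbo.MediumTable AS m ON m.UnindexedKey = b.UnindexedKey
        JOIN dbo.SmallTable  AS s ON s.OtherKey     = m.OtherKey;

        -- Rewritten form: equivalent for this schema, but the big table is now
        -- joined on a column that has an index, so the plan can do a seek.
        SELECT b.Col1, m.Col2, s.Col3
        FROM dbo.SmallTable  AS s
        JOIN dbo.MediumTable AS m ON m.OtherKey   = s.OtherKey
        JOIN dbo.BigTable    AS b ON b.IndexedKey = m.IndexedKey;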

    Thank you!

  • It was strange how a very simple change in the query made it much faster.

    YOU may think so, but the regulars here won't. πŸ™‚

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • sql_er (8/25/2008)


    I was able to fix the problem without adding an index, just by rewriting the query.

    It was strange how a very simple change in the query made it much faster.

    I agree with what Kevin said... the "regulars" won't think it strange at all. Heh... it's what they usually recommend!

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • sql_er (8/27/2008)


    Thanks for all the comments and suggestions.

    My approach was as follows:

    1. Run server-side tracing for 15 minutes on 5 almost identical dedicated SQL Servers (Subscribers in transactional replication). This resulted in ~35 executions of the stored procedure in question.

    2. Change the sp to its new (optimized) version.

    3. Run the same server-side trace for another 15 minutes.

    On 4 out of 5, there was a drastic improvement in both average duration and average reads. I think the 5th one had some IO issues at the time, so we ignored it.

    The change was basically the INNER JOIN order. I had 3 tables in the query, and there were 2 ways to join between them. When the query was originally written, I did not pay attention to it; I just chose one way at random. However, as I found out now, joining the other way allows the optimizer to use an INDEX on one of the biggest of the 3 tables, thereby totally changing the execution plan and bringing the sub-tree cost down from 3.5 to 0.1 and the logical reads from 110,000 to less than 10,000.

    Thank you!

    Very cool feedback... thanks a ton! πŸ™‚

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Also, see similar behaviour here:

    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=109678


    N 56Β°04'39.16"
    E 12Β°55'05.25"

  • Yes - Thanks Peso and others!

  • sql_er (8/25/2008)


    Thanks for all the suggestions.

    I was able to fix the problem without adding an index, just by rewriting the query.

    It was strange how a very simple change in the query made it much faster.

    Thanks again!

    It can make a tremendous difference even when the queries are logically identical.

    ---
    Timothy A Wiseman
    SQL Blog: http://timothyawiseman.wordpress.com/
