Why is my SELECT query slower when the table is indexed?

  • You haven't posted everything requested. Read the article I referred you to, you know, the first one in my signature block. You should read the second one as well.

  • Lynn Pettis (11/3/2011)


    Read the first two articles I reference below in my signature block.

    OK, I've read through this and can now see how I can send sample data utilizing INSERT .. SELECT code.

    However, two things: I cannot see a guideline relating to exactly how much sample data I should post, and secondly I didn't spot anything about posting execution plans (as you requested).

    Any more help or guidance is much appreciated. Apologies once again; I am very new to both this forum and SQL, so I am grateful for any gentle nudges in the right direction where needed 🙂
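
    For reference, the "INSERT .. SELECT" style of posting sample data looks something like the sketch below. This is only an illustration; the table name matches the TempSales table discussed later in the thread, but the column list is assumed rather than taken from the actual schema.

    CREATE TABLE dbo.TempSales
    (
        SaleID   INT           NOT NULL,
        SaleDate DATETIME      NOT NULL,
        Price    DECIMAL(10,2) NOT NULL
    );

    -- Sample rows posted as a single, readily runnable INSERT ... SELECT block
    INSERT INTO dbo.TempSales (SaleID, SaleDate, Price)
    SELECT 1, '20110115', 19.99 UNION ALL
    SELECT 2, '20110116', 19.99 UNION ALL
    SELECT 3, '20110117', 24.50;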

  • Execution plans are XML saved as a .SQLPLAN file. You can generate them, save them, and attach them as-is. All the data within will be transmitted. Actual execution plans are the preferred option because they include a few runtime metrics that the estimated plans do not (and, if you're experiencing recompiles, not an issue with your query I'm sure, the actual plan can differ from the estimated plan).
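
    As a T-SQL alternative to the SSMS toolbar buttons, both kinds of plan can also be captured with session settings. A minimal sketch (the query itself is only a placeholder, not from the original post):

    -- Actual plan: the query runs, and an XML showplan is returned alongside the results
    SET STATISTICS XML ON;
    SELECT COUNT(*) FROM dbo.TempSales WHERE Price > 100;   -- placeholder query
    SET STATISTICS XML OFF;
    GO

    -- Estimated plan: the query is NOT executed, only the plan is returned.
    -- SET SHOWPLAN_XML must be the only statement in its batch.
    SET SHOWPLAN_XML ON;
    GO
    SELECT COUNT(*) FROM dbo.TempSales WHERE Price > 100;   -- placeholder query
    GO
    SET SHOWPLAN_XML OFF;
    GO

    The XML returned by either setting can be saved with a .sqlplan extension and opened in Management Studio as a graphical plan.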

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Lynn Pettis (11/5/2011)


    You haven't posted everything requested. Read the article I referred you to, you know, the first one in my signature block. You should read the second one as well.

    Sample Data -- enough to properly represent the problem domain; you have to decide how much that is.

    How to post the execution plan, directly from the second article:

    Saving the execution plan <<-- Go to this section to see the figures.

    The method of saving and posting the plan differs depending on the version of SQL that's being used. Not the version of the client tools, but the version of the database server.

    SQL 2005 and above

    A very welcome feature added to SQL 2005 was the ability to save the execution plan of a query in an XML format, and to load that XML file into Management Studio to see the full graphical plan.

    To save the execution plan, click the “Include Actual Execution Plan” button in the toolbar and then run the query. Once the query has completed, right click on the plan and select “Save Execution Plan As...” then save the file as a .sqlplan file.

    If the query takes too long to run (or doesn’t complete), then the estimated execution plan can be posted instead. This is not ideal as there’s a fair bit of info missing from the estimated plan, so it should only be used when the actual plan is not an option. To get the estimated plan, click the toolbar button “Display Estimated Execution Plan” and then save the resulting plan in the same way.

    Once the plan has been saved, zip it and attach the zipped file to your post using the Attachments option found below the main posting text box.
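
    For the server version part, a one-line query run against the server itself avoids any confusion with the client tools; for example:

    -- Reports the version of the database server, not the client tools
    SELECT @@VERSION                         AS ServerVersion,
           SERVERPROPERTY('ProductVersion')  AS ProductVersion,
           SERVERPROPERTY('Edition')         AS Edition;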

  • Also, once you decide how much sample data to post, be sure to post the expected results based on the sample data.

  • Lynn Pettis (11/7/2011)


    Also, once you decide how much sample data to post, be sure to post the expected results based on the sample data.

    Thanks for your comprehensive previous post. I will check out the other articles you mentioned.

    As far as including enough sample data - well, I'm not sure. The source table has 88 million rows, which is obviously far too much. I'm not sure what limits there are on the size of a post on this forum (would 1,000 rows be considered too much, generally speaking? Or are 10,000-row posts acceptable?).

  • raotor (11/7/2011)


    Thanks for your comprehensive previous post. I will check out the other articles you mentioned.

    As far as including enough sample data - well, I'm not sure. The source table has 88 million rows, which is obviously far too much. I'm not sure what limits there are on the size of a post on this forum (would 1,000 rows be considered too much, generally speaking? Or are 10,000-row posts acceptable?).

    As a zipped attachment?? You have a 20 MB limit.

  • Ninja's_RGR'us (11/7/2011)


    As a zipped attachment?? You have a 20 MB limit.

    Thanks for that.

    However, I am still having difficulty determining how much sample data to provide. I've tested my query with a 10,000 and a 25,000 row sample set, and both come back in under a second. If it's the quantity of sample data that's needed to illustrate the performance difference between the indexed and unindexed versions, then I can't send enough to highlight my issue.

    On the other hand, if it's the execution plans I've saved against the original source table's 88 million rows that will help others here analyse why the unindexed table is faster to query, then perhaps the execution plans should be all that I attach?

    Any further advice or suggestions on this issue would be appreciated.

  • Then you need to find the tipping point between fast and slow.

    Then provide the actual execution plan for both versions (fast & slow).

    Once you have that we can rebuild sample data on our end. But we obviously need the table, keys & indexes script for that.
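
    By "table, keys & indexes script", something along these lines is meant. This is only a sketch: the key and index names are made up, the SaleID key column is assumed, and it relies on the TempSales table and Price column described elsewhere in the thread.

    -- Primary key (clustered here, since no other clustered index exists)
    ALTER TABLE dbo.TempSales
        ADD CONSTRAINT PK_TempSales PRIMARY KEY CLUSTERED (SaleID);

    -- Nonclustered index on the column used in the slow query's WHERE clause
    CREATE NONCLUSTERED INDEX IX_TempSales_Price
        ON dbo.TempSales (Price);

    -- To reset the table to its unindexed state between test runs
    ALTER TABLE dbo.TempSales DROP CONSTRAINT PK_TempSales;
    DROP INDEX IX_TempSales_Price ON dbo.TempSales;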

  • Ninja's_RGR'us (11/8/2011)


    Then you need to find the tipping point between fast and slow.

    Then provide the actual execution plan for both versions (fast & slow).

    Once you have that we can rebuild sample data on our end. But we obviously need the table, keys & indexes script for that.

    OK, here's everything (I think) sent as an attachment.

    Hopefully the extracted files are self-explanatory. I've included several scripts that create and load data into the table, others that create the indexes and primary key, and one that removes all previously created indexes and keys for rerun purposes.

    Three execution plans are included, all run against my 88-million-row version of the TempSales table: one for the unindexed version (the fastest), one for the table indexed on the Price column, and one for the table with a primary key (the slowest).

    There are comments and notes within the scripts to help.

    I've provided a 10,000 row data load for this table, but as explained in the 'SetupTableAndData' script you will most likely need to generate multiple copies of the provided rows to get to the point where the differences in query execution speed show.

    Well, I hope I've included all that's needed.
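
    The "generate multiple copies" step is usually done by inserting the table back into itself until it is large enough. A rough sketch, assuming the column names used earlier in the thread and that no primary key or unique index has been created yet (since the copies duplicate key values):

    -- Doubles the row count each time the batch runs; GO 10 repeats the batch
    -- ten times, turning 10,000 starting rows into roughly 10 million
    INSERT INTO dbo.TempSales (SaleID, SaleDate, Price)
    SELECT SaleID, SaleDate, Price
    FROM dbo.TempSales;
    GO 10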

  • It would be better if you could post the number of records in the table.

  • KrishDBA (11/9/2011)


    It would be better if you could post the number of records in the table.

    If I understand correctly that you would like me to attach ALL the rows in the table, then how can I do this?

    There are 88,604,672 rows in the TempSales table and trying to post them in the same way as with my 10,000 row sample wouldn't work due to size restrictions on attachments etc.

    Please advise 🙂
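
    If it is just the row count that was being asked for (rather than the rows themselves), it can be posted from a quick query; a sketch, assuming the TempSales name:

    -- Exact count (scans the table, so it can take a while on 88 million rows)
    SELECT COUNT_BIG(*) AS TotalRows
    FROM dbo.TempSales;

    -- Near-instant count from metadata
    SELECT SUM(p.rows) AS TotalRows
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'dbo.TempSales')
      AND p.index_id IN (0, 1);   -- heap or clustered index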

  • Take a look at my sample test rig earlier in this thread.

    ________________________________________________________________
    you can lead a user to data....but you cannot make them think
    and remember....every day is a school day

  • J Livingston SQL (11/10/2011)


    Take a look at my sample test rig earlier in this thread.

    I've copied your test script into SSMS and executed it. However, it appears to run extremely slowly. Thus far, the script has been running for 3.5 hours!

    I can see that something is happening, as the TempDB log file is slowly growing (around the 750 MB mark after 3.5 hours of execution).

    In addition, I'm not sure that generating data with random price values would sufficiently represent the data I have. The Price column has large numbers of duplicate values, and as such would probably affect the speed of any index using this column ... I think. Something about lack of selectivity, as I recall?

    Then again, I am new to SQL, so I could just be spouting nonsense! 🙂
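
    If duplicate-heavy Price values are the concern, the random values in a test rig can be swapped for something deliberately low in selectivity. A sketch only, assuming the table and column names used earlier in the thread:

    -- Generates 100,000 rows in which Price takes only 100 distinct values,
    -- mimicking a low-selectivity column
    ;WITH n AS
    (
        SELECT TOP (100000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
        FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
    )
    INSERT INTO dbo.TempSales (SaleID, SaleDate, Price)
    SELECT rn,
           DATEADD(DAY, CAST(rn % 365 AS INT), '20110101'),
           CAST(rn % 100 AS DECIMAL(10,2))   -- only 100 distinct prices
    FROM n;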

  • Your test environment is underpowered.

    Try reducing to 100,000 rows and see if that helps. Going much under that is not really going to be a good test.

    It runs in 10 seconds on my servers as well.

    3 GB of RAM and 1 CPU. Nothing massively powerful, as you can see.

