• mark.kristensen (2/28/2014)


    Hi Chris,

    Thanks for the leads, but even with recompile and changed index, the time is the same.

    In relation to your first post: I will most likely not get more than a few thousand rows as a result (maybe 10,000–20,000), as the query will never include more than 5 trade lanes, and there are around 250,000 different trade lanes in the table. Does that make it easier to optimize for that specific purpose?

    Furthermore, where can I see whether the CPU, the disk, or something else entirely is the bottleneck?

    Thanks for the feedback, Mark. Can you post the actual execution plan, please?

    Efficient filtering on those trade lanes is the first thing I'd pursue. You currently have far too many rows coming off the big table, and that cost stacks up as logical reads if the data is in cache, physical reads if it isn't.
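    To answer your bottleneck question: SET STATISTICS IO and SET STATISTICS TIME report, per statement, logical reads (pages served from cache), physical reads (pages pulled from disk), and CPU vs. elapsed time. A minimal sketch below — the table name dbo.Shipments, the column TradeLaneID, and the five IDs are all hypothetical stand-ins, since we haven't seen your schema; the point is filtering down to the handful of trade lanes as early as possible:

    ```sql
    -- Report per-statement reads and timings in the Messages tab.
    -- Logical reads = pages read from cache; physical reads = pages read from disk.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Hypothetical table and column names for illustration only.
    -- Restricting to the five trade lanes up front keeps the row count small;
    -- an index leading on TradeLaneID lets SQL Server seek instead of scan.
    SELECT s.TradeLaneID, COUNT(*) AS ShipmentCount
    FROM dbo.Shipments AS s
    WHERE s.TradeLaneID IN (101, 102, 103, 104, 105)
    GROUP BY s.TradeLaneID;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```

    If CPU time is close to elapsed time, the query is CPU-bound; if elapsed time is much larger and the physical read count is high, the disk is the bottleneck. With only 5 of roughly 250,000 trade lanes in play, a seek on a suitable index should touch a tiny fraction of the big table.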

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden