• GilaMonster - Monday, April 24, 2017 4:13 PM

    Hugo Kornelis - Monday, April 24, 2017 4:04 PM

    GilaMonster - Monday, April 24, 2017 3:40 PM

    Hugo Kornelis - Monday, April 24, 2017 9:33 AM

     And scanning a table with 1,000 integer columns takes more time, because the row size is bigger and hence fewer rows fit on a page.

    It's only going to read one row though, won't scan the whole table.
    Could be the index tree depth is a factor.

    Nope. Look at the plan. It's doing a Hash Match (Outer Join) between the two tables, which are each fully scanned. (With a bitmap pushed into the second scan to reduce the number of rows returned - they're still read, but not returned to the scan operator unless they are a match or a hash collision with a match).
    A single full scan over both tables is more efficient than scanning the product table and doing a seek for each of its one million rows.
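The bitmap mechanism described above can be illustrated outside of SQL Server. Below is a minimal Python sketch (not SQL Server internals - the table shapes, the `id` join key, and the 64-bit bitmap size are all illustrative assumptions): the build side of the hash join produces a small bitmap of key hashes that is "pushed" into the probe-side scan, so most non-matching rows are discarded cheaply before they ever reach the join, while collisions are resolved by the real hash-table lookup.

```python
def hash_join_with_bitmap(build_rows, probe_rows, nbits=64):
    """Hash join with a bitmap filter pushed into the probe-side scan."""
    # Build phase: hash table on the join key, plus a bitmap of key hashes.
    table = {}
    bitmap = 0
    for row in build_rows:
        key = row["id"]
        table.setdefault(key, []).append(row)
        bitmap |= 1 << (hash(key) % nbits)

    # Probe phase: every probe row is still *read*, but the bitmap cheaply
    # rejects rows whose key cannot be on the build side. Rows that pass
    # are either true matches or hash collisions; the hash-table lookup
    # resolves the collisions.
    joined = []
    for row in probe_rows:
        key = row["id"]
        if not (bitmap >> (hash(key) % nbits)) & 1:
            continue  # filtered by the bitmap; never reaches the join
        for match in table.get(key, []):
            joined.append((match, row))
    return joined
```

This is the same trade-off the plan makes: the probe-side scan still reads every row, but the join operator only sees the survivors of the bitmap filter.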

    Sorry, I read the article hours before replying, thought I remembered a loop join.

    In that case, it's absolutely the size of the rows that's causing the time increase, nothing to do with the EXISTS itself. A join would show the same behaviour.

    Why are you talking about scans when the last section of the article is about the compilation time? And, once again:

    Query Store and ostress

    Never give up. There could be one more useful test. I am going to compile each query 100 times and take a look at the statistics from Query Store. Again:

    • The queries are #1 and #3
    • The indexes are re-created
    • The number of columns in ProductOrder is 1,024
    • The OPTION(RECOMPILE) is used
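After the 100 compilations (e.g. driven by ostress), the compile-time and run-time statistics can be read back from the Query Store catalog views. A sketch, assuming Query Store is enabled on the test database - the `%ProductOrder%` filter is an assumption used to pick out the test queries, and durations are reported in microseconds:

```sql
-- Compare average compile vs. execution time per captured query.
SELECT qt.query_sql_text,
       q.count_compiles,
       q.avg_compile_duration,
       rs.count_executions,
       rs.avg_duration
FROM sys.query_store_query         AS q
JOIN sys.query_store_query_text    AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan          AS p  ON p.query_id       = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id       = p.plan_id
WHERE qt.query_sql_text LIKE '%ProductOrder%'
ORDER BY q.avg_compile_duration DESC;
```

Note that OPTION(RECOMPILE) is what makes every execution count as a fresh compile, so `avg_compile_duration` reflects the per-execution compilation cost under discussion.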