• Thank you for the response, Ed!

    Some of the answers to your questions are in my original post, but it's quite messy, so no fault of yours! Here are the bits that concerned me, in summary:

    The statistics were freshly updated WITH FULLSCAN just before these queries were run; if there's some sort of delay between statistics being updated and the new statistics being picked up by the optimizer, that would certainly explain it. Otherwise, I'm stumped!
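
    In case it matters, the update was done along these lines (the table name is from my original post; the rest is stock syntax):

    ```sql
    -- Rebuild every statistics object on the table by scanning all rows
    -- rather than sampling, so the histogram is as accurate as it gets.
    UPDATE STATISTICS dbo.AgentSessionConnectionTime WITH FULLSCAN;
    ```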

    The actual query doesn't use temp tables; that's a gaffe on my part. I used temp tables in the posted DDL so others would have something easy to run, but the real tables are permanent.

    This query isn't executed very frequently, at least not yet; it's part of a report that's currently in testing, and it will run about two dozen times a day once it's approved and moved out of testing. The report usually takes 45 seconds to a minute to run; after analyzing the code and noticing these scans, I did some testing, since there had been complaints about the speed.

    After adding in the bit with the variable (sketched below), the execution time of the whole procedure dropped immensely: it ran in 5 seconds or less, depending on how much data was present for the day being reported on.
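
    In case it helps anyone following along, the change was essentially this pattern. The table and the sessionid column are from my original post; connectiontime and @ReportDate are stand-ins for the real date column and parameter, so treat this as a sketch rather than the actual procedure code:

    ```sql
    -- Stand-in parameter for the day being reported on.
    DECLARE @ReportDate date = '20240101';
    DECLARE @MinSessionId int;

    -- Step 1: grab the lowest sessionid for the reporting day.
    SELECT @MinSessionId = MIN(sessionid)
    FROM dbo.AgentSessionConnectionTime
    WHERE connectiontime >= @ReportDate
      AND connectiontime <  DATEADD(DAY, 1, @ReportDate);

    -- Step 2: filter on the variable. The simple range predicate lets the
    -- optimizer seek on the sessionid index instead of scanning the table.
    SELECT sessionid, connectiontime
    FROM dbo.AgentSessionConnectionTime
    WHERE sessionid >= @MinSessionId;
    ```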

    I should add that I cleared the cache several times and re-ran the query cold until it was cached again, and the execution times matched my earlier findings each time.
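
    By "cleared the cache" I mean something like the following; note that these clear the cache for the whole instance, so test servers only:

    ```sql
    -- Flush all compiled plans from the plan cache.
    DBCC FREEPROCCACHE;

    -- Write dirty pages to disk, then drop clean pages from the buffer pool,
    -- so the next run reads everything from disk (a truly cold run).
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;
    ```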

    The really weird thing is that the seek does produce a much lower runtime for the report, even though acquiring the variable's value costs more reads than running the query without it. My guess is that pulling all 11 million rows in AgentSessionConnectionTime and discarding the ones that won't be used is more expensive than first finding the minimum sessionid for the query, but I don't know the internals at that level.
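
    In case anyone wants to reproduce the comparison, the usual way to see those read counts is:

    ```sql
    -- Report logical/physical reads and CPU/elapsed time per statement
    -- in the Messages tab, for comparing the two versions of the query.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- ... run the query with and without the variable here ...

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```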

    - 😀