• TheSQLGuru (8/8/2014)


    ScottPletcher (8/8/2014)


    TheSQLGuru (8/8/2014)


    ScottPletcher (8/8/2014)


    My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time.

    But, if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.
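
For concreteness, here is a minimal sketch of both suggestions, OPTION (RECOMPILE) and a forced HASH join. The tables (dbo.Orders, dbo.OrderLines) and the @CustomerID parameter are hypothetical stand-ins, not anything from this thread:

DECLARE @CustomerID int = 42;  -- hypothetical parameter

-- Option 1: OPTION (RECOMPILE) compiles a fresh plan on every execution,
-- so the optimizer always sees the current parameter value.
SELECT o.OrderID, ol.ProductID, ol.Quantity
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol
    ON ol.OrderID = o.OrderID
WHERE o.CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Option 2: force the HASH physical join so a cached plan can never
-- flip to nested loops when a high-rowcount value comes along.
SELECT o.OrderID, ol.ProductID, ol.Quantity
FROM dbo.Orders AS o
INNER HASH JOIN dbo.OrderLines AS ol
    ON ol.OrderID = o.OrderID
WHERE o.CustomerID = @CustomerID;

One caveat: a per-join hint like INNER HASH JOIN also enforces the written join order for the whole query, whereas OPTION (HASH JOIN) at the statement level hints every join without pinning the order.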

    I am curious why you say HASH force would be safer. I would say just the opposite...

    My thinking is:

    LOOP is extremely -- even prohibitively -- expensive on a very large number of rows.

    HASH might not be ideal for a smaller number of rows, but it shouldn't be awful either.
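
A rough back-of-envelope supports that reasoning (the page counts below are illustrative assumptions, not measurements), and the LOOP side of the comparison can be forced the same way:

-- Nested loops: roughly one index seek on the inner table per outer row.
--   100 outer rows       * ~3 page reads per seek =~ 300 logical reads  (cheap)
--   5,000,000 outer rows * ~3 page reads per seek =~ 15,000,000 reads  (prohibitive)
-- Hash match: one pass over each input plus the hash build, roughly
-- proportional to the total pages of both inputs regardless of the
-- parameter value: rarely awful, rarely optimal.

-- Forcing the LOOP plan shape for testing (same hypothetical tables):
DECLARE @CustomerID int = 42;
SELECT o.OrderID, ol.ProductID
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol
    ON ol.OrderID = o.OrderID
WHERE o.CustomerID = @CustomerID
OPTION (LOOP JOIN);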

    Expensive in lots of logical IOs, yes. But those can be exceedingly quick due to cached, iterative hits on the same page for multiple rows. More important, in my experience, are the page locks that will (hopefully) be taken, which can DRASTICALLY improve concurrency. Those blocking index/table scans are a killer from that perspective.
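
Both claims here (cheap cached logical reads, page-level locking) are measurable. A sketch of how one might verify them with SET STATISTICS IO and sys.dm_tran_locks, against the same hypothetical tables:

DECLARE @CustomerID int = 42;

-- Compare logical reads between the two plan shapes; the output also
-- separates logical reads (cache hits) from physical reads (disk).
SET STATISTICS IO ON;

SELECT o.OrderID, ol.ProductID
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
WHERE o.CustomerID = @CustomerID
OPTION (LOOP JOIN);

SELECT o.OrderID, ol.ProductID
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
WHERE o.CustomerID = @CustomerID
OPTION (HASH JOIN);

SET STATISTICS IO OFF;

-- From a second session, check what lock granularity the running query
-- actually holds (KEY/PAGE locks vs a blocking object-level lock):
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 55   -- replace with the test session's @@SPID
GROUP BY resource_type, request_mode;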

    I've just not had the experience of loops being "exceedingly quick" once the number of rows gets too large. Indeed, to me it seems that often the only reason SQL uses a loop is that it couldn't accurately estimate the cardinality beforehand.
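
That misestimate theory is checkable: run the statement with SET STATISTICS PROFILE ON and compare the EstimateRows and Rows columns on the join operator. A sketch, again with the hypothetical tables:

DECLARE @CustomerID int = 42;

SET STATISTICS PROFILE ON;

SELECT o.OrderID, ol.ProductID
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
WHERE o.CustomerID = @CustomerID;

SET STATISTICS PROFILE OFF;

-- A Nested Loops operator whose Rows value vastly exceeds its
-- EstimateRows value is exactly the pattern described above: the
-- optimizer picked a loop because it expected far fewer rows.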

SQL DBA, SQL Server MVP (07, 08, 09)
A socialist is someone who will give you the shirt off *someone else's* back.