• Well, there are a couple of points there.

    1) The optimizer is not perfect, and does make mistakes. Especially with many joins and complex predicates, the number of ways to access the data can be so large that the optimizer does not have time to evaluate enough plans to pick a good plan, much less the best one.

    2) If the statistics used by the optimizer are incorrect, then no amount of intelligence in the optimizer will guarantee a good plan, because it's starting with incorrect assumptions about the data.

    In this case, for example, the estimate for the rows returned by the following WHERE clause is wrong in both the fast and the slow plan:

    okeyname = 'LIBELART' and (okeydata = 'Eingangsrechnung' or okeydata = 'Eingangsgutschrift')

    By luck, in the plan that returns more quickly for you, the misestimated seek happens to sit on the inside of a nested loop join, where the bad estimate doesn't hurt the query as much as it does when that seek is the outer input.

    As far as SQL Server is concerned, there is very little difference between the plans, but that judgment is based on incorrect estimates. If the data were distributed the way SQL Server thinks it is, both plans would perform about equally well.
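
    If you want to see for yourself how far off that estimate is, you can run the query with plan statistics turned on. A minimal sketch; the table name dbo.KH is my assumption based on the index name, so substitute the real one:

        SET STATISTICS PROFILE ON;

        SELECT COUNT(*)
        FROM dbo.KH  -- assumed table name
        WHERE okeyname = 'LIBELART'
          AND (okeydata = 'Eingangsrechnung' OR okeydata = 'Eingangsgutschrift');

        SET STATISTICS PROFILE OFF;

        -- In the output, compare the Rows and EstimateRows columns for the
        -- operator that applies this predicate.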

    I'd check the statistics on the index that's being used (KH_ix_keydata_keyname) and see whether they need to be updated.
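
    Something like this will show when those statistics were last updated, and then refresh them with a full scan (again, dbo.KH is an assumed table name):

        SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
        FROM sys.stats
        WHERE object_id = OBJECT_ID('dbo.KH')
          AND name = 'KH_ix_keydata_keyname';

        -- Rebuild the statistics by reading every row, not just a sample
        UPDATE STATISTICS dbo.KH KH_ix_keydata_keyname WITH FULLSCAN;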

    This can also happen if the values in question are not used as the RANGE_HI_KEY of a histogram step. In that case SQL Server has to guess at the number of rows using AVG_RANGE_ROWS, the average number of rows per distinct value within that step, and that average can be quite different from the actual number of rows for a given value.
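
    You can inspect the histogram directly to see whether the values in question land on a step boundary (same assumed table name as above):

        -- Each row of the output is one histogram step; RANGE_HI_KEY is the
        -- step's upper boundary and AVG_RANGE_ROWS is the per-value average
        -- used for values that fall inside the step. Note that the histogram
        -- only covers the leading column of the index.
        DBCC SHOW_STATISTICS ('dbo.KH', KH_ix_keydata_keyname) WITH HISTOGRAM;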

    In that case, updating statistics won't really help, although there are some things that can help (filtered statistics for values often used in queries, for example).
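
    Here's what that can look like: a filtered statistic limited to the okeyname value these queries use gives the optimizer a dedicated histogram over okeydata for just those rows (the statistics name is made up, and dbo.KH is still an assumed table name):

        CREATE STATISTICS st_KH_okeydata_libelart
        ON dbo.KH (okeydata)
        WHERE okeyname = 'LIBELART';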

    At any rate, there's no getting around the fact that sometimes the optimizer needs a little help from us carbon-based life forms. 'Tis the nature of the beast 🙂

    Cheers!