What I would want is for the compiler to see somehow (don't ask me how) that the compiled value and the runtime value aren't the same, and that a different plan would have been generated had we compiled with the runtime value, and to then either generate another plan or fall back to something a bit more generic: maybe not the best plan, but not the worst either.
Not the compiler. Once a valid cached plan is found, the optimiser is bypassed: the cached plan goes through some simple checks (has the schema changed? have the statistics changed?) and is then executed.
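You can see this reuse happening yourself. As a sketch (these DMVs are real, but the procedure name filter is a placeholder you'd substitute): a rising usecounts against a stable plan_handle means the optimiser was bypassed and the cached plan was executed as-is, whatever the parameter values were.

```sql
-- Check how often a cached plan has been reused without recompilation.
SELECT  cp.usecounts,        -- number of times this plan was reused
        cp.objtype,          -- 'Proc' for stored procedures
        st.text              -- the statement the plan belongs to
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourProcName%';  -- hypothetical: filter to your procedure
```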
To do what you want there are two main options:
- Discard the cached plan whenever the parameter value differs from the compile-time one, and recompile. This would make cached plans virtually useless, as the optimiser would be recompiling for different parameter values even when the estimated row counts, and hence the resulting plans, are identical.
- Re-estimate the row counts for each different parameter value and recompile if there's a significant difference. What's significant? That varies per query. I've seen cases where a couple of rows' difference will change the plan, and other cases where the data volume can increase by a factor of 10 with the plan staying identical. So again the engine would have to estimate what will have an effect, and it would be wrong in some cases.
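Because the engine can't make that call cheaply, the existing workarounds put the recompile decision in the developer's hands instead. A sketch of the three usual query hints (table, column, and parameter names here are made up for illustration):

```sql
-- Option 1: recompile this statement on every execution, so each
-- parameter value gets its own plan (pays compile CPU on every call).
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Option 2: always optimise for a known representative value,
-- accepting a merely adequate plan for the outlier values.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID = 42));

-- Option 3: ignore the sniffed value and optimise for the statistical
-- average density, trading the best-case plan for a stable middling one.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR UNKNOWN);
```

Which one fits depends on whether the compile cost or the occasional bad plan hurts more, which is exactly the trade-off described above.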
Either way would add overhead to each and every query running on the server, to benefit what is usually a very small percentage of queries.
If there were a simple solution to this, it would already be in the product. I've chatted with someone from the optimiser team about this kind of problem on a couple of occasions; it's not simple, and there are all sorts of trade-offs to consider.
If I have four stored procedures, all doing the same sorts of things (returning filtered sets of data about one or more tables in a format suitable for the calling application), why might one suddenly start suffering from this problem? This is the crux of what I'm after: what is the tipping point?
Really hard to say; there are a bunch of possible causes.
Any chance you can get an execution plan from a poor-performing run and another from a good-performing run of the same procedure? The answers will be in the plans, and it's hard to guess without them.
Have you checked whether it's related to SET options rather than parameter values? The symptom of being slow when called by the app but fast in Management Studio sometimes suggests SET option differences, as they are usually set differently from SSMS than from ADO/ADO.NET/JDBC/OLE DB/ODBC.
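A quick way to check: compare the session SET options of the application's connection with those of your SSMS session. ARITHABORT is the usual culprit, since SSMS defaults it ON while most client libraries leave it OFF, and the SET options form part of the plan cache key, so the two connections get separate plans. A sketch (the session_id is a placeholder you'd look up in sys.dm_exec_sessions):

```sql
-- From your SSMS window: list this session's SET options.
DBCC USEROPTIONS;

-- Compare with the application's session
-- (replace 53 with the app connection's session_id):
SELECT arithabort, ansi_nulls, quoted_identifier, ansi_null_dflt_on
FROM sys.dm_exec_sessions
WHERE session_id = 53;
```

If the two differ, reproducing the slow behaviour in SSMS is as simple as issuing the app's SET options (e.g. SET ARITHABORT OFF) before running the procedure.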
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
We walk in the dark places no others will enter
We stand on the bridge and no one may pass