• This does happen on occasion with any SQL Server installation, but the main reason is usually statistics, and more specifically the sample level of the statistics. What is likely happening is that the sample rate is at the default, and on a large table (a few million rows or more) that default rate is not adequate for the amount of data you have. SQL Server therefore cannot accurately estimate how many rows the input variables are going to fetch, so it keeps using the old plan.

     Try something.

    1) Take this same procedure and duplicate the issue WITHOUT issuing a recompile in the proc.  Once you can reliably reproduce it, look at the execution plan and the estimated row count for both the fast query and the slow query.  Run the proc without the recompile in both cases and see how close the estimates are to the actual row counts (see the sketch after step 2).  What you will most likely find is that the slow version has the largest delta between the estimated and actual row counts, which is typically a direct sign that the statistics sample rate is not high enough for the tables involved.

    2) Now update the statistics on the tables involved (update the stats for each index on every table used, as well as the system-generated column statistics on the fields in those tables) and force a FULL SCAN sample rate.  Repeat the test from step 1 and you should see that the execution plan changes and the estimated row count is much more accurate (again, see the sketches below).
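
    A quick way to do the comparison in step 1 is to run the proc with SET STATISTICS PROFILE ON, which returns the plan operators with both the Rows (actual) and EstimateRows columns. The procedure and parameter names below are just placeholders for illustration:

        -- Reproduce both the "fast" and "slow" cases and compare
        -- the EstimateRows column against the Rows (actual) column.
        SET STATISTICS PROFILE ON;

        EXEC dbo.usp_MyProc @SomeParam = 12345;   -- parameter set that runs fast
        EXEC dbo.usp_MyProc @SomeParam = 67890;   -- parameter set that runs slow

        SET STATISTICS PROFILE OFF;
        -- A large gap between Rows and EstimateRows on the slow run
        -- points at inadequate statistics sampling.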
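
    For step 2, here is a sketch of the statistics update, again with placeholder table and index names. Running UPDATE STATISTICS without naming a specific statistic updates every index and column statistic on the table:

        -- Rebuild every statistic on the table with a 100% sample.
        UPDATE STATISTICS dbo.MyLargeTable WITH FULLSCAN;

        -- Or target one index/statistic at a time if the table is very large.
        UPDATE STATISTICS dbo.MyLargeTable IX_MyLargeTable_SomeCol WITH FULLSCAN;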

    DBCC SHOW_STATISTICS ( Table_Name_Here , Index_Name_Here ) -- run this and look at the Rows Sampled field; on a table with a few million rows you will see that it is not actually sampling very much. In my case I ran this on a "small" 2 million row table and the rows sampled was about 81k, which equates to roughly 4%.
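
    To make the sample rate easy to read, you can limit the output to the statistics header and work out the percentage yourself (table and index names below are placeholders):

        -- Rows vs. Rows Sampled is in the statistics header.
        DBCC SHOW_STATISTICS ('dbo.MyLargeTable', IX_MyLargeTable_SomeCol) WITH STAT_HEADER;

        -- e.g. Rows = 2,000,000 and Rows Sampled = 81,000 is roughly a 4% sample,
        -- which is the kind of ratio described above.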

    Thanks..Richard