Option Recompile is a Magic Turbo Button That Actually Works.


I didn’t say that – Guy Glantser did.

Guy Glantser is an Israeli SQL Server guru with a ton of great presentations on YouTube. I’ve had the privilege of hanging out with him in person a bunch of times over the years, and I’ll always get excited to do it again. He’s not just smart, but he’s friendly and funny as hell.

Matan Yungman, Guy Glantser, and I

In the most recent SQL Server Radio podcast, Guy explained what OPTION RECOMPILE does, and basically suggested that everyone should be comfortable using it outside of OLTP or high-CPU environments. In fact, if you’re working on data warehouses, Guy said, “Your default should be to add OPTION (RECOMPILE) on every query.”
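If you’ve never used the hint, here’s what it looks like in practice – a minimal sketch with made-up table and column names:

```sql
DECLARE @Year int = 2024;

-- Hypothetical warehouse query: FactSales, DimDate, and their columns are invented.
-- OPTION (RECOMPILE) makes SQL Server build a fresh plan for this execution using
-- the actual parameter values, instead of reusing (or caching) a plan.
SELECT d.CalendarYear, SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales AS f
INNER JOIN dbo.DimDate AS d ON d.DateKey = f.OrderDateKey
WHERE d.CalendarYear = @Year
GROUP BY d.CalendarYear
OPTION (RECOMPILE);
```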

Right here is the part where you’re expecting me to say, “But I disagree with Guy on this one.”

But I’m on Team Guy here.

For years, it’s been a meme that people constantly ask Office Hours questions like, “Why is this same query sometimes fast and sometimes slow?” And as we dig into their question, I inevitably have to talk about Erland Sommarskog’s excellent Slow in the App, Fast in SSMS, and explain the concept of parameter sniffing. It’s the year 2024, and most of you are sick and tired of hearing about it, and yet there are new people coming into our industry every year who have mind-blowing moments when they learn about the problems of plan reuse.
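For the newcomers, the classic repro looks something like this – the procedure, table, and row counts here are all made up for illustration:

```sql
-- Hypothetical parameter sniffing demo.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer @CustomerID int
AS
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @CustomerID;
GO

EXEC dbo.GetOrdersByCustomer @CustomerID = 42;   -- 5 rows: a plan with an index seek gets cached
EXEC dbo.GetOrdersByCustomer @CustomerID = 999;  -- 5 million rows: reuses that seek plan, and crawls
```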

In the past, I’ve even heard Guy propose a server-level switch that would simply turn off plan caching and compile every query that comes in.

I think that’s overkill, but I would really like a Cost Threshold for Recompile (CTfR) setting. In data warehouses, for example, I might set CTfR to something low like 50 query bucks. I asked Microsoft for it a while back, but no official response so far.
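To be crystal clear, that setting doesn’t exist today – but if Microsoft ever built it, I’d picture it working like the other sp_configure knobs:

```sql
-- Imaginary syntax: Cost Threshold for Recompile is a feature request, not a real setting.
-- The idea: any query whose estimated cost crosses the threshold behaves as if it had
-- OPTION (RECOMPILE) appended, while cheap OLTP queries keep their cached plans.
EXEC sp_configure 'cost threshold for recompile', 50;
RECONFIGURE;
```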


22 Comments

  • Dave Wentzel
    March 12, 2024 3:27 pm

    Totally agree. This is one of those things where inertia (we do things this way because we’ve always done things this way), lore, and FUD cause folks not to consider things that should be obvious. 20-25 years ago, the same could be said for folks who ran DBCC DBREINDEX EVERY NIGHT and never thought of fixing the CAUSE of the fragmentation instead of the symptom.

    I’d even go out on a limb and say there may be cases where _something like this_ is valuable in OLTP-style systems. Erland’s article has great examples of anti-patterns where developers try to create generic sprocs so there is affinity to the webpage UI (dynamic search conditions). While that makes sense to the .NET developer, it’s not great for performance. Erland has lots of great solutions, but this is a situation where the cost/benefit of refactoring a whole bunch of code might not make sense IF option recompile is an option, and you understand the risks and do the due-diligence analysis.
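    Something like Erland’s catch-all search pattern, sketched here with invented names – the hint lets the optimizer throw away the NULL branches on every execution:

    ```sql
    -- Hypothetical catch-all search proc; all names are made up for illustration.
    CREATE OR ALTER PROCEDURE dbo.SearchOrders
        @CustomerID int  = NULL,
        @OrderDate  date = NULL
    AS
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE (@CustomerID IS NULL OR CustomerID = @CustomerID)
      AND (@OrderDate  IS NULL OR OrderDate  = @OrderDate)
    OPTION (RECOMPILE); -- each call gets a plan for the parameters actually supplied
    ```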

    Unfortunately, too many times folks hear about OPTION RECOMPILE, do a Google search, and immediately come to the conclusion that it is BAD, without thinking through the tradeoffs.

    Now you just need to convince folks that WITH (NOLOCK) is fine. hahaha

    Reply
    • Dirty reads are almost never “fine.” Even when they are acceptable, the situations where they can actually improve anything are very specific, and using NOLOCK precludes RCSI, which would likely be not only more effective but also more consistent.

      If you’re looking for allocation order scans, I’m not so sure those are much of an improvement on SSDs, and it’s certainly not going to matter for columnstore in data warehouses, where those types of scans are most likely to happen.
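      For reference, RCSI is a single database-level switch – the database name below is a placeholder:

      ```sql
      -- Readers see the last committed version of each row instead of taking shared
      -- locks, so reports stop blocking writers without resorting to dirty reads.
      -- (It does add row-versioning overhead in tempdb.)
      ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
      ```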

      Reply
      • If you say so, Keith.

        I’ll quote your first sentence: “almost never fine.” Kinda begs the question, doesn’t it, Keith? When might they be acceptable? Well, there are actually plenty of use cases. We don’t always have the luxury of RCSI, and we don’t always even care if we read some dirty data. Sometimes a fuzzy answer is good enough. Sometimes understanding the direction of the move is more important than understanding the magnitude. Sometimes simply understanding the tradeoffs and recognizing the risks is _good enough_. There are plenty of bloggers who have written about good use cases they’ve found for NOLOCK. The problem is they’ve been harangued, told they are stupid, and pushed to the bottom of the Google search results. But hey, if it solved a problem for them without introducing unforeseen risk… is it a worthy solution? I say YES.

        You actually proved my point about being overly dogmatic. Maybe… just _maybe_… there are acceptable uses for things that appear stupid and dangerous upon first examination. The problem is folks equate “almost never fine” with “should NEVER EVER be used.”

        Reply
        • It doesn’t beg the question.

          Setting allocation order scans aside, the scenario where read uncommitted helps is a transactionally active database that can also tolerate the blocking that the report generates – and it will block other queries if the database is transactionally active. That is an extremely narrow circumstance.

          If the database is not transactionally active, read uncommitted isolation does absolutely nothing; it performs exactly the same as read committed.

          Reply
        • The difference is that OPTION RECOMPILE has plenty of legitimate use cases, while dirty reads have very few. Also, OPTION RECOMPILE won’t give wrong results, but dirty reads will. NOLOCK is so often misused that the best advice is to just not use it.

          Reply
          • Agreed. And unless the query writer knows exactly what read uncommitted does, why it works, the other conditions of the database that make it work in some instances, the other things that will happen in SQL Server because of it, and the specific data anomalies that can be expected – and unless every single one of those issues is tolerable or otherwise, on net, a better outcome – it is a misuse.

            Getting a report to “go faster/work” when “maybe some of the data is bad but it is tolerable” does not qualify.

  • Eitan Blumin
    March 12, 2024 4:09 pm

    I have to say… I feel kind of left out, Brent!

    Reply
  • Holy moly, I feel like I may have to unlearn many years of “Stop using Option Recompile as a Band-Aid to fix a bullet hole” (if Taylor Swift were a DBA).

    I mean I get it, and it makes sense. But by golly that’s a lot of retraining of my instinct to dive in and try to fix the underlying issue.

    Thanks for sharing!

    Reply
  • I’m with the default “WITH RECOMPILE” unless you’re in a strict, high-volume OLTP environment.

    As we have some CPU and time to spare, the impact of the recompiles is typically much less than the impact of a bad query plan, in terms of CPU, I/O, or rewrite effort.

    And Query Store can handle the non-recompiled queries?

    Reply
  • I get what you’re saying. “Don’t recompile” is a bit like Cost Threshold for Parallelism = 5. Yes, it made sense (or in the case of CTFP, might have) on the Pentium Classic – but look at the cores you’re running now! It’s been a long time since I’ve seen a CPU-constrained SQL Server box (well, I lie – last week, on a large data migration using multiple streams of expensive pass-through queries – but it’s rare).
    As for the idea of addressing sniffing with a “Recompile At Cost” setting, though: er, would that not rely on the estimated plan in some way? In most cases where I’ve encountered the issue being a problem, the crux is a cheap-looking plan based on a handful of rows getting more expensive. (Although it is something I would be happy to try as a last resort with poor third-party software where I have no control over the code.)

    Reply
  • Please… don’t let Microsoft create any more “advanced” settings. I don’t trust them after the 18 to 30% slowness they built into SQL Server 2019 and continued with in 2022.

    The new ad should say, “SQL Server 2022… it just runs slower.”

    Reply
  • Please tell me that SQL Server will be adding an option to turn off execution plan caching.
    It would save me from a redesign of a very functional ETL process.
    Currently, the light at the end of the tunnel has been identified as a locomotive.

    Reply
    • I am slightly curious how plan caching could require a whole redesign of an ETL process. Have you tried trace flags 4136 or 4139?
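      For what it’s worth, 4136 disables parameter sniffing instance-wide – a quick sketch of trying it:

      ```sql
      -- Enable trace flag 4136 (disables parameter sniffing) globally until the next restart.
      DBCC TRACEON (4136, -1);
      -- To persist it across restarts, add -T4136 as a SQL Server startup parameter instead.
      ```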

      Reply
      • Keith, the ETL process is built on the idea that each batch has its own table. Of course, the BatchID is the table name. So every query in the ETL process is unique, because it is run once against a table with a unique BatchID. Parameterization = Forced seemed to be a good first step. I will look into trace flag 4136. Thanks.

        Reply
    • I could tell you that, but I’d be lying. 😉

      You can use Resource Governor on Enterprise Edition, put the ETL process in its own resource pool, and then run DBCC FREEPROCCACHE for that pool if that helps.
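      A rough sketch with hypothetical pool and group names – the classifier function that routes the ETL login into the group is omitted:

      ```sql
      -- Enterprise Edition only: give the ETL sessions their own Resource Governor pool.
      CREATE RESOURCE POOL ETLPool;
      CREATE WORKLOAD GROUP ETLGroup USING ETLPool;
      ALTER RESOURCE GOVERNOR RECONFIGURE;
      GO
      -- After each batch, clear only that pool's cached plans, not the whole cache:
      DBCC FREEPROCCACHE ('ETLPool');
      ```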

      Reply
  • Guy Glantser
    March 14, 2024 8:05 am

    Thanks, Brent!

    I love your idea about Cost Threshold for Recompile.

    Reply
  • All of a sudden my idea to have a job that runs every 5-10 minutes and just dumps the cache doesn’t seem so crazy… 😉

    Reply
  • I have been creating and maintaining data warehouses and data warehouse ETL jobs in DTS and SSIS. I have only rarely used WITH RECOMPILE in a query. My experience does not bear out the primary point of this post.

    Reply
  • Mark Freeman
    April 18, 2024 5:49 pm

    The databases I work with (used by OLTP applications) are most often much more limited by CPU than by IO (even in Azure SQL Database, where the IO is slower than for SQL Server in a well-configured VM), so the idea of adding lots of recompiles is intimidating. We’ve tried that in some cases, been overwhelmed by the recompile cost, and reverted the change.

    However, where we have statements with a relatively low execution count per day and high CPU cost per execution (where the compile cost may not be a significant percentage), OPTION(RECOMPILE) may be of benefit. Like any tool, you have to know when it is and is not appropriate for the task.
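    A rough way to hunt for those candidates in the plan cache – the thresholds here are made up, so adjust to taste:

    ```sql
    -- Find cached statements with few executions but high CPU per execution:
    -- the spots where OPTION (RECOMPILE)'s compile cost would likely be noise.
    SELECT TOP (20)
        qs.execution_count,
        qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
        SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
            (CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE qs.execution_count < 100           -- "relatively low execution count"
    ORDER BY avg_cpu_microseconds DESC;      -- "high CPU cost per execution"
    ```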

    Reply
