Research Paper Week: Plan Stitch: Harnessing the Best of Many Plans

Those of you who follow me on Instagram, Twitter, or Facebook know I’m taking most of August off for vacation. Erika and I spent the last couple of weeks in Telluride, had a brief stop in Sacramento for SQL Saturday, and this week we’re touring wineries in Napa and Sonoma.

As part of recharging, I caught up on a bunch of research papers that I’d set aside to read, and I realized I should share my favorites with you. For each paper, I’m going to try to boil down why I found it interesting, and why you might find it interesting too.

Let’s start with a paper that finds a new solution to a really classic problem.

As long as I’ve been playing with databases, execution plans have had one giant pain point: parameter sniffing. I constantly run into situations where the slightest change in a parameter value completely reworks an entire execution plan into something wildly different.

In the Microsoft Research paper Plan Stitch: Harnessing the Best of Many Plans by Bailu Ding, Sudipto Das, Wentao Wu, Surajit Chaudhuri, and Vivek Narasayya (2018), the authors propose something really cool: watching an execution plan change over time, splitting it into parts, and then combining parts to form a SuperPlan™. (They don’t call it a SuperPlan™ – it just seems appropriate, right?)
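To make the idea concrete, here’s a toy sketch of the stitching concept in Python. This is absolutely not the paper’s algorithm – the real thing is far more careful about which operator subtrees can legally be swapped for one another – it just shows the flavor: once several plans for the same query have actually executed, keep the cheapest observed sub-plan for each chunk of logical work and glue those together. The classes, names, and cost numbers below are all made up for illustration.

# Toy sketch only: NOT the paper's actual algorithm, just the core intuition.
from dataclasses import dataclass

@dataclass(frozen=True)
class SubPlan:
    expression: str       # the logical work this subtree does, e.g. "Join(Orders, Customers)"
    operator: str         # the physical operator that did it, e.g. "Hash Match"
    observed_cost: float  # the execution cost actually measured for this subtree

def stitch(executed_plans):
    """Keep the cheapest observed sub-plan for each piece of logical work."""
    best = {}
    for plan in executed_plans:
        for sub in plan:
            current = best.get(sub.expression)
            if current is None or sub.observed_cost < current.observed_cost:
                best[sub.expression] = sub
    return list(best.values())

# Two plans for the same query, each good at a different part of the work:
plan_a = [SubPlan("Scan(Orders)", "Clustered Index Scan", 120.0),
          SubPlan("Join(Orders, Customers)", "Hash Match", 40.0)]
plan_b = [SubPlan("Scan(Orders)", "Index Seek", 15.0),
          SubPlan("Join(Orders, Customers)", "Nested Loops", 300.0)]

for sub in stitch([plan_a, plan_b]):
    print(f"{sub.expression}: {sub.operator} ({sub.observed_cost})")
# The stitched result takes the Index Seek from plan B and the Hash Match from plan A.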

They even went so far as to actually implement Plan Stitch in SQL Server. I hope you’re sitting down:

We implement Plan Stitch in C# as a component on top of Microsoft SQL Server without requiring any changes to the engine. SQL Server stores previously-executed plans and their execution statistics, including plan structures, properties of operators (e.g., sort columns), and operator-level execution costs [37, 41]. Plan Stitch passively monitors query executions and triggers the search for alternative plans when execution statistics for multiple plans for the same query are available. Our implementation follows the logical architecture in Figure 2, where we force the stitched plan using the plan hinting API supported in SQL Server [36, 46].

Footnote 37 refers to Query Store, and footnote 36 refers to plan guides.
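If you want to go poke at the raw material Plan Stitch feeds on, Query Store is already hoarding it: multiple executed plans per query, with per-plan runtime stats. Here’s a rough Python sketch (using pyodbc; the connection string and database name are placeholders you’d swap out) that lists queries with more than one plan on record, along with each plan’s average duration. This is just the monitoring half of the story; the stitching and the plan forcing are the clever parts the paper adds on top.

# Rough sketch: list queries with more than one plan in Query Store, plus each
# plan's average duration. Assumes pyodbc and a SQL Server ODBC driver are
# installed; the connection string and database name are placeholders.
import pyodbc

MULTI_PLAN_QUERIES = """
WITH plan_stats AS (
    SELECT q.query_id,
           p.plan_id,
           AVG(rs.avg_duration)     AS avg_duration_us,  -- rough: averages the per-interval averages
           SUM(rs.count_executions) AS executions
    FROM sys.query_store_query AS q
    JOIN sys.query_store_plan AS p
        ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs
        ON rs.plan_id = p.plan_id
    GROUP BY q.query_id, p.plan_id
)
SELECT query_id, plan_id, avg_duration_us, executions
FROM plan_stats
WHERE query_id IN (SELECT query_id FROM plan_stats
                   GROUP BY query_id
                   HAVING COUNT(*) > 1)
ORDER BY query_id, avg_duration_us;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=YourDatabase;Trusted_Connection=yes;"
)
for row in conn.execute(MULTI_PLAN_QUERIES):
    print(row.query_id, row.plan_id, round(row.avg_duration_us, 1), row.executions)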

This whole approach is completely wonderful and I am mesmerized. When “Automatic Tuning” came out, I was infuriated at the ridiculous use of that term to describe what the feature actually did (simply reverting plan regressions). This, THIS, ladies and gentlemen, THIS is automatic tuning. This is what that phrase should be applied to. So the only question becomes: when this feature eventually ships, what is marketing going to call it? Hekatuning?

If you liked that white paper, you’re going to love these:

And for the record, yes, I consider writing blog posts like this to still be vacation, hahaha. I banged out all 5 of this week’s posts while sitting poolside with drinks. (That last word is definitely plural.)

(Instagram photo from @brento: “Afternoon reading by the pool”)
