TechEd - Execution Plans


Randy Dyess of Solid Quality Mentors presented a session on execution plans, and since I'd just gotten a copy of Grant Fritchey's book of the same name, I wanted to check it out.

There are three formats for execution plans: graphical, text, and XML.

Graphical plans look good and are easy to start with, but they don't show enough information, and once you're in a non-trivial situation you have to scroll around, which makes it hard to hold the whole plan in your mind.

Text plans are what you need to learn to read. DBAs deal with data in grid format all the time, so this presentation makes sense for many of us; spend some time working on it. Once you get used to reading text plans, this will likely become your preferred method.

XML - Also hard to read, and you may need to know XQuery to get the most out of it, but it might be worth learning if you want to know more about XML. Randy doesn't like this format, but I think it's worth reading about XML a bit and learning how it works, since we'll have this data around for a long time. However, reading raw XML isn't easy, as things like whitespace and other characters are encoded. It isn't really "human readable" and needs to be consumed by something; XQuery should help here.
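For reference, an estimated XML plan can be captured right in a query window; the table and column names below are just an illustration:

```sql
-- Return the estimated plan as XML instead of executing the query
SET SHOWPLAN_XML ON;
GO

SELECT OrderID, OrderDate
FROM dbo.Orders              -- hypothetical table
WHERE OrderDate >= '2008-01-01';
GO

SET SHOWPLAN_XML OFF;
GO
```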

All formats can show either estimated or actual plans. Sometimes you can't get an actual plan: the query would take hours to execute, or you can't run it on the production system but still need a good idea of what's happening. Estimated plans help here, but they are not guaranteed to match the actual plan.
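To get an estimated text plan without running the query, you can use SET SHOWPLAN_TEXT; the tables here are made up for illustration:

```sql
-- SHOWPLAN_TEXT returns the estimated plan; the statements are not executed
SET SHOWPLAN_TEXT ON;
GO

SELECT c.CustomerID, o.OrderID
FROM dbo.Customers AS c      -- hypothetical tables
JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;
GO

SET SHOWPLAN_TEXT OFF;
GO
```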

One problem with estimated plans is temporary tables. Since the query isn't executed, the temp table never exists, so you might need to comment a statement out to get the plan at all, and the result may not be a good representation of the final plan. If you create a permanent work table and then remove it, the same thing happens.

Text Plans

In a text plan, follow the Parent column. It contains node numbers, and the highest number happens first. The recommendation is to start tuning at the highest numbers, since a change there will flow down to the lower ones. Randy observed that SQL Server seems to do the hardest, most resource-intensive work first.
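One way to see those columns is to capture an actual text plan with SET STATISTICS PROFILE (the query is illustrative):

```sql
-- Executes the query and appends plan rows with NodeId and Parent columns,
-- which you can use to walk the plan tree as described above
SET STATISTICS PROFILE ON;
GO

SELECT TOP (10) OrderID, OrderDate
FROM dbo.Orders              -- hypothetical table
ORDER BY OrderDate DESC;
GO

SET STATISTICS PROFILE OFF;
GO
```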



Statistics IO gives you I/O statistics for the query; in 2008 the output has been enhanced to show more information. This is read and scan information you can use to decide where you might tune the query. A big goal here is to reduce physical reads, but you need to run the query a few times and be sure the physical reads aren't just the first ones populating the cache. Once the data is in the cache, the query might perform much better.

Because of this, Randy looks to reduce logical reads. If those can be reduced, then even when the data is not in cache, physical reads come down as well.
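Turning this on is a one-liner; the commented output below sketches the typical shape of the messages rather than real numbers:

```sql
SET STATISTICS IO ON;
GO

SELECT OrderID
FROM dbo.Orders              -- hypothetical table
WHERE OrderDate >= '2008-01-01';
GO

SET STATISTICS IO OFF;
-- The Messages tab then reports per-table counts along the lines of:
-- Table 'Orders'. Scan count 1, logical reads 120, physical reads 3,
-- read-ahead reads 0, ...
```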

Execution plans can also be pulled from memory. You can grab the plan from the cache, which is actually what is being used by SQL Server to execute the query. DMVs are available to help here.
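A sketch of pulling cached plans via the 2005+ DMVs; ordering by accumulated logical reads surfaces the I/O-heavy statements:

```sql
-- Top cached statements by accumulated logical reads, with text and plan
SELECT TOP (10)
    qs.execution_count,
    qs.total_logical_reads,
    st.text       AS query_text,
    qp.query_plan               -- XML plan straight from the cache
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_logical_reads DESC;
```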

How do you tune?

Randy's Goals

  • Optimize duration / CPU usage
  • Optimize IO usage

Duration here means wall-clock time; tracking it helps determine how busy the system is. It's good to know an average execution time for your procedures as a baseline, and to know that it will (potentially) grow over time as you add data.

CPU usage - Track this since it's a limited resource. Knowing the top CPU-consuming queries lets you focus your efforts on the queries that run often and impact the CPU most.

Large, complex queries usually come in large batches of code, each statement of which can be tuned as its own mini execution plan.

Top Slowdowns

  • Table and clustered index scans
  • Index scans v index seeks
  • Bookmark lookups
  • Join methods
  • Sorts
  • Compute Scalar

Table and clustered index scans are the leading cause of poorly performing queries. Not always - sometimes you want a scan - but it is often bad. A clustered index scan is really a table scan, though it can perform better since there aren't as many random I/Os as on a heap. Randy has this as the number one thing to tune when he sees it; removing it often helps. Partial scans are sometimes fine, such as scanning a range of data that is all grouped together.

Index scans are better than table scans, but still costly. A scan usually means a lot of rows need to be read for the query, so investigate whether you can turn it into a seek. Clustered index seeks are the best thing to have, since they hit the data directly, but only for specific rows.

Bookmark Lookups - A non-clustered index is used, but then the clustered index key must be used to go back to the clustered index and get the data. In 2005, INCLUDE columns can help remove these. In 2005/2008 you don't see a Bookmark Lookup operator; you see RID (row identifier) and then a Lookup to get the data. These are hard to resolve: you might include 3-5 columns, but not many more. Review the columns used by your indexes to see if you can cover more queries, or check whether you really need to return that column and perhaps remove it. Don't SELECT *, which can cause this. You might also try creating additional indexes to be used for joins or, if appropriate for that table, change the NCI to a CI. That shouldn't be done lightly, especially if you've spent time considering what the best clustered index should be.
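As a sketch, an INCLUDE index that covers a query might look like this (table and column names are hypothetical):

```sql
-- OrderDate is the seek key; CustomerID and TotalDue are carried at the
-- leaf level only, so the lookup back to the clustered index goes away
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, TotalDue);
```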

Be sure that your NCI doesn't duplicate the CI keys. They are already included.

Join Methods - SQL Server uses 3 types. Nested loop, Merge, Hash.

Nested loops are for smaller inputs and have low memory usage. They often occur when you join a small table to a large one; you don't want one between two large tables. Nested loops may also resort your data.

If you have two large tables, a merge join is better. It's for larger inputs, and is in the middle in terms of memory used.

The hash join is for large tables and uses lots of memory. Each row from the first table is run through a hash algorithm and then stored. The hash buckets are stored in MemToLeave, which is only 256MB on 32-bit machines; above that, the hash table spills to tempdb, which is much slower.
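If you want to compare the three join types side by side, join hints will force a physical join type; this is for experimentation only, since normally the optimizer should choose (tables here are hypothetical):

```sql
-- Force a merge join so you can inspect its plan;
-- swap MERGE for LOOP or HASH to compare the other operators
SELECT c.CustomerID, o.OrderID
FROM dbo.Customers AS c
INNER MERGE JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;
```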

Sorts are expensive, so don't add them unnecessarily. DISTINCT, UNION, ORDER BY, and GROUP BY aren't always needed - don't add them out of habit. UNION ALL does not produce a sort because duplicates aren't removed. If possible, let indexes presort things for you.
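The UNION vs. UNION ALL difference shows up directly in the plan (hypothetical tables):

```sql
-- UNION removes duplicates, which typically adds a sort or hash to the plan
SELECT City FROM dbo.Customers
UNION
SELECT City FROM dbo.Suppliers;

-- UNION ALL keeps duplicates, so no distinct/sort step is needed
SELECT City FROM dbo.Customers
UNION ALL
SELECT City FROM dbo.Suppliers;
```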

Compute Scalar is a function being evaluated by the optimizer. Review for implicit functions, and watch out if you have one in a WHERE clause.
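A classic example of a function in the WHERE clause: wrapping the column blocks an index seek, while a range predicate on the bare column allows one (names are illustrative):

```sql
-- YEAR() on the column is evaluated per row and prevents an index seek
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2008;

-- Same logic as a range on the bare column; a seek on OrderDate is possible
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '2008-01-01'
  AND OrderDate <  '2009-01-01';
```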



