SQLServerCentral Article

An Orders of Magnitude Problem (T-SQL Tuning)


A few weeks ago, I was asked to attend a meeting to determine the next steps for a particularly nasty database problem. Every few months, the Finance department would process adjustments in their vendor system, which updated tax information. The process would take 40 minutes to complete. At one point last year, it wouldn't complete at all. After several hours of execution, the TempDB volume completely ran out of space, so the process was terminated. An extra terabyte of space was added for TempDB and the process was restarted. TempDB ran out of space again and the process had to be terminated. After working with Microsoft escalation engineers over a several-month period, the decision was made to move the SQL instance to a newer, faster machine, since nothing else seemed to solve the problem. The troublesome query ran faster and worked adequately for a few months. Eventually, the process started running indefinitely again. After 28 hours, it was terminated.

Once I got involved, I immediately wanted to know what was going on with the server while the query was executing. Were the disks responding slowly? Was the CPU at 100%? What were the top few wait types accumulating during the query execution? After thinking about it on the walk back to my desk, though, I guessed none of that would matter and that this was going to be an “orders of magnitude” problem caused by a bad query plan. I suspected that for two reasons. First, if the optimizer generated a sub-optimal plan, it could be doing an enormous amount of IO, which can happen when a seek turns into a scan or when an inefficient join order is selected. Second, the query's performance had changed dramatically from one run to the next. It wasn’t 5% or 10% slower, which might happen if there were more updates to process than last time or if the server were under more pressure from additional database workloads; it had gone from 40 minutes to never finishing.
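Had I started with the server-level triage, a snapshot like this would have answered the wait-type question. The DMV `sys.dm_os_wait_stats` and its columns are real; the exclusion list is just an illustrative sample of common benign waits, not exhaustive:

```sql
-- Top accumulated waits since the last service restart (or stats clear).
-- Filter out a few benign "idle" wait types that always dominate.
SELECT TOP (5)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_sec,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;
```

Note these numbers are cumulative for the whole instance, so to attribute waits to one query you would sample before and after, or use session-level DMVs instead.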

I got a copy of the code that was running slowly and generated an estimated plan against a test copy of the DB. It was a complicated SELECT statement with five joins, one view and six correlated sub-queries, each of which had multiple joins. The code originated from vendor software. Despite the complexity of the query, there were no smoking guns like an expensive index scan or key lookup. The most expensive part was accessing the data from the view, which accounted for about half of the work in the estimated plan. The view consisted of just two tables. The row counts were not even particularly high – a few thousand here, a few hundred thousand there, and one large table containing 15 million rows. I ran the code for about ten minutes and stopped it.
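For a query that may run for hours, getting the estimated plan without executing anything is the safe first step. One way, besides the "Display Estimated Execution Plan" button in SSMS, is the `SET SHOWPLAN_XML` option (the placeholder stands in for the vendor query, which I can't reproduce here):

```sql
-- Return the estimated plan as XML instead of running the statement.
SET SHOWPLAN_XML ON;
GO
SELECT ... ; -- the vendor's problem query goes here
GO
SET SHOWPLAN_XML OFF;
GO
```

The XML result opens as a graphical plan in SSMS, and nothing in the batch between the two settings actually executes.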

I figured I would begin my tuning process by focusing on the view, since it seemed to account for half of the total cost in the estimated plan. I was concerned that there might be an issue with how the view was being materialized at run time, so I temporarily replaced it with its underlying SELECT statement in the form of a common table expression (CTE) and joined to that instead. I also found that the optimizer was using a less-than-optimal index within the view, so I added an index hint in the FROM clause of the CTE to force a specific index, which should have made the CTE cheaper. The result: the problematic query finished in about three minutes. I was excited.
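The shape of that rewrite looked something like this. All table, view, index and column names below are illustrative stand-ins (the vendor schema is not mine to share); the point is the CTE replacing the view and the table-level index hint inside it:

```sql
-- The view's underlying two-table SELECT, pulled into a CTE,
-- with an index hint forcing the index I wanted the optimizer to use.
WITH TaxDetail AS
(
    SELECT h.VendorID, t.TaxCode, t.TaxAmount
    FROM dbo.TaxHeader AS h WITH (INDEX (IX_TaxHeader_VendorID))
    JOIN dbo.TaxLine   AS t ON t.TaxHeaderID = h.TaxHeaderID
)
SELECT va.AdjustmentID, td.TaxCode, td.TaxAmount
FROM dbo.VendorAdjustment AS va
JOIN TaxDetail AS td ON td.VendorID = va.VendorID;
```

Index hints are legal inside a CTE definition just as in any FROM clause, which makes this a convenient way to test "what if the view used the other index?" without altering the view itself.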

I requested that the revised query be executed against the production copy over the weekend, since I didn't have access. I also requested that UPDATE STATISTICS WITH FULLSCAN be run on the relevant tables to rule out any issues from out-of-date statistics. I came in on Monday with high expectations that everything had worked fine over the weekend. I was wrong. I was told the changes had done no good and that the query had been terminated sometime Saturday afternoon. They did, however, refresh the lab database with the latest production backup.
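The statistics refresh I asked for is a one-liner per table. FULLSCAN reads every row rather than a sample, so the histogram is as accurate as it can get (table names again illustrative):

```sql
-- Rebuild all statistics on the relevant tables from every row,
-- ruling out sampling error as a cause of bad estimates.
UPDATE STATISTICS dbo.TaxHeader WITH FULLSCAN;
UPDATE STATISTICS dbo.TaxLine   WITH FULLSCAN;
UPDATE STATISTICS dbo.VendorAdjustment WITH FULLSCAN;
```

On a 15-million-row table this takes real IO, which is why it was a weekend request rather than a midday one.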

I started looking at the execution plan again because I was still convinced the optimizer was not generating an efficient one. This time I charted out each table’s indexes with their keys and row counts to understand the structure and to validate that each join was using an indexed column. I also checked the selectivity of each index, which is calculated by dividing the number of distinct values by the total row count. If an index isn’t particularly selective, it won’t be useful to the optimizer. An extra challenge with this type of tuning is that if the query never completes (or takes several hours to complete), it’s hard to evaluate any changes you make. After reviewing the indexes used in all the joins, I again wanted to improve the most expensive part of the plan, which was a hash operator accessing the CTE. I added an INNER LOOP join hint to the section of the query using the CTE, and I used SET STATISTICS IO ON and SET STATISTICS PROFILE ON to capture the number of logical reads for each table, in case it completed in a reasonable amount of time. The query finished in 8 seconds. 8 SECONDS! I figured my customer would be pleased. I ran the query several more times to make sure I wasn’t crazy.
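The selectivity check and the measurement setup look like this (table and column names are illustrative):

```sql
-- Selectivity of a join key: distinct values divided by total rows.
-- A result near 1.0 means nearly unique (great for seeks);
-- a result near 0 means the column barely narrows the search.
SELECT COUNT(DISTINCT VendorID) * 1.0 / COUNT(*) AS selectivity
FROM dbo.TaxLine;

-- Instrumentation for each test run:
SET STATISTICS IO ON;      -- per-table logical/physical reads
SET STATISTICS PROFILE ON; -- actual plan with per-operator row counts
```

Comparing logical reads per table between runs is often a clearer before/after signal than elapsed time, since it is unaffected by caching and concurrent load.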

Thinking more about the change I made, I realized the improvement had nothing to do with the conversion of the view to a CTE, the index hint in the CTE, or even changing the hash join to a nested loops join. The original hardware wasn’t the problem either, for that matter. This was an extreme case of the optimizer guessing "wrong" about how best to access the data, based on the available indexes, statistics and database design. A side effect of using a join hint is that the optimizer also forces the join order of the tables as they appear in the query text, which apparently was the optimal order for this query. I removed the changes I had made, went back to using the view (even with the original sub-optimal index making it slightly more expensive than the one I wanted), and simply added OPTION (FORCE ORDER) at the end of the query. It ran in record time in both test and production.
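The final fix, then, was the least invasive one possible. In sketch form (names illustrative), the vendor's query went back to its original shape with a single query-level hint appended:

```sql
-- Original view, no index hints, no join hints.
-- FORCE ORDER tells the optimizer to join the tables
-- in exactly the order they appear in the query text.
SELECT va.AdjustmentID, v.TaxCode, v.TaxAmount
FROM dbo.VendorAdjustment AS va
JOIN dbo.vTaxDetail AS v ON v.VendorID = va.VendorID
OPTION (FORCE ORDER);
```

Because OPTION (FORCE ORDER) is a query hint rather than a join or table hint, it leaves every other optimizer decision (join types, index choices) free, which made it the smallest change that preserved the fix.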

Now what about the intermediate run on the older test copy, where I got the query to complete in three minutes without forcing the join order? It no longer mattered, so I didn’t spend any more time on it, which is actually a good thing to remember about prioritizing your day. I was, however, curious whether an index hint has the same join-order-forcing effect as a join hint. After some experimenting in a test database, I proved to myself that it does not: with only an index hint, the optimizer will still join the tables in whatever order it thinks is optimal.
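My experiment amounted to comparing the actual plans of two variants like these (tables and index illustrative):

```sql
-- 1) Table-level index hint only: the optimizer honors the index
--    but remains free to reorder the joins.
SELECT s.KeyCol
FROM dbo.SmallTable AS s WITH (INDEX (IX_SmallTable_Key))
JOIN dbo.BigTable   AS b ON b.KeyCol = s.KeyCol;

-- 2) Join hint: the join algorithm is forced, and as a side effect
--    so is the written join order for the whole query.
SELECT s.KeyCol
FROM dbo.SmallTable AS s
INNER LOOP JOIN dbo.BigTable AS b ON b.KeyCol = s.KeyCol;
```

In the first plan the smaller input can still end up on either side of the join; in the second, the tables are joined in text order.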

Most DBAs know that many performance problems are caused by out-of-date statistics, and that simply updating them can make the difference between a bad plan and a good one. In my experience, though, certain database problems can’t be fixed by simply updating statistics, at least not at the default sampling rate (which on large tables covers only a small fraction of the rows). In those cases, a more sophisticated approach has to be worked out for the optimizer to produce a good plan: usually updating statistics with a higher sampling rate, or some sort of code or index modification. The reason a query stopped working well may never be known, although the average busy database server is a very dynamic environment, as workloads, server resources, code and data sizes change over time.
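Raising the sampling rate is a one-line change per table (name illustrative); the trade-off is simply more IO during the update:

```sql
-- Sample half the rows instead of the size-based default,
-- giving the optimizer a more faithful histogram.
UPDATE STATISTICS dbo.TaxLine WITH SAMPLE 50 PERCENT;
```

There is also a PERSIST_SAMPLE_PERCENT option on newer versions to keep that rate for future automatic updates, though I did not need it here.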

The optimizer’s job is to translate the query text into a form the engine can execute and to determine how best to get the data, using the algorithms in play for the particular version of SQL Server and its cumulative updates. It’s usually excellent at this, considering what it needs to accomplish in a very small amount of time. When it is wrong, however, it can be off by orders of magnitude. In this case, the terminated 28-hour run was roughly 42 times the original 40 minutes, and more than 12,000 times the final query’s 8-second elapsed time (and it still hadn’t finished).

What if the FORCE ORDER hint hadn’t solved the problem? I would then have methodically gone through each join, calculating selectivity and cardinality, ensuring that the minimum number of intermediate rows was passed from operation to operation and that the right indexes were being used, in effect acting as my own optimizer. For additional information on “acting as your own optimizer,” see: SQL Tuning, Dan Tow, O'Reilly Media.
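"Acting as your own optimizer" can start as simply as measuring the intermediate result size at each successive join, looking for the step where the row count explodes (names illustrative):

```sql
-- Step 1: driving table alone.
SELECT COUNT(*) FROM dbo.VendorAdjustment;

-- Step 2: after the first join.
SELECT COUNT(*)
FROM dbo.VendorAdjustment AS va
JOIN dbo.TaxHeader AS h ON h.VendorID = va.VendorID;

-- Step 3: after the second join. If this count is vastly larger
-- than step 2, the join order or predicates deserve attention.
SELECT COUNT(*)
FROM dbo.VendorAdjustment AS va
JOIN dbo.TaxHeader AS h ON h.VendorID = va.VendorID
JOIN dbo.TaxLine   AS t ON t.TaxHeaderID = h.TaxHeaderID;
```

The join order that keeps each intermediate count smallest is usually the one you want the optimizer, or your FORCE ORDER rewrite, to choose.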

Final Notes

Query hints are generally frowned upon, and the decision to use one should be a last resort made by experienced professionals; you might end up doing more harm than good. In this case, the use of the FORCE ORDER hint was warranted. The improvement in processing time was so dramatic that no one had to spend the weekend babysitting the troublesome process or, worse, abandon the code and resort to error-prone, labor-intensive manual processing! The program manager was pleased.

I was able to get this resolved by not getting side-tracked by the “noise” (disk response time, MAXDOP settings, fragmentation, etc.) and going straight to the execution plan, since workloads like this rarely get massively worse from one run to the next: that’s an orders-of-magnitude problem. And yes, hardware issues could explain that as well, but what I’ve seen over and over is that a query whose runtime changes by orders of magnitude usually has a bad plan. The resolution will vary: implementing statistics maintenance jobs, increasing the statistics sampling rate, creating a new index, or modifying code. In this case, it was a join hint that saved the day, since updating statistics wasn’t enough to solve the problem.

