Our table design is now 15 years old, originally coded against SQL Server 6.5 but currently running on SQL Server 2012 Standard Edition at numerous physical sites. We put a status field, a char(1), in the customer order header and detail tables to denote whether an order header or line is Active, Complete, Cancelled, etc. This field has a nonclustered index on it, but a very skewed data distribution: a very large number of Completes, a much smaller number of Cancelleds, and a relatively tiny number of Active rows. Many of our queries revolve around finding customers’ orders that are active and extracting other data relating to those orders.
select some fields
from the order detail table
where co_detail_status = 'A'
As the years have rolled on and our application has grown massively, we now have large numbers of records in the orders tables. These tables also have many fields, but only a very small number of rows have co_detail_status = ‘A’. This usually makes SQL Server use bookmark lookups, which in our system are always more efficient for these queries than table scans. However, by the time you have thrown in a few joins to other tables on their own _status = ‘A’ predicates, plus other constructs relating to split deliveries and other factors, some queries were not running as sweetly as I would like.
Over the years I have made a habit of looking, every month, at queries that require more than 10,000 – 100,000 reads and run frequently (or take a significant amount of time to run). Many of these queries involved _status = ‘A’ in some shape or form. It was clear that I would need to do something to keep performance levels consistent.
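For what it is worth, one way to spot such queries is via the plan cache DMVs, available since SQL Server 2005; the 10,000-read threshold below is just our own rule of thumb, and the counters are cumulative only since each plan was cached.

```sql
-- Frequently-run, read-heavy statements from the plan cache.
SELECT TOP (20)
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.total_logical_reads / qs.execution_count > 10000
ORDER BY qs.total_logical_reads DESC;
```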
I started thinking about archiving by moving the Completed data to its own table or database, as I am sure many of you have discussed and implemented over the years. However, this would involve a fair amount of work or re-coding one way or the other. I kept thinking, and at the back of my mind I knew that what I really needed was to filter the Active records into their own (small) table; the filtered index feature introduced in SQL Server 2008 seemed to hold promise.
Then I hit on the idea of creating ‘virtual tables’ based on a filtered index on _status = ‘A’ that includes all the other fields in the table, guaranteeing that a query can get any associated row data without having to go elsewhere. My first thought was that it did not feel right to include every other column in the index, but I tried it. This creates, in effect, a virtual table of just the active orders.
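As a minimal sketch of the idea, such an index on an order detail table might look like this (the table and column names here are illustrative examples, not our real schema):

```sql
-- A filtered 'virtual table' of active order lines only.
-- Table and column names are hypothetical.
CREATE NONCLUSTERED INDEX IX_co_detail_active
ON dbo.co_detail (co_detail_status)
INCLUDE (co_number, co_line, item_code, qty_ordered, qty_shipped) -- every remaining column
WHERE co_detail_status = 'A';
```

Because the WHERE clause keeps only the tiny Active slice, the INCLUDE list can cover the entire row without the index growing large.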
Because there are very few rows with this status, the indexes were not large, and the overhead of maintaining them was not huge, as relatively few orders change status each day. After some testing I applied this technique to a handful of tables, and it delivered a very significant reduction in IO across a wide range of queries, old and new, with no code changes.
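A quick sanity check on size and row count can be done with the catalog views (the table name here is a hypothetical example):

```sql
-- Size and row count of any filtered indexes on the table.
SELECT i.name AS index_name,
       ps.row_count,
       ps.used_page_count * 8 AS used_kb   -- pages are 8 KB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.co_detail')
  AND i.has_filter = 1;
```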
Even better is the fact that it works on SQL Server Standard Edition: the optimiser uses filtered indexes automatically, unlike indexed views (which need the NOEXPAND hint on Standard Edition) or table partitioning, which is not available in Standard Edition at all. The optimiser knows it can get all the data from the filtered index and does indeed use it very consistently. We previously found the odd complex query that would erroneously revert to a much slower table scan, usually fixed with a manual statistics update; those queries now select the filtered index without fail. The only extra maintenance task is to drop and recreate the index whenever a new field is added to one of the tables in question, to make sure it appears in the included column list.
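Rather than a literal delete-and-recreate, CREATE INDEX with the DROP_EXISTING option does this in one statement; a sketch, again with hypothetical names:

```sql
-- Rebuild the filtered index so a newly added column is covered.
-- All table and column names are hypothetical.
CREATE NONCLUSTERED INDEX IX_co_detail_active
ON dbo.co_detail (co_detail_status)
INCLUDE (co_number, co_line, item_code, qty_ordered, qty_shipped, new_column)
WHERE co_detail_status = 'A'
WITH (DROP_EXISTING = ON);
```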
I am not saying that this is a good solution to every problem, or indeed to many problems, but the technique is worth adding to your toolkit, as it may prove beneficial under the right circumstances. Make sure you test any changes thoroughly; misused, this technique could create many more problems than it solves.