
Freaky statistics feature


SQL Server statistics can make things a bit freaky. You might have read one of my previous articles describing a specific performance issue caused by stale statistics – No thrills about spills.

In this article I will talk a bit about the distribution steps in statistics and something that may possibly freak you out.

I recently looked at the following statistics in the StackOverflow database:

DBCC SHOW_STATISTICS('Posts', PK_POSTS)

And the result was as follows:

[Screenshot: DBCC SHOW_STATISTICS output for PK_POSTS – the header shows a freshly updated, fully sampled statistic, but the histogram contains only 7 steps]

WHAT?!?!

Where are the 200 distribution steps? Surely this can’t be right? I have only just updated these statistics, as you can see from the first row of the results. There are 21,736,594 rows in the table and all of them were sampled.

But I only have 7 steps!

What’s going on here?

There is an answer to this.

SQL Server initially builds the distribution histogram with up to 200 distinct steps, provided there are at least 200 rows in the table. But just before completing the statistics creation process, it consolidates adjacent steps where the AVG_RANGE_ROWS values are the same and the RANGE_HIGH_KEY of the next step is a continuation of the current step, i.e. where combining the two steps would not change the AVG_RANGE_ROWS value. Since the estimates would be identical either way, SQL Server saves space by merging those steps when it stores the statistics, and the smaller histogram is probably also cheaper for it to use.
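If you want to look at the surviving steps without squinting at the DBCC output, a query along these lines will do it (a sketch, assuming SQL Server 2016 SP1 CU2 or later for the sys.dm_db_stats_histogram DMF, that the Posts table lives in the dbo schema, and that the statistics object really is named PK_POSTS as above):

-- List the histogram steps for the PK_POSTS statistics.
-- sys.dm_db_stats_histogram requires SQL Server 2016 SP1 CU2 or later.
SELECT s.name AS stats_name,
       h.step_number,
       h.range_high_key,
       h.range_rows,
       h.equal_rows,
       h.average_range_rows
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) AS h
WHERE s.object_id = OBJECT_ID('dbo.Posts')
  AND s.name = 'PK_POSTS'
ORDER BY h.step_number;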

Looking at the statistics above, during the creation process the steps could potentially have been something like this:

[Illustration: the same histogram as it might have looked before consolidation, with adjacent steps 4, 5 and 6 sharing the same AVG_RANGE_ROWS]

SQL Server is able to determine that consolidating steps 4, 5 and 6 would not make any difference to the estimates.

For example, if we run the following query:

SELECT * 
FROM Posts 
WHERE Id = 500001

Is there any difference in the estimated row count between the two histograms?

The answer is NO. Hence, these steps are combined by SQL Server.
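If you want to see this for yourself, one way (just a sketch; it captures the estimated plan so you can read the EstimateRows attribute on the seek operator) is:

-- Capture the estimated plan without executing the query;
-- the EstimateRows attribute in the plan XML shows the optimizer's estimate.
SET SHOWPLAN_XML ON;
GO
SELECT *
FROM Posts
WHERE Id = 500001;
GO
SET SHOWPLAN_XML OFF;
GO

For a unique key like Id, the estimate is 1 row whichever version of the histogram is in place.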

This is probably applicable to:

1) Columns that are unique and sequential, e.g. identity columns

2) Sequential primary keys

3) Columns where the data is very evenly distributed

I’m sure there are other cases this would apply to.
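If you want to reproduce this outside of the StackOverflow database, a small sketch along these lines will do it (the table, column and constraint names here are made up purely for illustration):

-- Hypothetical repro: a sequential, unique key usually ends up with
-- far fewer than 200 histogram steps once the steps are consolidated.
CREATE TABLE dbo.StepDemo
(
    Id     INT IDENTITY(1,1) NOT NULL,
    Filler CHAR(100)         NOT NULL DEFAULT 'x',
    CONSTRAINT PK_StepDemo PRIMARY KEY CLUSTERED (Id)
);

INSERT INTO dbo.StepDemo (Filler)
SELECT TOP (1000000) 'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

UPDATE STATISTICS dbo.StepDemo WITH FULLSCAN;

-- Check how many steps survived.
DBCC SHOW_STATISTICS ('dbo.StepDemo', PK_StepDemo) WITH HISTOGRAM;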

Notably, though, SQL Server consolidates a lot more steps when statistics are updated with FULLSCAN. And rightly so, because it then has a complete picture of the data distribution.
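A quick way to see this for yourself is to update the same statistics object with a sample and then with FULLSCAN, and compare the step counts (again just a sketch, reusing the PK_POSTS statistics from earlier):

-- Sampled update: typically leaves more histogram steps behind.
UPDATE STATISTICS dbo.Posts PK_POSTS WITH SAMPLE 10 PERCENT;
DBCC SHOW_STATISTICS ('dbo.Posts', PK_POSTS) WITH STAT_HEADER, HISTOGRAM;

-- FULLSCAN update: SQL Server sees every row and can consolidate more aggressively.
UPDATE STATISTICS dbo.Posts PK_POSTS WITH FULLSCAN;
DBCC SHOW_STATISTICS ('dbo.Posts', PK_POSTS) WITH STAT_HEADER, HISTOGRAM;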

I hope you find this useful, but if you see this scenario, DON’T PANIC!

