
Estimated rows, actual rows and execution count


It’s often said that a major discrepancy between estimated and actual row counts in a query’s execution plan is a sign of inaccurate statistics or a poor cardinality estimate, and hence a sign of a problem. That’s generally true; however, there are places where the estimated and actual rows will differ, often quite dramatically, without it being a problem at all. The reason is that the two values show slightly different things.

Let’s take a look at an example. (table creation code at the end of the post)

select bt.id, bt.SomeColumn, st.SomeArbDate
from dbo.BigTable bt
inner join dbo.SmallerTable st on bt.SomeColumn = st.LookupColumn
where bt.id between 5000 and 5100
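
If you want to see these numbers for yourself, run the query with the actual execution plan included. One way of doing that (the “Include Actual Execution Plan” option in Management Studio works just as well) is to switch on STATISTICS XML around the query:

-- Return the actual execution plan (ShowPlan XML including run-time
-- counters such as actual rows and executions) along with the results.
SET STATISTICS XML ON;

select bt.id, bt.SomeColumn, st.SomeArbDate
from dbo.BigTable bt
inner join dbo.SmallerTable st on bt.SomeColumn = st.LookupColumn
where bt.id between 5000 and 5100;

SET STATISTICS XML OFF;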

[Image: index seek properties showing the estimated vs actual row count discrepancy]

Estimated rows = 1, actual rows = 101. That’s a large discrepancy, but what caused it? It’s not out-of-date statistics (a usual cause), because the table has only just been created. So why is the estimate so far from the actual?

Let’s take a closer look at that seek. The seek predicate is an equality match on LookupColumn. That can only ever return one row, because the table was populated with unique values for that column (even though the index on the column is not defined as unique). So the estimated row count is dead on: the index seek will return a single row. The question is where the actual of 101 comes from.
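
A quick way to confirm that the values in LookupColumn really are unique (assuming the table creation code at the end of the post has been run):

-- If every LookupColumn value is unique, these two counts will be equal,
-- which is why an equality seek on the column can never return more than one row.
SELECT COUNT(*)                     AS TotalRows,
       COUNT(DISTINCT LookupColumn) AS DistinctValues
FROM dbo.SmallerTable;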

This seek is on the inner table of a nested loop join. The way the nested loop join works is to query the outer table of the join and then to query the inner table once for each row returned by the outer table. A look at the details of the clustered index scan that defines the outer table of the nested loop shows that it returns 101 rows.
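
In rough pseudo-code (a sketch of the logic only; the engine obviously doesn’t use a cursor internally), the nested loop join for this query does something like this:

-- Conceptual illustration of the nested loop join:
-- one pass over the outer input, one seek on the inner input per outer row.
DECLARE @SomeColumn char(4);

DECLARE OuterRows CURSOR LOCAL FAST_FORWARD FOR
    SELECT SomeColumn
    FROM dbo.BigTable
    WHERE id BETWEEN 5000 AND 5100;     -- outer input: returns 101 rows

OPEN OuterRows;
FETCH NEXT FROM OuterRows INTO @SomeColumn;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- inner input: executed once per outer row, 101 times in total
    SELECT SomeArbDate
    FROM dbo.SmallerTable
    WHERE LookupColumn = @SomeColumn;

    FETCH NEXT FROM OuterRows INTO @SomeColumn;
END

CLOSE OuterRows;
DEALLOCATE OuterRows;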

[Image: properties of the clustered index scan forming the outer table of the join]

Since the outer table returns 101 rows, the seek on the inner table must be done 101 times. That’s supported by the execution count shown on the inner seek. That’s where the discrepancy between actual and estimated rows comes from.

When an operator is executed multiple times as part of the query execution, the estimated row count refers to the number of rows that the optimiser estimates will be affected per execution. The actual row count refers to the total number of rows that the operator affected, cumulative over all executions. So when checking for a major discrepancy between estimated and actual row counts, the actual row count has to be divided by the number of executions. In this example that gives 101 actual rows / 101 executions = 1 row per execution, which matches the estimate exactly.

That’s fine when using SQL 2008’s Management Studio, which exposes the execution count of an operator in that operator’s tooltip. SQL 2005’s Management Studio did not display the execution count anywhere convenient, though it is present in the XML of the plan. This is purely a feature of the version of Management Studio: SQL 2008’s Management Studio will display the execution count regardless of whether it’s connected to SQL 2005 or to SQL 2008.

For those still using SQL 2005’s tools, if you want the execution count, this is where to look:

<RelOp AvgRowSize="15" EstimateCPU="0.0001581" EstimateIO="0.003125"
       EstimateRebinds="85.432" EstimateRewinds="0" EstimateRows="1"
       LogicalOp="Index Seek" NodeId="2" Parallel="false" PhysicalOp="Index Seek"
       EstimatedTotalSubtreeCost="0.0480212" TableCardinality="3956">
  <OutputList>
    <ColumnReference Database="[Testing]" Schema="[dbo]" Table="[SmallerTable]" Alias="[st]" Column="SomeArbDate" />
  </OutputList>
  <RunTimeInformation>
    <RunTimeCountersPerThread Thread="0" ActualRows="101" ActualEndOfScans="101" ActualExecutions="101" />
  </RunTimeInformation>

The number of executions, along with the actual row count is contained within the XML node RunTimeInformation. Obviously this node will not be present when looking at an estimated execution plan or an execution plan retrieved from the plan cache, as neither contains any run-time information.
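
If you’d rather not dig through the XML by hand, something like the following works too. It’s a rough sketch: it assumes the actual plan XML has been assigned to an xml variable, and that the plan is serial so there’s a single RunTimeCountersPerThread node per operator. It lists estimated rows, actual rows and executions for each operator.

DECLARE @plan xml;
-- Assign the actual execution plan XML here, e.g. the ShowPlan XML
-- returned by SET STATISTICS XML ON.
-- SET @plan = ...;

;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  op.value('@NodeId', 'int')               AS NodeId,
        op.value('@PhysicalOp', 'varchar(60)')   AS PhysicalOp,
        op.value('@EstimateRows', 'float')       AS EstimatedRowsPerExecution,
        rt.value('@ActualRows', 'bigint')        AS ActualRowsTotal,
        rt.value('@ActualExecutions', 'bigint')  AS Executions
FROM    @plan.nodes('//RelOp') AS x(op)
        CROSS APPLY op.nodes('./RunTimeInformation/RunTimeCountersPerThread') AS y(rt);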

Table creation code.

Edit: In the original post I left 2 indexes out of the table creation, which changed the behaviour of the query completely (hash joins instead of nested loop). If you tried to reproduce my results and couldn’t, you should be able to now with the correct indexes.

Create Table BigTable (
    id int identity primary key,
    SomeColumn char(4),
    Filler char(100)
)

Create Table SmallerTable (
    id int identity primary key,
    LookupColumn char(4),
    SomeArbDate Datetime default getdate()
)
INSERT INTO BigTable (SomeColumn)
SELECT top 250000
char(65+FLOOR(RAND(a.column_id *5645 + b.object_id)*10)) + char(65+FLOOR(RAND(b.column_id *3784 + b.object_id)*12)) +
char(65+FLOOR(RAND(b.column_id *6841 + a.object_id)*12)) + char(65+FLOOR(RAND(a.column_id *7544 + b.object_id)*8))
from master.sys.columns a cross join master.sys.columns b
INSERT INTO SmallerTable (LookupColumn)
SELECT DISTINCT SomeColumn
FROM BigTable TABLESAMPLE (25 PERCENT)
GO
CREATE NONCLUSTERED INDEX [idx_BigTable_SomeColumn]
  ON [dbo].[BigTable] ([SomeColumn] ASC)
GO
CREATE NONCLUSTERED INDEX [idx_SmallerTable_LookupColumn]
  ON [dbo].[SmallerTable] ([LookupColumn] ASC)
  INCLUDE ( [SomeArbDate])
GO
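
As a rough sanity check after running the script (the exact numbers will vary between runs, since the data is random and TABLESAMPLE is non-deterministic):

-- BigTable should hold 250000 rows (the TOP in the insert); SmallerTable
-- holds the distinct SomeColumn values picked up by the 25 percent sample,
-- so its count will differ from run to run.
SELECT (SELECT COUNT(*) FROM dbo.BigTable)     AS BigTableRows,
       (SELECT COUNT(*) FROM dbo.SmallerTable) AS SmallerTableRows;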