Different query plans and elapsed times SQL 2012 and 2017

  • Hi, I'm trying to understand why the query cost is higher on a SQL 2012 instance while the elapsed time is shorter.

    We have two VMs, each with 4 cores and 16 GB RAM.
    One VM runs SQL Server 2012 (latest SP) on Windows Server 2012 R2.
    One VM runs SQL Server 2017 (latest CU) on Windows Server 2016.

    I have copied the AdventureWorks2012 database to both servers and set compatibility level 110 on the 2012 server and compatibility level 140 on the 2017 server.

    Running this AdventureWorks stored proc on both:
    exec [dbo].[uspGetEmployeeManagers] @BusinessEntityID = 8
    It returns 3 rows.

    The query plan is different between the two.

    The 2012 server estimates 57 rows, with a subtree cost of 0.11, and completes in 4 ms.
    The 2017 server estimates 3 rows, with a subtree cost of 0.05, and completes in 19 ms.
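    For anyone wanting to reproduce these numbers, the IO and time output below comes from the standard statistics switches; a minimal sketch of the measurement setup (run in SSMS, with "Include Actual Execution Plan" enabled to also see the estimated rows and subtree costs):

    ```sql
    -- Report per-table logical/physical reads and CPU/elapsed time
    -- for the procedure call on each server.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    EXEC [dbo].[uspGetEmployeeManagers] @BusinessEntityID = 8;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```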

    2012 statistics io and time:
    Table 'Worktable'. Scan count 2, logical reads 25, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Person'. Scan count 0, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Employee'. Scan count 8, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 4 ms.

    2017 statistics io and time:
    Table 'Worktable'. Scan count 2, logical reads 25, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Person'. Scan count 0, logical reads 21, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Employee'. Scan count 5, logical reads 46, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:
    CPU time = 16 ms, elapsed time = 19 ms.

  • With SQL Server 2014, Microsoft changed the way cardinality estimation works, and a trace flag allows you to fall back to the previous cardinality estimator. That would explain the differences in query plans and estimates; there is an article at Microsoft about it. There are cases where the older (legacy) estimator creates better plans and cases where the newer one creates better plans, and this looks like a case where the older estimator is better. I would also argue that, unless the proc runs thousands of times per minute, a difference measured in milliseconds probably isn't enough evidence to go back to the legacy estimator across the board.
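    For reference, the trace flag being described is 9481 (revert to the pre-2014 legacy cardinality estimator). A hedged sketch of the two usual ways to apply it; the SELECT here is a stand-in for the statement inside the procedure, not the proc's actual body:

    ```sql
    -- Per-statement: QUERYTRACEON 9481 makes just this query use the
    -- legacy CE (normally requires sysadmin-level permissions).
    SELECT e.BusinessEntityID
    FROM HumanResources.Employee AS e
    OPTION (QUERYTRACEON 9481);

    -- Instance-wide: enable the legacy CE globally. This affects every
    -- query on the instance, so use with care.
    DBCC TRACEON (9481, -1);
    ```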

  • Thank you Jack. It confuses me that the row estimate is more accurate and the subtree cost is lower in 2017, yet the elapsed time is longer.

    The reason this came up is that we will be migrating databases from SQL Server 2012 to SQL Server 2017, so I was running the SQLQueryStress tool against the old and new servers. 10,000 iterations of the stored proc showed a consistent difference in "CPU seconds/iteration" and "logical reads/iteration" between the two servers.

    Looks like the cardinality estimator change accounts for the difference in query plan. 

    Howard

    ps: the good news is that when testing with OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION')), the query plans are identical on the two servers, and so is the elapsed time.
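    If you'd rather not edit every query, SQL Server 2016 and later also offer a database-wide equivalent of that hint via a database-scoped configuration; a sketch (assumes the restored database keeps the AdventureWorks2012 name — test before relying on it):

    ```sql
    -- Make the whole database use the pre-2014 cardinality estimator
    -- without lowering the compatibility level.
    USE AdventureWorks2012;
    ALTER DATABASE SCOPED CONFIGURATION
        SET LEGACY_CARDINALITY_ESTIMATION = ON;
    ```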

