
Querying the Procedural Cache: Just in Time for the New Year



I sincerely hope that everyone enjoyed the recent holidays and downtime with family, and I pass on my wishes for peace, happiness and health in 2013.

The goal of this post is to aid in understanding the procedure cache and execution plans, so that we use fewer resources and our database queries run optimally. Better means higher throughput, more concurrency and fewer resources, as described by SQL Server MVP Joe Webb during a session at SQLTeach 2009 in Vancouver (my original hometown; photo of the new convention centre below), where I first took a serious look at this subject. Of course, this is just a quick blog post, so for those of you who want far greater detail, please see Grant Fritchey's free e-book on execution plans, now in its 2nd edition.

Vancouver's New Convention Centre (image: Dexigner.com)
In the result set returned by the query below (thanks to the dynamic management view sys.dm_exec_query_stats), the last column on the right, produced by querying the procedure cache, provides an XML representation, viewable as a graphical plan, of the data retrieval methods chosen by the Query Optimiser. When reading an execution plan, we read it from right to left, paying close attention to the relative execution costs (when high) of the specific statements displayed: physical operation, logical operation, I/O cost, CPU cost, rows, and row size. For easier readability, zooming in and out is available, as is the Properties window, which I usually keep hidden, unfortunately, and then forget about (to my own disappointment).

-- view all the recent queries, top fifty in this case
SELECT TOP 50
    DB_NAME(qp.dbid) AS [Database],
    qp.number,
    qt.text AS [QueryText],
    qs.total_logical_reads AS [Reads],
    SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1) AS [StatementText],
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.total_logical_reads DESC

There may be a thousand ways to resolve a query; finding one that is good enough to return the results in the most efficient way is what the query optimiser (QO) does. The QO uses a cost-based algorithm to estimate the best way for the query to run as it is passed from the relational engine, before being handed off, in binary form, to the storage engine. Before the query leaves the relational engine of SQL Server, it is processed and the plan may change, or the execution plan may be re-estimated.
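
Because that costing work is expensive, the compiled plan is kept in the procedure cache for reuse. Here is a minimal sketch of a sanity check on what is cached and how often it is being reused; the TOP 20 and the ordering are arbitrary choices of mine:

-- a quick look at which plans sit in the procedure cache and how often they are reused
SELECT TOP 20
    cp.usecounts,          -- how many times this cached plan has been reused
    cp.objtype,            -- Adhoc, Prepared, Proc, etc.
    cp.size_in_bytes,
    qt.text AS [QueryText]
FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS qt
ORDER BY cp.usecounts DESC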

For example, as part of your stored procedure you are doing recurrent updates, and perhaps you have changed up to 50% of your rows, so the table statistics have changed as a consequence. Meanwhile, the plan that was created originally is out of date because of those statistics. Maybe there is now a better index to use, or the underlying data has changed to the point where the stored procedure needs to recompile. In practice, it's best to base your decisions on the Actual plan, not the estimated plan. To see what I mean, in SQL Server Management Studio click Query and select Include Actual Execution Plan, or, better yet, to see the full before/after story, compare it with Query, Display Estimated Execution Plan.
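
If you suspect stale statistics or want to force a fresh plan, the statements below are a rough sketch of what is involved; dbo.SalesOrders and dbo.usp_GetOrders are hypothetical names used purely for illustration. SET STATISTICS XML and SET SHOWPLAN_XML are the T-SQL equivalents of the SSMS Actual and Estimated plan options.

-- refresh statistics on a (hypothetical) table that has seen heavy updates
UPDATE STATISTICS dbo.SalesOrders WITH FULLSCAN;

-- mark a (hypothetical) stored procedure for recompilation on its next execution
EXEC sp_recompile N'dbo.usp_GetOrders';

-- T-SQL equivalent of "Include Actual Execution Plan": runs the query and returns the plan XML
SET STATISTICS XML ON;
GO
EXEC dbo.usp_GetOrders;
GO
SET STATISTICS XML OFF;
GO

-- T-SQL equivalent of "Display Estimated Execution Plan": compiles only, does not execute
SET SHOWPLAN_XML ON;
GO
EXEC dbo.usp_GetOrders;
GO
SET SHOWPLAN_XML OFF;
GO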

You will want to watch out for clustered index scans, since they are effectively the same as a table scan, which is to be avoided and usually indicates a problem with the query.
If you see a table scan, change the heap to a clustered index, or add an index to the specific column needed; clustered index seeks are a good sign, optimisation-wise. If the tables are of insignificant size, however, don't bother splitting hairs.
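
As a rough sketch of the kind of fix meant above (the table and column names here are made up for illustration): turn the heap into a clustered index, and cover the column the problem query filters on with a nonclustered index.

-- give the (hypothetical) heap a clustered index so full scans can become seeks
CREATE CLUSTERED INDEX CIX_SalesOrders_OrderID
    ON dbo.SalesOrders (OrderID);

-- add a nonclustered index on the (hypothetical) column the query filters on
CREATE NONCLUSTERED INDEX IX_SalesOrders_CustomerID
    ON dbo.SalesOrders (CustomerID);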

An interesting way to do trial and error is to write different versions of the query, have SQL Server produce estimated/actual plans for all the batches, and compare each iteration of the result set. Furthermore, to read an execution plan's findings quickly, the tooltips have been much improved since SQL Server 2008 and give far more detail; they will even tell you now, by correlating with the index-related DMVs, whether an index needs to be created.
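
Those index-related DMVs can also be queried directly. Here is a minimal sketch against the missing-index views; the ordering heuristic is my own, not an official ranking:

-- missing-index suggestions recorded by the optimiser since the last restart
SELECT TOP 20
    mid.statement AS [TableName],
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC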




BTW - alternate methods of producing execution plans include Profiler, the Performance Dashboard, and SQL Server 2008's Data Collector.

