
Two simple commands that can be a big help in performance tuning.


The first thing that comes to mind when discussing performance tuning is query plans, and rightly so. They are the best source of information about what a query is doing and, therefore, how to improve it. However, there are a couple of little commands that can be a big help too. SET STATISTICS TIME ON and SET STATISTICS IO ON give you some quick information about the performance of a query that can, in its own way, be a huge help.

 

SET STATISTICS TIME ON

When trying to tune a query it’s frequently helpful to know precisely how long the query took. When you SET STATISTICS TIME ON, SQL Server returns the CPU and elapsed time spent parsing, compiling, and executing the query. You might think, “Oh big deal, I can see how long my query took in the bottom right-hand corner of the window.” Well yes, but that’s in seconds and this is in milliseconds. Not a big deal if you are tuning down from an hour, but if you are starting at 2 seconds it’s pretty helpful. One last benefit: because the time is printed to the messages pane, if you are running time trials you can easily copy and paste the results to a text file.

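The output below came from running a simple query with the option switched on. The exact query isn’t shown here, so treat this AdventureWorks snippet as just an illustration of the pattern rather than the original statement:

SET STATISTICS TIME ON
-- Any query will do once the option is on; TOP (5) simply matches the row count below
SELECT TOP (5) FirstName, LastName
FROM Person.Person
ORDER BY LastName
SET STATISTICS TIME OFF
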
SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 0 ms.
(5 row(s) affected)
 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 55 ms.

 

SET STATISTICS IO ON

The output of this simple command is possibly one of the most useful tuning tools I’ve used (other than query plans, of course). It simply displays the IO used by the query, broken down by table. This is particularly useful since IO tends to be one of the biggest bottlenecks in any query. Take a look at the output for the view AdventureWorks2012.Sales.vStoreWithContacts.

SET STATISTICS IO ON
SELECT * FROM Sales.vStoreWithContacts
(753 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PersonPhone'. Scan count 753, logical reads 1630, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'EmailAddress'. Scan count 753, logical reads 1635, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Person'. Scan count 0, logical reads 2315, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'BusinessEntityContact'. Scan count 20, logical reads 43, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ContactType'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Store'. Scan count 1, logical reads 103, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PhoneNumberType'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

If you look you can see the seven tables in the view plus a worktable. For each of these the output lists the following:

  • Scan count – Number of seeks/scans started after reaching the leaf level
  • Logical reads – Number of pages read from the cache
  • Physical reads – Number of pages read from disk
  • Read-ahead reads – Number of pages placed into the cache for the query
  • Lob logical reads – Same as logical reads, but for LOB pages
  • Lob physical reads – Same as physical reads, but for LOB pages
  • Lob read-ahead reads – Same as read-ahead reads, but for LOB pages

 

So how does this help? Typically I’ll look for the table with the highest scan count and/or the highest logical + physical reads. You never want to rely on just logical or physical reads alone, since those depend entirely on what is and is not in cache. Once I’ve identified a table with a large number of reads/scans I can target that table for a closer look. Typically I check whether additional indexes would help or whether the way the table is used in the query could be changed. If I were tuning this particular view I would start with Person, PersonPhone, and EmailAddress. Note that the schema is not listed as part of the output, so you will need to look that up from the query. In this particular case the view and table structures are well designed (it is AdventureWorks, after all) so there isn’t really much to do.
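
If you need to track down which schema a table in the output belongs to, a quick catalog query works. This is just one way to do it; the table name ‘Person’ here comes from the output above:

-- Find the schema for a table name shown in the STATISTICS IO output
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name = N'Person'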

 

But Ken, the output is rather confusing.

If you find this output a bit daunting or confusing, Richie Rump (b/t) has created http://statisticsparser.com/ to help us out. Simply paste the output into the box provided and hit the Parse button.

[Image: StatisticsParser1]

And you get an easy to read table that looks like this:

[Image: StatisticsParser2]

Both the TIME and IO outputs are placed into nice, easy-to-read tables. For the IO, Richie even includes the percentage of the total IO that each table took up.

 

Summary

Remember that the query plan will give you a great deal more information, so you don’t want to neglect it. But these two short commands give you some very helpful information and are quite a bit easier to read. I find them a good place to start, and if nothing obvious turns up I turn to the query plan for additional help. You will also want to keep a copy of the STATISTICS output from before and after any tuning efforts so you can get an accurate view of any improvements you’ve made.
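
As a rough template, a before-and-after comparison might look something like this (the tuning change itself, an index or a query rewrite, is whatever you decide on after reading the output):

SET STATISTICS TIME ON
SET STATISTICS IO ON

-- Run the query you are tuning and save the message-pane output (the "before")
SELECT * FROM Sales.vStoreWithContacts

-- ...make your tuning change here (index, query rewrite, etc.)...

-- Run the same query again and compare (the "after")
SELECT * FROM Sales.vStoreWithContacts

SET STATISTICS IO OFF
SET STATISTICS TIME OFF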

Filed under: Microsoft SQL Server, Performance, SQLServerPedia Syndication, System Functions and Stored Procedures, T-SQL Tagged: code language, language sql, microsoft sql server, Performance, T-SQL
