
SQL Server – SET STATISTICS [IO|TIME] ON


My colleagues and I often joke that “if it takes less than a second, then it’s okay!”. The reason is that we quite often encounter T-SQL code that does not perform well enough for the load it will face once it goes live. Developers tend to optimize their queries until the response comes back “immediately”, meaning within the blink of an eye. The problem is that a query returning in, say, 200 ms looks instant to the human eye, but on a multi-core server (which most servers are these days) the actual CPU time can be a lot higher. If the query runs in parallel on a server with 16 cores, a query that returned in 200 ms could actually have used up 3.2 seconds of CPU time, and that is suddenly quite a lot!

So how can we learn more about a query's statistics during development? Luckily, the answer is simple.

When you work in SSMS (SQL Server Management Studio) and have an open connection to a server, you can enable time statistics by executing this statement once:

SET STATISTICS TIME ON

This setting is per connection, and it stays on for as long as your session is active, or until you explicitly turn it off again. With it enabled, the Messages pane shows how long it took to parse and compile the execution plan, and how much CPU time was used to execute the query. The output could look something like this:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.
(100 row(s) affected)
 SQL Server Execution Times:
   CPU time = 93 ms,  elapsed time = 102 ms.

In this example the query used 93 ms of CPU time and had a total duration of 102 ms. The elapsed time is roughly CPU time plus wait time, where the waits can be caused by network latency, IO latency etc. So in this case the query spent about 9 ms waiting for resources.
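A minimal sketch of a full tuning session could look like this; dbo.Orders and OrderDate are just placeholders for whatever you are actually measuring:

-- Enable time statistics for this session
SET STATISTICS TIME ON;

-- Run the query you want to measure (dbo.Orders and OrderDate are hypothetical)
SELECT TOP (100) *
FROM dbo.Orders
ORDER BY OrderDate DESC;

-- Turn it off again when you are done
SET STATISTICS TIME OFF;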

If the query is executed in parallel, you can see a CPU time that is larger than the elapsed time. I ran the test query on my dual-core laptop with the following results:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 2 ms.
(100 row(s) affected)
 SQL Server Execution Times:
   CPU time = 141 ms,  elapsed time = 64 ms.

Now we see that the query used more CPU time than elapsed time, which indicates that it ran in parallel.
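If you want to confirm that parallelism is the cause, you can force a serial plan with a MAXDOP query hint and compare the numbers. The query below is only a sketch using the same hypothetical table as above:

-- Force the query onto a single scheduler and compare the statistics
SELECT TOP (100) *
FROM dbo.Orders
ORDER BY OrderDate DESC
OPTION (MAXDOP 1);

With MAXDOP 1 the CPU time and elapsed time should end up much closer to each other.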

Another great piece of information is the IO statistics per table. You can enable IO statistics by executing:

SET STATISTICS IO ON

You can have both IO and TIME statistics enabled at the same time. If you execute a query that accesses multiple tables, you get one line per table showing the number of scans, logical reads and physical reads performed against it. An example output could be:

(3 row(s) affected)
Table 'cmsDocument'. Scan count 1, logical reads 39, physical reads 0…
Table 'umbracoNode'. Scan count 3, logical reads 178, physical reads 0…
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0 …
Table 'cmsContentVersion'. Scan count 2, logical reads 4, physical reads 0 …
Table 'cmsContentType'. Scan count 1, logical reads 3, physical reads 0 …
Table 'cmsContent'. Scan count 1, logical reads 2, physical reads 0…

This is a very easy way to spot which tables are the most IO heavy, and which parts of the query might benefit from an index or a rewrite.
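Enabling both during a tuning session is just two extra statements, for example:

-- Enable both time and IO statistics for this session
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- ... run and tweak the query you are tuning ...

-- Turn both off again when you are done
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;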

So I can highly recommend keeping these SET STATISTICS commands in mind when you are performance tuning a query, or when you simply want to know a bit more about how well it performs.
