SQL Monitor – Using the performance data smartly

How does a database consultant use a tool such as SQL Monitor? The seasoned expert doesn't just dive straight into the DMVs; they first consider baseline information, application stress, and the interrelation of different factors over time, in order to understand what has changed. It is the art of putting performance data in context.

How to do better tuning by understanding the context of your performance data

Red Gate’s SQL Monitor has become a great way of making basic monitoring easy for a whole range of DBAs, irrespective of their willingness to get immersed in coding. It just works, and can be accessed from any PC or mobile device. It no longer even requires you to install a central monitoring server, since Red Gate can host this for you, security permitting. In this article I would like to discuss five particular areas of Red Gate’s SQL Monitor 3.2 which have saved me a lot of time and effort in my day-to-day work.

These are the five areas:

  •  Installation and setup
  •  Adding servers and instances
  •  Performance metrics (contexts and perspectives)
  •  Trace information
  •  Custom queries

Fast installation and setup

I was impressed by the effortless setup of the product. It literally took only a few minutes from the time I double-clicked the installation package to the moment I saw performance data coming in. This makes me think that the product was developed with a ‘sense of urgency’ in mind. People rarely install performance-monitoring products on a quiet day; instead, we all tend to start monitoring performance in the heat of the moment, when we are tracking a problem and want the information within a minute or two. From this point of view, SQL Monitor has already scored valuable points in my notebook.

Another great aspect of the installation is that the product does not require local clients to be installed on all the monitored servers, which helped tilt the scale when I was convincing the IT security department that using this product was a good idea.

Adding servers and instances

Installation and setup are easy, and the job of adding servers and instances to monitor is even easier. All you need is a server / instance name and authentication credentials. It works very well with clustered SQL Server instances, and it gives valuable information about the host and the instance levels.

There is just one aspect of monitoring a cluster that I miss, though: I would like to know which host my SQL Server instance is currently running on in the case of the active-active clustering model. I would also like to be able to configure an alert which would notify me of instance failovers and keep a history of them. After all, as a DBA, I would like to know when a failover occurred, and what performance conditions led up to it. Was it, for example, a very high CPU load making the server unresponsive? Was it a NIC failure? Was it a manual failover? Maybe this feature will be included in the next version.
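In the meantime, the current host can be checked by hand. A minimal sketch, using two standard SERVERPROPERTY values, might look like this:

```sql
-- Which physical cluster node is this instance currently running on,
-- and is the instance clustered at all?
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node,
       SERVERPROPERTY('IsClustered')                 AS is_clustered;
```

Logging this value periodically (for example, from a SQL Server Agent job) would give a rough failover history until such a feature appears in the tool.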

Overall, though, the task of adding a server (host machine) and a SQL Server instance for monitoring in SQL Monitor is almost effortless, and the performance information is gathered instantaneously. This makes performance tuning easier and less tedious, especially when the performance data is put in context at the instance, host, and server-farm levels.

Various performance metrics at your fingertips

As well as showing plenty of information at a glance, SQL Monitor has several ‘undercover’ features which can save a great deal of time. One of my favorites is the smart correlation of performance data with alerts, so that you can see a graph of the triggering event in context.

Let’s take the example of the High Processor Utilization alert. Just by clicking on the alert, we already have information about the correlated events which occurred at the time the alert was triggered. And these are not merely a pile of correlated events; they are smartly grouped into categories: Host Machine, SQL Server, System processes, SQL/Profiler data, and Top 10 expensive queries.
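For comparison, a rough, hand-rolled equivalent of a “top expensive queries” view can be built from the standard DMVs; this sketch orders statements by total CPU time:

```sql
-- Top 10 statements by accumulated CPU time, from the plan cache.
-- sys.dm_exec_query_stats and sys.dm_exec_sql_text are standard DMVs.
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

The point of the tool, of course, is that it collects and correlates this kind of data continuously, rather than only when you remember to run the query.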

There is nothing better than putting the performance data in context. By providing a graphical overview of the context of an event, we can easily put the event in perspective and even manage notes related to it for further investigation or delegation. For example, let’s imagine that I have set up an alert to notify me about blocking chains in my system, and I’m getting an alert at 3pm every day about a long-lasting blocking chain. With SQL Monitor I can easily follow the recurrence pattern of these alerts, and if I have other tasks prioritized at the moment, I can add a note to the event and forward the investigation to another DBA.
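To see what a blocking-chain alert is reporting at the moment it fires, the underlying picture can also be inspected directly. A minimal sketch against the standard DMVs:

```sql
-- Sessions currently blocked, with the session blocking them and the
-- statement that is waiting. A blocking_session_id of 0 means not blocked.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS blocked_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Following blocking_session_id from row to row reveals the head of the chain, which is exactly the correlation work the alert view does for you.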

Another smart thing is that SQL Monitor provides historical data for the occurrence of alerts. Each occurrence can be clicked on and easily put in the context of the performance data at the time it happened. This makes problems easier to resolve. Without it, performance data would not be readily available, and each DBA would have to do their own ‘fishing and hunting’, with the worst part being that the data collection would happen after the fact.

In conclusion, it is impressively easy to put things in context and even in perspective with SQL Monitor.

Event trace information is available, on demand

There is an option in SQL Monitor to gather SQL Profiler data, which proves very useful when investigating known problems in detail.

SQL Profiler is known for having a definite performance impact on servers, so it is handy that SQL Monitor can collect SQL Trace data only when you require it, ‘on demand’. It would be even better, however, if trace sessions could be scheduled to start and stop at certain times, since I am rarely awake at 2am to start SQL Profiler to debug specific query problems.

This would mean that unattended Profiler sessions could be started and stopped; later on, the data could be correlated with other performance data and put in the context of an alert. Of course, it is possible to start and stop a trace using scripts and a scheduler, but it takes time and effort to set up and test, and more time to gather and analyze the data. In SQL Monitor it is almost effortless.
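For the script-and-scheduler route, a server-side trace avoids the Profiler GUI entirely. A minimal sketch, assuming a local output path, that a SQL Server Agent job could run at 2am:

```sql
-- Create and start a minimal server-side trace capturing completed batches.
-- The file path is an assumption; adjust it for your environment.
DECLARE @TraceID int;
DECLARE @on bit = 1;
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\overnight', 50;
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted, Duration
EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace
-- A second scheduled job would stop and close it later:
-- EXEC sp_trace_setstatus @TraceID, 0;  -- stop
-- EXEC sp_trace_setstatus @TraceID, 2;  -- close and delete the definition
```

This is exactly the setup-and-test overhead the article is referring to; SQL Monitor’s on-demand trace removes most of it.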

However, considering that SQL Trace and Profiler are deprecated as of SQL Server 2012, I would hope that SQL Monitor will soon offer an option to work with Extended Events.

It would be great to have this feature, despite the fact that Extended Events are not available in SQL Server 2005 and are clumsy in SQL Server 2008. But since we are moving towards SQL Server 2012, I think that Extended Events integration could become an essential part of SQL Monitor.
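To illustrate what such integration would replace the trace with, here is a sketch of an Extended Events session in SQL Server 2012 syntax that captures statements running longer than one second (session and file names are made up for the example):

```sql
-- Capture completed statements slower than one second to a file target.
CREATE EVENT SESSION long_statements ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)  -- duration is in microseconds in SQL 2012
ADD TARGET package0.event_file (SET filename = N'long_statements.xel');

ALTER EVENT SESSION long_statements ON SERVER STATE = START;
-- ALTER EVENT SESSION long_statements ON SERVER STATE = STOP;  -- when done
```

Unlike a Profiler trace, such a session is lightweight enough to leave running, which is why it is the natural successor for this kind of monitoring.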

Custom queries!

As a final piece of the puzzle, there is a facility for adding your own custom queries. It is an advanced feature, of course, provided so that experienced DBAs can create custom metrics, which anyone can subsequently view. Even in organizations without experienced DBAs, a custom metric needs only to be set up once, and the alerts will then flow on their own.

Custom queries can be defined on demand, and they can be tied to an alert. These custom metrics can relate to events within applications, such as purchases in an online ‘shop’, or commodity trades. This way, administrators can tailor their own monitoring routines to a specific environment.
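A custom metric query simply returns a single numeric value that SQL Monitor samples on a schedule. A hypothetical sketch for the online-shop example (Sales.Orders and OrderDate are invented names, not part of the product):

```sql
-- Hypothetical custom metric: number of orders placed in the last
-- five minutes. A custom metric query must return one numeric value.
SELECT COUNT(*)
FROM Sales.Orders
WHERE OrderDate >= DATEADD(MINUTE, -5, GETDATE());
```

An alert could then be raised when the value drops to zero during business hours, which would flag an application problem long before anyone checks the shop by hand.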

Even if your organization does not have a seasoned DBA to define and set up the custom metrics, you still have options; for example:

  •  Use the services of a seasoned DBA who can help you set up your own custom metrics

A real-world example:

As a consultant I have often worked with companies which did not have any DBAs. They did not need one, because they would spend a one-time sum on SQL Monitor, and their general IT personnel would then handle the alert setup and monitoring.

When an alert occurred, they would follow its history, and if it was a brand-new event, they would send the event’s context information to me (the consultant DBA). After looking at it, I would recommend a solution: tuning, new hardware, or just a specific one-time task to overcome the problem.

The bottom line: the customer saves money by not calling me (the consultant DBA) more than once for the same problem. When they do call me, they expect me to diagnose the problem and to leave a note in SQL Monitor describing the action to take the next time it happens.

Conclusion:

SQL Monitor is a great tool if you need help putting your performance data in context, and who doesn’t need that? With its effortless installation and setup, and with its built-in and custom performance metrics, it is a very valuable addition to today’s DBA’s arsenal.