
Why I Dislike Third-Party Database Monitoring Tools


I have to admit I am not a fan of the various 3rd-party monitoring tools (after using a few of them over the past 10 years), mostly for the following reasons:

1. Lack of operational productivity. What I see across various products is a common pattern: an alert is received –> a GUI is opened –> a few clicks drill down to the “root cause”

In an environment with a few hundred SQL Server instances, how can you expect a DBA to click through multiple screens just to figure out ONE issue on ONE instance each time?

2. Big TCO (Total Cost of Ownership), as most tools are priced by the number of SQL Server instances they monitor. When you have a few hundred SQL instances to monitor, the cost is usually prohibitive and can make a DBA feel bad that s/he cannot contribute to the company’s bottom line.

3. Compromised ability to design a customized, flexible monitoring framework.

All current DBA monitoring tools are “closed” systems, i.e. they generate data and then consume that data internally. For example, they can collect the CPU usage of a session and then present the data via a GUI or in various reports (PDF, HTML, etc.). However, there is no easy way for me to consume the raw data directly. Say I run the same query on two servers with different numbers of CPUs, and I want to compare the CPU usage by applying a scaling factor to one server’s data to level the playing field; there is no way for me to do this with current monitoring tools.
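To make that concrete, here is a minimal sketch (in PowerShell, with made-up server names and numbers) of the kind of calculation I mean, assuming the tool simply exposed its raw numbers as plain data instead of locking them inside its GUI and reports:

# Illustrative raw data: server name, logical CPU count, CPU time consumed by the query.
$results = @(
    [pscustomobject]@{ Server = 'SQLPROD01'; LogicalCpus = 16; QueryCpuMs = 5200 },
    [pscustomobject]@{ Server = 'SQLPROD02'; LogicalCpus = 32; QueryCpuMs = 6100 }
)

# Normalize by core count so the two servers can be compared on level ground.
$results |
    Select-Object Server, LogicalCpus, QueryCpuMs,
        @{ Name = 'CpuMsPerCore'; Expression = { [math]::Round($_.QueryCpuMs / $_.LogicalCpus, 1) } } |
    Format-Table -AutoSize

The calculation itself is trivial once the data is mine to consume; the point is that no current tool hands me the data in a form where I can do it.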

In my observation, if a company has one or two capable DBAs, it usually will not buy any 3rd-party monitoring tools, because in theory all the functions of those tools can be built in-house, and more often than not you do not need every function a monitoring tool offers (just as you do not use every feature of Microsoft Office; you probably only use 20% of the total features).

But I have to say all the monitoring tools have their merits and value; it is the philosophy behind these tools that I do not agree with. To me, they put unnecessary focus on the GUI, the fancy reports, the multiple drill-down clicks to the “so-called” root of the issue, and the complex configuration for different alerts. All of this means the tools are better suited for reporting to managers what went wrong than for helping DBAs add extra value for the company. Capable, responsible DBAs are more willing to create their own customized solutions than to rely on a tool that meets only 20% of their needs while charging 100% of the price.

If I were the architect of a monitoring tool, I’d design it with only one goal:

Put productivity as the #1 priority for the tool’s existence.

With this principle in mind, I would provide a complete library of monitoring functions and let the DBA (or better, the DBAA, i.e. Database Administration Architect) develop a customized monitoring framework using that library. I would get rid of the GUI and provide only a command-line interface (CLI).
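For example, one building block in such a library could be as simple as the hypothetical Get-TempdbDataUsage function below. This is only a sketch: it assumes the SqlServer PowerShell module (for Invoke-Sqlcmd) is installed, and the function name is my own invention, not from any existing product:

# Return the percentage of allocated tempdb data space currently in use on one instance.
function Get-TempdbDataUsage {
    param(
        [Parameter(Mandatory)] [string] $ServerInstance
    )
    $query = "SELECT SUM(allocated_extent_page_count) * 100.0 / SUM(total_page_count) AS used_pct FROM tempdb.sys.dm_db_file_space_usage;"
    Invoke-Sqlcmd -ServerInstance $ServerInstance -Database tempdb -Query $query |
        Select-Object @{ Name = 'Server'; Expression = { $ServerInstance } }, used_pct
}

# Example call (instance name is illustrative):
# Get-TempdbDataUsage -ServerInstance 'SQLPROD01'

The DBA, not the tool vendor, then decides how to schedule it, what threshold to apply, and what to do with the result.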

I’d argue a GUI is never a productive interface for repetitive work (like database performance monitoring and analysis). Look at Microsoft in the server administration field: MS once won big praise for its GUI-based administration (vs. the traditional Unix-style server administration), yet over the last several years MS has been promoting PowerShell as both a tool and an interface for managing servers. PowerShell is in essence a command-line tool, which is efficient and easily scalable to large, complex environments. Why did MS adopt this approach? To me, it is quite possible that MS realized a GUI is not efficient when you need to manage servers at scale, and that is why a CLI is preferred.
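A quick illustration of why a CLI scales: the same few lines work whether the instance list has five entries or five hundred, with zero extra clicks. This sketch assumes the SqlServer module is installed and that the instance names sit in a plain text file (the file path is illustrative):

# One instance name per line (illustrative path).
$instances = Get-Content -Path 'C:\dba\instances.txt'

# Run the same lightweight check against every instance and get one consolidated result set.
$instances | ForEach-Object {
    Invoke-Sqlcmd -ServerInstance $_ `
        -Query "SELECT @@SERVERNAME AS server_name, COUNT(*) AS user_sessions FROM sys.dm_exec_sessions WHERE is_user_process = 1;"
} | Format-Table -AutoSize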

I’d like to use an example to illustrate why I think a CLI is better. I guess every SQL Server DBA knows about the famous sp_whoisactive tool created by Adam Machanic. To me, this is a perfect tool that I can plug into my overall monitoring framework, especially during troubleshooting. Here are a few scenarios:

1. When my main monitoring thread finds tempdb data space usage above the threshold, it can call sp_whoisactive (with customized parameters) and dump the info into a table; another notification thread then reads from the table and sends a well-formatted HTML email to me (or feeds the info into an SSRS report). A sketch of this flow follows the list below.

2. When my main monitoring thread finds high CPU, long blocking, long-running sessions, high log space usage, etc., sp_whoisactive is called and the same subsequent steps are followed.
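Here is a rough sketch of scenario 1, assuming sp_WhoIsActive is installed in master and the SqlServer PowerShell module is available. The threshold, the DBAMonitor database, and the destination table name are all illustrative assumptions, not part of any existing framework:

$server    = 'SQLPROD01'   # illustrative instance name
$threshold = 80            # tempdb data space used, in percent (illustrative)

$usage = Invoke-Sqlcmd -ServerInstance $server -Database tempdb `
    -Query "SELECT SUM(allocated_extent_page_count) * 100.0 / SUM(total_page_count) AS used_pct FROM sys.dm_db_file_space_usage;"

if ($usage.used_pct -ge $threshold) {
    # Snapshot current activity into a table for the notification thread to pick up.
    # The destination table must already exist; sp_WhoIsActive's @return_schema option
    # can generate its definition. Database and table names are illustrative.
    Invoke-Sqlcmd -ServerInstance $server -Database DBAMonitor `
        -Query "EXEC master.dbo.sp_WhoIsActive @get_plans = 1, @destination_table = 'dbo.WhoIsActive_TempdbAlert';"

    # A separate notification thread can then read that table and send a formatted
    # HTML email (e.g. via Database Mail or Send-MailMessage).
}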

Of course, for a full-scale monitoring tool library, we can add lots of features with different sets of functions (which can be implemented via CLR), such as disk space monitoring, memory/CPU monitoring, and much more of what current GUI tools provide.
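Another library function along the same lines might cover OS-level disk space, shown here in PowerShell via CIM rather than CLR, purely as a sketch (the function name and the 15% default threshold are my own assumptions):

# Report local fixed drives whose free space has fallen below a percentage threshold.
function Get-LowDiskSpace {
    param(
        [Parameter(Mandatory)] [string] $ComputerName,
        [double] $FreePctThreshold = 15
    )
    # DriveType = 3 restricts the query to local fixed disks.
    Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName $ComputerName -Filter 'DriveType = 3' |
        Where-Object { $_.Size -gt 0 -and ($_.FreeSpace * 100.0 / $_.Size) -lt $FreePctThreshold } |
        Select-Object @{ Name = 'Computer'; Expression = { $ComputerName } },
                      DeviceID,
                      @{ Name = 'FreePct'; Expression = { [math]::Round($_.FreeSpace * 100.0 / $_.Size, 1) } }
}

# Example call (server name is illustrative):
# Get-LowDiskSpace -ComputerName 'SQLPROD01'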

So in short, I want a 3rd-party tool to be only a fact-data provider whose consumers are decided / created by DBAs. At a more abstract level, I’d argue that if you want a tool DBAs will like, the tool should help DBAs add more value to their own work with their own imagination (not replace a DBA’s imagination), just like sp_whoisactive, which is such a basic building block that I can easily embed it into my various monitoring frameworks without compromising my freedom to design the framework as a whole.

PS: My opinion may be biased, as I have not researched all monitoring tools, so I’d like to hear whether any monitoring tool offers CLI functions. If so, please let me know and I will be more than glad to test / review it.
