SQL Server Alerts

Introduction

SQL Server alerts provide an elegant administrative tool for automatically notifying pre-designated operators when server events occur. Organizations benefit quickly, as alerts help their DBAs respond proactively to conditions that require attention.

Alert notification can be automated on the SQL Server 2000 platform using the SQLServerAgent service in Windows. The service scans the Windows application log and compares each entry against the conditions specified in your alert definitions. When it finds a match, it fires the alert.

What is an Alert?

So, what is an alert? In SQL Server Books Online it is defined as “Errors and messages, or events, generated by Microsoft® SQL Server™ and entered into the Microsoft Windows® application log. SQL Server Agent reads the application log and compares events to alerts that you have defined. When SQL Server Agent finds a match, it fires an alert.”

In general, a database administrator cannot control when events occur, but can control the response to those events with alerts. Using alerts is thus a form of automated database server administration. Alerts are typically used for the following types of tasks:

  • Fatal hardware or software errors
  • Abnormal termination of SQL Server
  • High CPU utilization
  • Syntax error in user transaction
  • Other system control tasks

In response to an alert, one or more of the following actions can be performed:

  • Execute a job
  • Send an e-mail message
  • Send a notification to a pager
  • Notify a networked user

Let me outline some other reasons for using alerts, from the point of view of a DBA or database developer, both for process design and sometimes for application development.

In many cases, when a developer designs a data transfer process or application inside a database, it is a good idea to utilize the full power of SQL Server's commands. For example, commands such as BCP, TRUNCATE TABLE, xp_cmdshell, and many others require more privileges than a regular user should have, and many processes are limited by the user's permission settings.

But some of those processes need to be triggered by users running batch jobs or working from a GUI application.

In the company I work for, we have multiple scheduled data transfer processes. Some of them are triggered by users in an emergency, or when a user determines that specific conditions require a process to be rerun immediately. As a DBA, I cannot give users the ability to run such a process directly because, in most cases, the process requires full administrative rights. Alerts provide a good way to separate a task from the user and achieve a high level of operational security (see diagram 1).

It means that the job runs under the SQL Server Agent account's login and will have all the necessary privileges. There are many uses for this type of processing. Besides the example described above, the same idea was used for one of my clients to load daily and weekly files delivered by data center personnel. The front end was used only to trigger the process and then show entries from the log file, refreshing a GUI screen every 20-30 seconds. The job was set up to start based on an error in a specific database, and a stored procedure was created to raise that error. The only privilege a user needed was to execute two stored procedures: one to raise the error and another to retrieve records from the log table. In this way we avoided compromising security.
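
As a rough sketch of that client setup (the procedure names, the load_log table, and error number 50001 are hypothetical, not the actual code), the two user-facing procedures could look like the following. The sketch assumes the error message was registered with sp_addmessage using @with_log = 'true' (shown later in this article), so that raising it always reaches the application log:

-- The user needs EXECUTE permission on these two procedures only
Create procedure p_start_load
as
begin
    -- Raising the registered, always-logged error fires the alert,
    -- which in turn starts the privileged load job
    raiserror (50001, 10, 1)
end
go

Create procedure p_show_log
as
begin
    -- Returns entries from a hypothetical log table for the GUI to display
    select log_dt, log_msg
    from load_log
    order by log_dt desc
end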

I recently faced a different kind of task. Usually all data transfer tasks were driven by a time-based schedule. But one client requested that data changes be delivered to them via FTP within 5 minutes after each change completes, and not every change, only certain ones based on business logic. There were a few possible solutions to the problem:

  1. Set up a process that runs every 5 minutes and checks the data.
  2. Use a trigger for the process.

The problem with solution 1 is that data changes may happen infrequently, yet the verification process will still run every 5 minutes, driving CPU utilization up. Also, if a client has multiple databases, or a DBA needs to set up many similar processes, this approach creates many processes running every 5 minutes. Even if some of those processes are combined sequentially within one job, they must be split apart again when the combined running time exceeds 5 minutes.

Solution 2 makes the data change depend on the successful completion of the data transfer process, which makes the elapsed time for the data change much longer. And in practice, those two processes should not depend on each other even though they have a logical dependency.

A decision was made to use a combination of triggers and alerts. A trigger saves the history of each change in a table. The table can be a permanent staging table that is cleaned up before or after each run of the process, or it can keep historical data permanently. I chose the second way. The process logic is shown in diagram 2.

Part 1 (the top portion) shows the combination of two triggers used to separate the process. Trigger 1 writes data changes into a permanent history table, making a limited number of logical decisions based on business rules. If this table is generic across all similar processes for multiple databases, it can live in a separate, centralized database. The history table needs only an insert trigger. That trigger raises the error number tied to the alert, based on metadata in the inserted row. The alert, in turn, starts an independent process represented by the middle portion of the diagram (Part 2). Part 3, the bottom portion, is responsible for the FTP transfer.

As you can see, the processes become separated and run independently, and the alert provides complete security for the data transfer process, allowing the use of SQL Server and Windows system-specific functions and procedures without giving any special privileges to a user. To logically connect the parts of the process, I used a standard naming convention for every object based on an error number, while physically the parts can live in different databases and even on different servers. For example, if I assign error number 888888888 to the process, then the alert name includes that number, such as alert_888888888; the job name becomes job_888888888, and so on. Every object (including stored procedures) related to the process carries the error number as part of its name.

Let's walk through a simple implementation of the process. It requires:

  • A trigger on a table
  • A job with one step executing a stored procedure; the job will be started by an alert
  • A user-defined error that will fire the alert
  • An alert that will start the job

Assume that we have a table CUSTOMER.

Create table customer (customer_id int, customer_nm varchar(50), customer_addr varchar(50))

The trigger for the process will be as follows.

Create trigger TR_CUSTOMER_IU
on CUSTOMER
FOR INSERT, UPDATE
as
begin
    -- Raise the user-defined error tied to the alert; the message must
    -- be registered as an always-logged message (see below)
    raiserror (888888888, 10, 1)
end
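
One detail worth making explicit: SQL Server Agent fires alerts only for errors that reach the Windows application log, and RAISERROR can reference only a message number that already exists in sysmessages. So before the trigger above can work, a one-time registration step along these lines is needed (the severity and message text here are my assumptions):

-- Register the user-defined message; @with_log = 'true' writes every
-- occurrence to the Windows application log, where SQL Server Agent
-- can match it against the alert definition
EXEC sp_addmessage
    @msgnum   = 888888888,
    @severity = 10,
    @msgtext  = 'Customer data changed - start transfer process',
    @with_log = 'true'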

Stored procedure p_email will send e-mail when executed by the job.

Create procedure p_email
   @recipients varchar(255)
as
begin
  -- xp_sendmail requires a configured SQL Mail session
  EXEC master..xp_sendmail
      @recipients = @recipients,
      @message = 'Hello world!',
      @subject = 'Process Test'
end
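
Assuming SQL Mail is configured and running under the SQL Server service account, the procedure can be smoke-tested on its own (the address is a placeholder):

EXEC p_email @recipients = 'dba@example.com'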

Now, if you create job job_888888888 with a single step step_1 that executes stored procedure p_email, define error 888888888 and alert alert_888888888, tie the error to the alert, and set the alert's response to run the job, an e-mail will be sent each time the trigger fires.
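
Rather than clicking through Enterprise Manager, the same wiring can be sketched with the msdb system procedures. This is only an outline; the database name and recipient address below are placeholders:

-- Create the job and its single step
EXEC msdb.dbo.sp_add_job
    @job_name = 'job_888888888'

EXEC msdb.dbo.sp_add_jobstep
    @job_name      = 'job_888888888',
    @step_name     = 'step_1',
    @subsystem     = 'TSQL',
    @database_name = 'mydb',
    @command       = 'EXEC p_email @recipients = ''dba@example.com'''

-- Target the job at the local server so SQL Server Agent will run it
EXEC msdb.dbo.sp_add_jobserver
    @job_name    = 'job_888888888',
    @server_name = '(local)'

-- Tie error 888888888 to the alert, and the alert to the job
EXEC msdb.dbo.sp_add_alert
    @name       = 'alert_888888888',
    @message_id = 888888888,
    @job_name   = 'job_888888888'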

In reality the whole process is much more complicated, but it uses the idea described above.

Conclusion

By using alerts for client processing (not just system monitoring), you can achieve better security, the ability to use SQL Server's full power, and physical separation between two logically connected but physically independent processes. This article shows only the general design ideas, which I have used very successfully for multiple processes. The actual implementation may vary based on each individual process scenario.
