Introduction
The default trace, a lightweight built-in monitoring facility in SQL Server, can be used to audit DDL events and other key changes in SQL Server. The following article explains how the default trace, combined with a PowerShell script, can be used for auditing, and compares the solution with a few other existing methods.
Problem Statement
One of the major challenges faced by a database administrator is having an auditing mechanism that has no impact on the performance of the database, yet captures the key events, including DDL events, that keep the auditors happy.
Typically, a list of expectations from an auditing solution would be:
- Auditing to capture DDL Events (structure and schema changes) as well as configuration changes to both the server instance and database.
- Needs to be non-intrusive with no, or negligible, performance impact
- Should be asynchronous, with no dependency on or impact to the application. In other words, a failure in auditing shouldn’t cause application transactions to fail.
- Should be capable of working on all editions of SQL Server, i.e., Enterprise, Standard, and even Express, and on SQL Server 2005 onwards
- Should not be easily edited or tampered with by anyone, including the DBA.
There are a few options available in SQL Server. Each of these is discussed below in terms of the requirements above.
SQL Audit
SQL Audit meets almost all of the expectations stated above, except item 4: SQL Audit is available only in the Enterprise Edition of SQL Server 2008 and above.
DDL Triggers
Though capable of capturing DDL events and easy to configure, DDL triggers fall short of our expectations for the following reasons:
- A DDL trigger is synchronous: it becomes part of the application’s transaction, so a failure to audit can cause the application transaction to fail.
- Audit logs written to a table can be tampered with easily by the DBA using manual DELETE statements, and the activity can go unnoticed.
- A poorly written DDL trigger can have performance implications.
SQL Server Default Trace
By far the best option available, as it meets all the requirements:
- Configured and enabled by default on SQL Server installations and requires no effort to set up. One just needs to know how to make use of it for audit.
- Captures DDL changes as well as server and database configuration changes. It also captures some performance-related information, though that is not part of the expectations
- Has no performance impact on the database server and application.
- Doesn’t become part of the application’s transaction or scope. The default trace is run by SQL Server’s system processes and has no impact on the application.
- Works on all editions of SQL Server including express editions
- The default trace generates trace files which cannot be tampered with by the DBA.
A Brief Introduction to the Default Trace
The default trace tracks several key events such as DDL changes, password changes, server configuration changes, database setting changes, data file growth, log file growth, and so on. However, each trace file can have a maximum size of 20 MB, and only 5 files are retained by SQL Server. After 5 files have been generated, SQL Server overwrites the earliest file. The 20 MB size limit and the 5-file limit are set by SQL Server and can’t be changed.
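The default trace’s file path and the limits described above can be inspected through the sys.traces catalog view, for example:

```sql
-- The default trace row shows the current file path, the size cap per
-- file (max_size, in MB) and the number of rollover files (max_files)
SELECT id, path, max_size, max_files, is_rollover, start_time
FROM sys.traces
WHERE is_default = 1;
```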
Here is the list of steps involved in implementing the solution using default trace.
- Ensure default trace is up and running
- Deploy the PowerShell script provided in this article to copy the default trace files before they are overwritten
- Schedule the PowerShell (PoSh) script via SQL Agent. Pick a schedule whose frequency would copy the file before it gets overwritten
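The first step, confirming that the default trace is up and running, can be done with a quick check; the sp_configure re-enable shown below is only needed if someone has turned the trace off:

```sql
-- value_in_use = 1 means the default trace is enabled
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'default trace enabled';

-- Re-enable it if required (needs ALTER SETTINGS permission)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;
```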
The PoSh script used for copying the trace files and guidelines for picking the SQL Agent schedule are explained in detail in the following sections.
PowerShell Script
The PoSh script used to copy the trace files is provided below. It is a fairly simple script: one needs to provide the IP address of the server to be monitored in the connection string, and the variable “$destination_folder” should be set to the location to which the trace files are to be copied.
The Get-ChildItem command fetches the list of trace files written in the last hour to the default trace folder; the filter “$_.LastWriteTime -gt (Get-Date).AddMinutes(-60)” achieves this. Get-ChildItem also ignores the trace file that is currently being written by the default trace. The Copy-Item command then copies the list of trace files filtered by Get-ChildItem.
The trace files are usually copied to a location from which one needs to generate the audit report. By copying the default trace files before they are overwritten, one can retain the audit records captured by default trace for DDL events and other key events. The trace files copied can be used to generate audit reports.
# Connect to the monitored instance
$writeconn = New-Object System.Data.OleDb.OleDbConnection
[string]$connstr = "provider=sqloledb;data source=<IP Address>;initial catalog=master;integrated security=SSPI;"
$writeconn.ConnectionString = $connstr
$writeconn.Open()

# Ask the instance for the path of the current default trace file
$writecmd = New-Object System.Data.OleDb.OleDbCommand
$writecmd.Connection = $writeconn
$writecmd.CommandText = "select convert(varchar(1000),path) as path from sys.traces where is_default = 1"
$reader = $writecmd.ExecuteReader()
while ($reader.Read()) { $current_file = $reader.GetValue(0) }
$reader.Close()
$writeconn.Close()

# Folder in which the default trace writes its files
$root = Split-Path -Path $current_file
$destination_folder = 'X:\Audit_Trace\'
Write-Host "$root"

# Copy the trace files written in the last 60 minutes, skipping the
# file the default trace is currently writing to
Get-ChildItem $root -Filter *.trc |
    Where-Object { $_.LastWriteTime -gt (Get-Date).AddMinutes(-60) -and $_.FullName -ne $current_file } |
    ForEach-Object { Copy-Item $_.FullName $destination_folder }
Picking the schedule
One key task in this method is to pick the right schedule in SQL Agent. Picking the right schedule ensures that the trace files are copied before they are recycled and overwritten under the default trace policy. To get the schedule right, one may need to monitor the rate at which trace files are generated on the server, as this depends on the server’s application activity and workload. There is no great harm in picking too frequent a schedule, as the PowerShell script copies files if, and only if, new trace files have been generated.
The PowerShell script provided checks for new trace files generated in the last hour, and hence the SQL Agent job needs to run every hour as well. One may tweak the schedule and the script depending upon the server’s workload, but the SQL Agent job’s schedule and the duration checked by the PowerShell script need to match.
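As a sketch, an hourly SQL Agent job for the script could be created with the msdb stored procedures shown below; the job, step, and schedule names, and the script path D:\audit\audit.ps1, are illustrative and should be adjusted to your environment.

```sql
USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'Copy default trace files';

-- The PowerShell subsystem is available in SQL Server 2008 and above
EXEC dbo.sp_add_jobstep
    @job_name  = N'Copy default trace files',
    @step_name = N'Run audit.ps1',
    @subsystem = N'PowerShell',
    @command   = N'D:\audit\audit.ps1';

EXEC dbo.sp_add_schedule
    @schedule_name        = N'Hourly',
    @freq_type            = 4,   -- daily
    @freq_interval        = 1,   -- every day
    @freq_subday_type     = 8,   -- repeat in units of hours
    @freq_subday_interval = 1;   -- every 1 hour

EXEC dbo.sp_attach_schedule
    @job_name      = N'Copy default trace files',
    @schedule_name = N'Hourly';

EXEC dbo.sp_add_jobserver
    @job_name = N'Copy default trace files';
```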
Audit Report Generation
Once the trace files have been copied correctly, reading them and generating a simple audit report is a fairly straightforward task. The script for this task is provided below:
SELECT  HostName,
        ApplicationName,
        LoginName,
        ServerName,
        ObjectName,
        DatabaseName,
        SessionLoginName,
        te.name AS EventType,
        ObjectType,
        StartTime AS EventTime,
        TextData
FROM    fn_trace_gettable('X:\Audit_Trace\log_13502.trc', DEFAULT) x,
        sys.trace_events te
WHERE   x.EventClass = te.trace_event_id
        AND DatabaseName NOT IN ( 'tempdb' )
        AND ( te.name LIKE 'object%' OR te.name LIKE 'Audit%' )
        AND ISNULL(EventSubClass, 0) = 0
        AND te.trace_event_id != 116
        AND StartTime BETWEEN '20150201' AND '20150301'
ORDER BY StartTime;
One just needs to provide the full path of the first trace file to the “fn_trace_gettable” function. The function rolls over and reads all the trace files available in the folder to provide the audit report. One can filter by time to generate reports for a particular audited period. The audit report provides information such as the HostName, ApplicationName, LoginName, ObjectName, DatabaseName, event name, and the query involved.
Advantages of the Default Trace Method in Auditing
In addition to the pros mentioned previously, there are a few more items which favour the default trace for auditing:
- Generating an audit report is simple. With the raw trace files and a T-SQL script, even auditors can generate the report themselves
- Even if the DBA deletes one of the trace files to hide any mishap, the gaps in default trace file numbering would indicate this action took place.
- Even if the DBA decides to stop the default trace, the stop action itself is recorded by the default trace. The lack of trace files for an unusual period would also raise doubts in the minds of auditors. Hence, the method is close to foolproof.
- The method has been tested on servers with an immense workload of 7,000 to 10,000 batches per second, and no performance impact was observed.
The best practice would be to copy and store the trace files in a location to which the DBAs don’t have direct access. This would rule out almost any chance of the DBAs tampering with the audit records and events.
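One way to enforce this on a Windows file server is to strip inherited permissions on the destination folder and grant access only to the account running the copy job and a read-only auditors group. The account and group names below are hypothetical placeholders:

```
rem Remove inherited ACEs, then grant Modify to the copy-job account
rem and Read-only to the auditors group (names are examples)
icacls "X:\Audit_Trace" /inheritance:r ^
    /grant "DOMAIN\SQLAgentSvc:(OI)(CI)M" ^
    /grant "DOMAIN\Auditors:(OI)(CI)R"
```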
Support for SQL Express and SQL Server 2005
If one is using SQL Express, which doesn’t have SQL Agent to run the PowerShell script, the Windows Scheduler can be used to perform the same task. If one is using SQL Server 2005, whose SQL Agent has no option to schedule PowerShell scripts, one can schedule the job as a Windows command-line job and run the PowerShell script in the following way.
c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Unrestricted -file "D:\audit\audit.ps1"
As indicated above, one needs to provide the full path to “powershell.exe” and pass the location of the PowerShell audit script.
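With the Windows Task Scheduler, an equivalent hourly task could be registered from the command line as sketched below; the task name and paths are illustrative:

```
schtasks /create /tn "CopyDefaultTrace" /sc hourly /tr "c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Unrestricted -file D:\audit\audit.ps1"
```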
Conclusion
The default trace method for auditing explained above is far from perfect, and there is certainly scope for improvement, which I am sure will come via suggestions from the vibrant SQL community. However, the approach of using the default trace for auditing does tick most of the boxes for audit requirements in common shops, and allows one to have an auditing mechanism with minimum effort and zero cost.