
An Audit Trail Generator
Posted Monday, August 29, 2005 10:31 PM
SSC-Enthusiastic

Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/lPeysakhovich/anaudittrailgenerator.asp


Post #215153
Posted Monday, October 10, 2005 12:35 AM
Grasshopper


This is an excellent article which describes a simple and generic solution to database auditing. The solution is reduced to the essential minimum and therefore is mathematically elegant.

Coincidentally, only two months ago I created a generic auditing solution for one of my clients. The requirements were similar to Leo's, with some differences, and my solution was slightly different. I was thinking of writing an article about it, but I was too busy, and now, after Leo's article, it probably does not make much sense. So I decided to add a few comments here describing some highlights of my solution.

Requirements

My users wanted full flexibility in deciding which tables and which operations are audited. For instance, they might want to audit only insert operations on one table, but any data modification on another. They also wanted to be able to enable and disable auditing dynamically, to bypass auditing for certain bulk operations.

This was catered for by introducing the AuditControl table:

CREATE TABLE dbo.AuditControl (
    Operation CHAR(1) NOT NULL,   -- 'I', 'U', or 'D'
    TableName SYSNAME NOT NULL,
    IsAuditOn BIT     NOT NULL
)
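For illustration, a trigger could consult this table to short-circuit when auditing is switched off for the current table and operation. This is only a sketch: the table name 'Customer' and the update-trigger context are my assumptions, not part of Anatol's description.

```sql
-- Hypothetical fragment at the top of an UPDATE trigger on dbo.Customer:
-- exit early when updates on this table are not configured as auditable.
IF NOT EXISTS (SELECT *
               FROM dbo.AuditControl
               WHERE TableName = 'Customer'
                 AND Operation = 'U'
                 AND IsAuditOn = 1)
    RETURN
```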

Another requirement was to have a single audit table instead of one audit table per audited table.

The next requirement was very important and adds some complexity: the customer did not want to lose audit records when a table definition changes. Metadata changes are not unusual during the software lifecycle, and I wanted my audit solution to sail safely through them.

And finally, we designed the database from scratch, so we had the luxury of consistent column naming and a primary key on each audited table.

AuditLog table

My AuditLog table was designed to be generic as it is used to store changes to an arbitrary table.

The AuditLog table has columns for audited table name, content type ('M' for metadata, 'B' for "before" data image, and 'A' for the "after" image), unique identifier of the event (for instance, a mass update of 100 rows is a single event), and all default-populated current context fields (DateTime, SystemUser, DBUser, Workstation, Application). The actual data is stored in the Content column.
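One possible shape for such a table, following the description above. The column names, types, and defaults here are my assumptions, not Anatol's actual DDL:

```sql
-- Sketch of a generic audit table; one row per metadata/before/after entry.
CREATE TABLE dbo.AuditLog (
    AuditLogID   INT IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    TableName    SYSNAME          NOT NULL,
    ContentType  CHAR(1)          NOT NULL,  -- 'M' metadata, 'B' before, 'A' after
    EventID      UNIQUEIDENTIFIER NOT NULL,  -- one value per statement, not per row
    EventDate    DATETIME NOT NULL DEFAULT GETDATE(),
    SystemUser   SYSNAME  NOT NULL DEFAULT SUSER_SNAME(),
    DBUser       SYSNAME  NOT NULL DEFAULT USER_NAME(),
    Workstation  SYSNAME  NOT NULL DEFAULT HOST_NAME(),
    Application  SYSNAME  NOT NULL DEFAULT APP_NAME(),
    Content      TEXT     NULL               -- metadata script or delimited row image
)
```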

For each data modification statement our AuditLog would contain:

1. A row with the table's metadata, which lists column names and their data types.

2. Pre-modification image of data rows affected by the operation (for updates and deletes).

3. Post-modification image of data rows affected by the operation (for updates and inserts).

'Metadata' content is simply a CREATE TABLE statement. If the user decides to analyse some old values, they can simply use it to create a new table and populate it with data from the after-image rows.

'Before' and 'After' images of data rows are stored in strings with column values concatenated and delimited by a configurable delimiter string.
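As an illustration of such a delimited row image (a sketch only: the column names and the '|' delimiter are assumptions, not Anatol's actual choices):

```sql
-- Build a '|'-delimited "after" image of each affected row inside a trigger.
-- ISNULL guards keep a NULL column value from nulling the whole string.
SELECT CAST(Prod_ID AS VARCHAR(10)) + '|'
     + ISNULL(Prod_Col1, '') + '|'
     + ISNULL(CAST(Prod_Col2 AS VARCHAR(10)), '')
FROM inserted
```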

Setting up auditing on a table

In my implementation, setting up an auditing trigger on a database table is a semi-manual operation. I created a Query Analyser template with a single parameter - TableName. To set up auditing on a table, I open my template in Query Analyser and use the template-parameter substitution command on the Edit menu to generate a trigger for the new table. This is not as automated as I'd like it to be, but that's how it is at the moment. Currently I have audit triggers set up on all tables, but only some of the operations are configured as auditable (this is done through the AuditControl table described earlier).
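The rough shape of such a template might look like the following. This is a sketch only - the trigger name and body are my assumptions; the angle-bracket placeholder is Query Analyser's template-parameter syntax, filled in when the parameters are substituted.

```sql
-- Hypothetical Query Analyser template; <TableName, sysname, > is replaced
-- with the audited table's name via template-parameter substitution.
CREATE TRIGGER tr_Audit_<TableName, sysname, >
ON dbo.<TableName, sysname, >
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- 1. consult dbo.AuditControl; exit if auditing is off
    -- 2. write a metadata row, then before/after images, to dbo.AuditLog
    RETURN
END
```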

Anatol Romanov

 

Post #227527
Posted Monday, October 10, 2005 7:41 AM


Say Hey Kid

Maybe you should go ahead and write the article. In this ever-evolving world of changing requirements, I don't mind the redundancy of two or more articles on auditing.

As for me, on one of our projects we implemented a single table for auditing the DB changes, but I was not satisfied with the solution.

So please do go ahead.



Everything you can imagine is real.

Post #227604
Posted Monday, October 10, 2005 7:44 AM
Grasshopper


Leo's solution is so similar to a solution I developed years ago for a number of my clients that I thought my code may have somehow been offered up to the public domain!!!

Paralleling Leo's article, I had even developed a couple of stored procedures that create

(1) the audit table(s) and

(2) the "change" triggers that post the modified row information to the audit table(s)...uncanny!

I create an audit ("change") table for each "production" table in the database.  My audit method records both the "before" and "after" (a.k.a. "old" and "new") versions of all the columns of a modified production table row into the corresponding change table. 

For example, given the following production table:

CREATE TABLE [dbo].[ProdTable] (
 [Prod_ID] [int] IDENTITY (1, 1) NOT NULL ,
 [Prod_Col1] [varchar] (40) NULL ,
 [Prod_Col2] [int] NULL ,
 [Prod_Col3] [int] NOT NULL) ON [PRIMARY]
GO

ALTER TABLE [dbo].[ProdTable] ADD
 CONSTRAINT [PK_ProdTable] PRIMARY KEY  CLUSTERED
 ([Prod_ID]) ON [PRIMARY]
GO

I would have a corresponding change table looking like so:

CREATE TABLE [dbo].[Chg_ProdTable] (
 [Chg_Trans_ID] [int] IDENTITY (1, 1) NOT NULL ,
 [Chg_Trans_Type] [char] (1) NOT NULL ,
 [Chg_Trans_Date] [datetime] NOT NULL ,
 [Chg_Trans_User_ID] [varchar] (50) NOT NULL ,
 [Old_Prod_ID] [int] NULL ,
 [New_Prod_ID] [int] NULL ,
 [Old_Prod_Col1] [varchar] (40) NULL ,
 [New_Prod_Col1] [varchar] (40) NULL ,
 [Old_Prod_Col2] [int] NULL ,
 [New_Prod_Col2] [int] NULL ,
 [Old_Prod_Col3] [int] NULL ,
 [New_Prod_Col3] [int] NULL) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Chg_ProdTable] ADD
 CONSTRAINT [PK_Chg_ProdTable] PRIMARY KEY  CLUSTERED
 ([Chg_Trans_ID])  ON [PRIMARY]
GO

The first four columns of each of my audit tables record a unique transaction ID for the row, the type of transaction ("I"nsert, "D"elete, "U"pdate), the date/time stamp of the transaction, and the user ID of the person performing the change.  My triggers query the inserted and deleted tables in order to pull the requisite data to be pumped into the audit table(s).
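As an illustration (a sketch only, not the poster's actual code), an update trigger for the ProdTable example above might pair old and new rows on the primary key like this:

```sql
-- Sketch: record old (deleted) and new (inserted) values side by side.
-- Joining on Prod_ID is safe here because the IDENTITY key is never updated.
CREATE TRIGGER dbo.tr_ProdTable_Update
ON dbo.ProdTable
FOR UPDATE
AS
INSERT INTO dbo.Chg_ProdTable
    (Chg_Trans_Type, Chg_Trans_Date, Chg_Trans_User_ID,
     Old_Prod_ID, New_Prod_ID,
     Old_Prod_Col1, New_Prod_Col1,
     Old_Prod_Col2, New_Prod_Col2,
     Old_Prod_Col3, New_Prod_Col3)
SELECT 'U', GETDATE(), SUSER_SNAME(),
       d.Prod_ID, i.Prod_ID,
       d.Prod_Col1, i.Prod_Col1,
       d.Prod_Col2, i.Prod_Col2,
       d.Prod_Col3, i.Prod_Col3
FROM deleted d
JOIN inserted i ON i.Prod_ID = d.Prod_ID
GO
```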

It's good to see that my work has been more-or-less validated by others...thanks!

Post #227605
Posted Monday, October 10, 2005 8:15 AM


Say Hey Kid

How do you take care of the reporting side of things?


Everything you can imagine is real.

Post #227635
Posted Monday, October 10, 2005 8:44 AM
Ten Centuries


Very good article - I have to implement something like this soon.  Whilst I usually ensure that I do all data manipulation from apps via stored procedures, it would be nice to have it happen at a table level to also capture things such as manual table alterations, etc.

My other concern, which seems to be addressed by Anatol Romanov, was: what happens when you change the table definitions?  You would also have to modify the audit tables in a similar way (which I know I would forget), and it may not be valid to just put NULLs into those new columns in the audit table - how do you know they weren't actually NULL!?  I think I might implement something along the lines given by Anatol - sure, it is a bit of a name/value table, but it provides flexibility and the data shouldn't be queried all that often.

Another thing to consider is some sort of global "session" variable, like application name, that could be set before certain operations are performed and stored as an extra column.  I'd probably have an extra table which is an overall "audit transaction" table - there would be one row storing the details such as hostname, app name, date, username, etc and an audit ID.  This could then be linked to the audit data table storing the before & after values for each column for each modified row.
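One mechanism available for such a per-session value in SQL Server 2000/2005 is CONTEXT_INFO. A sketch (the 'NightlyImport' tag is an invented example):

```sql
-- The application tags its session before performing changes...
DECLARE @ctx VARBINARY(128)
SET @ctx = CAST('NightlyImport' AS VARBINARY(128))
SET CONTEXT_INFO @ctx

-- ...and a trigger can read the tag back for the current connection
-- (SQL Server 2000 syntax; SQL Server 2005 adds a CONTEXT_INFO() function).
SELECT CAST(context_info AS VARCHAR(128))
FROM master..sysprocesses
WHERE spid = @@SPID
```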

Good discussion & nice article!




Post #227643
Posted Monday, October 10, 2005 8:45 AM
SSC-Enthusiastic

I am considering canceling my subscription to SQL Central’s email newsletter as well as the SQL Server Standard Magazine. Why? I am tired of reading articles written by DBAs who feel they are superior to developers but cannot seem to put together a sentence without making a spelling or grammatical error. There are numerous examples of poor grammar in this article, such as:
“It was very sad that developers did the initial database design without consideration of data auditing (what is surprise!) ...”
If you’re going to have a condescending attitude, please do us all a favor and read your own article first. I really wonder how great your DBA work is, considering this article.



Post #227644
Posted Monday, October 10, 2005 8:51 AM
Forum Newbie


Hi,
What will happen if the table has a TEXT field? The inserted and deleted tables do not contain TEXT column data.
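For context, one common workaround for this limitation (a sketch with assumed table and column names): an AFTER trigger can read the current TEXT value by joining back to the base table on the key, since only the inserted/deleted pseudo-tables exclude TEXT data. Note this recovers only the post-change value; INSTEAD OF triggers, by contrast, do expose text columns in inserted/deleted.

```sql
-- Sketch: capture the "after" image of a hypothetical TEXT column (Notes)
-- by reading it from the base table rather than from inserted.
INSERT INTO dbo.AuditLog (TableName, ContentType, Content)
SELECT 'ProdTable', 'A', t.Notes
FROM inserted i
JOIN dbo.ProdTable t ON t.Prod_ID = i.Prod_ID
```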




Post #227647
Posted Monday, October 10, 2005 9:33 AM
Forum Newbie

I have found that Idera's Compliance Manager is the best solution for all compliance issues.

The difference between 'involvement' and 'commitment' is like an eggs-and-ham breakfast: the chicken was 'involved' - the pig was 'committed'.
Post #227668
Posted Monday, October 10, 2005 10:29 AM
SSCertifiable


Although the article is well written and many of us have used this sort of technique in one way or another, we should not be blind to its drawbacks.

1. The use of dynamic SQL in triggers is by all means NOT good. From a performance perspective you may need to store DELETED and INSERTED in temp tables, significantly increasing the overhead of the trigger.

2. As someone already posted, you will need to cater for TEXT columns, which are not accessible from this kind of trigger.

3. I don't think that metadata volatility is an excuse to use dynamic SQL. Instead, the trigger code should be regenerated at metadata change time (IN MY OPINION).

4. This solution becomes impractical when used across 100+ databases, and things like Idera's Compliance Manager will change that landscape due to the stability of a triggerless solution. A solution like this should only be implemented when the number of databases is "manageable".

Just my $0.02

 




* Noel
Post #227686