An Audit Trail Generator

  • If a text field is part of the table, then a separate statement should be added to the table's trigger. This will degrade performance somewhat. The generator can analyze for this case.

    There are three possible solutions:

    1. Update the record in the audit table.

    2. Create a new record for the text column only in the audit table, e.g. there will be two records if both the text column and other fields are updated.

    3. Create a separate table with the following columns: TableName, TextColumnName, TextColumnValue, and insert the text changes for all tables in the database into that one table.

    I think the third is the most elegant and offers the best performance; a rough sketch of such a table is below.
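    A minimal sketch of that shared table (all names here are my assumptions, not the generator's):

    CREATE TABLE TextAudit
    (
        AuditId         int IDENTITY(1,1) PRIMARY KEY,
        TableName       sysname NOT NULL,
        TextColumnName  sysname NOT NULL,
        TextColumnValue text NULL,          -- one table holds text changes for every audited table
        ModifiedDate    datetime NOT NULL DEFAULT GETDATE()
    )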

    Leo P.

  • I have used such triggers for a while. Performance degradation is about 20%.

    For lighter OLTP systems this is a simple solution that works. We get 200+ databases every year, and the generator genuinely made life much easier. In addition, table and database parameters can be added to make the generator completely dynamic, without compiling the same stored procedures over and over again. In my case it was a completely dynamic solution, and all the procedures were stored in a support database. All I needed to provide was the database name as a parameter and either a flag for all tables or a comma-separated list of tables. The generator then creates a script, and that script is applied to the database.
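    For illustration, a call to such a generator might look like this (the procedure name and parameter shapes are my guesses, not the actual generator's interface):

    EXEC support.dbo.usp_GenerateAuditScript
        @dbname    = 'Sales',
        @alltables = 0,                       -- 0 = audit only the list below
        @tablelist = 'Orders,OrderDetails'    -- comma-separated list of tables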

  • As we all know - each project has different auditing requirements, and there is no magic solution, I'm afraid. Let's consider just some of the possible requirements:

    1. Metadata never changes.

    2. Metadata changes frequently because user tables are created dynamically by the application.

    3. No text or image datatypes are present.

    4. Text or image datatypes may exist, but we are only interested in the fact they've been changed, not the actual before/after values.

    5. All tables are owned by the dbo.

    6. Any user can create a table which is to be audited.

    7. Every single change to data has to be recorded in the audit log.

    8. Only selected operations on selected columns should be audited. The application admin (not a DBA) should be given a tool to configure what's audited.

    9. The reason to set up data audit is to satisfy a statutory requirement. Nobody is interested in querying the audit data effectively.

    10. The audit data is to be queried on a regular basis, so effective table design and indexing are important.

    So far, there is nothing a good database specialist couldn't address. Now, a few requirements that represent a challenge in SQL2K:

    11. Operations which don't fire a trigger must also be audited. This includes data access without modification (SELECT) and TRUNCATE TABLE statements.

    12. Changes to business objects rather than database objects should be audited. A business object may be spread over a number of database tables.

    13. For systems where auditing is not critical, failing to insert a record into the audit table shall not cause the data modification to roll back.

    I am most interested to discuss the last three.

    Anatol

  • Well, for 11, you might have to run a Profiler trace with very particular filters to catch such statements against your database.
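    Such a trace can also be scripted server-side; a rough sketch (the file path is an assumption, and real setups are usually scripted from the Profiler GUI):

    DECLARE @traceid int, @maxsize bigint
    SET @maxsize = 100
    EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\traces\audit', @maxsize
    -- event 41 = SQL:StmtCompleted, column 1 = TextData; add filters with sp_trace_setfilter
    EXEC sp_trace_setevent @traceid, 41, 1, 1
    EXEC sp_trace_setstatus @traceid, 1   -- start the trace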

    For 12, I assume you mean recording, say, an order and its detail lines (which would hopefully be in two separate tables!) - it might be easier to piece the business object back together from the audit tables as required.

    For 13...  The new queues in SQL 2005 might be helpful, as the audit records could be processed asynchronously to avoid blocking.  In SQL 2000, you could get really inefficient and use some COM calls to put things into MSMQ - yuk!   Or make sure that if your app queries the audit tables, it does so with no locking.  Also have error checking so that if an insert fails it can be retried.
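    The no-locking read is just a table hint (AuditTable is a placeholder name); bear in mind NOLOCK reads uncommitted data, which is usually acceptable for browsing an audit trail:

    SELECT * FROM dbo.AuditTable WITH (NOLOCK)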

    On that note - are you saying that the record may fail to be inserted due to blocking or due to primary key violations?

    For the other points, yes, text fields would require a join back to the original table, which might decrease performance, but if the auditing is worth it, then you pay the price.
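    On SQL 2000 that join-back is forced, because text, ntext and image columns cannot be referenced in the inserted and deleted tables of an AFTER trigger. A minimal sketch, reusing the TextAudit idea from earlier in the thread (the base table, key and column names are assumptions):

    CREATE TRIGGER t_Notes_text_audit ON dbo.Notes
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON
        IF UPDATE(Body)   -- only bother when the text column was touched
            INSERT INTO TextAudit (TableName, TextColumnName, TextColumnValue)
            SELECT 'Notes', 'Body', n.Body   -- current value, read from the base table
            FROM dbo.Notes n
            JOIN inserted i ON i.NoteId = n.NoteId
    END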

    To catch all tables as they are created (roughly), you could schedule a job to recreate all of the triggers, possibly taking some configuration info from a central config table so that certain tables/columns could be exempt from auditing.
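    The central config table can be very simple (a sketch; the names are mine), with the regeneration job skipping anything marked exempt:

    CREATE TABLE AuditConfig
    (
        TableName  sysname NOT NULL,
        ColumnName sysname NULL,          -- NULL means the whole table
        IsExempt   bit NOT NULL DEFAULT 0
    )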

  • #11-13 can be implemented a few different ways:

    1. Use log reader software, especially for #11.

    2. Use a complete third-party audit solution, for example Lumigent Entegra.

    3. Use alerts to decouple the actual insert/update/delete from the audit (#13). But you will pay the price of fetching the data from the actual table by PK.

    4. #12 can be done by using a transaction ID, which must be part of every modification. If multiple tables are changed in the same transaction, each row carries the transaction ID, and the metadata records the transaction ID and the tables changed in that transaction. It is a more complicated but working solution, especially in businesses where government requirements are very strict - for example, medical research. But again, everything comes with a price: there is an extra step to obtain the transaction ID first (see the sketch after this list).

    Based on business conditions, other solutions can be discussed.
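    A sketch of point 4 (all table and column names are assumptions): the application obtains an ID up front, then stamps every row it modifies with it.

    CREATE TABLE AuditTran
    (
        tran_id   int IDENTITY(1,1) PRIMARY KEY,
        tran_user varchar(128) NOT NULL,
        tran_date datetime NOT NULL DEFAULT GETDATE()
    )

    -- the extra round trip: obtain the transaction ID first
    DECLARE @tran_id int
    INSERT INTO AuditTran (tran_user) VALUES (SUSER_SNAME())
    SET @tran_id = SCOPE_IDENTITY()

    -- every table touched by the business transaction carries the same ID,
    -- so audit rows from different tables can be tied back together
    UPDATE Orders       SET order_status = 'Shipped', tran_id = @tran_id WHERE order_id = 42
    UPDATE OrderDetails SET shipped = 1,              tran_id = @tran_id WHERE order_id = 42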


  • One issue that the "trigger" solution does not address is how to deal with web or multi-tier applications.  In most cases suser_sname will return the credentials of the middle tier, the same for every user - unless there is a requirement that every database update capture an application-level userID parameter and write it to the table being audited.  Even with this requirement, the "Delete" function will not capture who did the delete, because the only info available is the before image of the rows deleted.

    I am interested in hearing if anyone has some ideas on how to deal with auditing these types of applications.

    Thanks in advance...

    😛

  • 1. To address the deletion issue, you may store not the after image but the current row plus the type of transaction. That handles deletions and eliminates the need to join two tables to get a full history report.

    2. To get the userID (the application always has it), you may have a modification-user column in the main table and always update/insert it for each transaction. For a deletion you update the row with the userID first and then delete it, which means two statements per deletion (overhead), but it preserves the required information (see the sketch after this list). If changes are made outside the application (for example, directly from Query Analyzer), then use a default user.

    3. You can use log reader software to get the information if there is no requirement for real-time auditing.
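    A sketch of the update-then-delete in point 2 (table and column names are assumptions); the delete trigger's deleted image then carries the real application user:

    DECLARE @app_user_id varchar(128), @order_id int
    SELECT @app_user_id = 'jsmith', @order_id = 42   -- supplied by the application

    BEGIN TRAN
        -- stamp the row with the application-level user first...
        UPDATE dbo.Orders SET modified_by = @app_user_id WHERE order_id = @order_id
        -- ...then delete it; the AFTER DELETE trigger sees modified_by in deleted
        DELETE FROM dbo.Orders WHERE order_id = @order_id
    COMMIT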

  • What about modifying the application name or hostname parameters in the OLEDB connection string?  They are easily retrievable from SQL and needn't contain the data implied by their name - e.g. you could have "My App - Module 1. Recording inventory", "My App - Module 2. Raising purchase order"...  That's a pretty good description of the transaction, and it could also include the userID, etc.  Sure, there are possibly multiple data elements in the single string, but the audit trail, I assume, would be viewed by someone, not joined to other tables on a regular basis, and certainly not joined on the fields in the string...  That would help with some of the issues.
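    Server-side, whatever the connection string supplied comes straight back from the built-in functions (the connection-string keywords in the comment are the usual OLEDB ones, quoted from memory):

    -- e.g. "...;Application Name=My App - Module 2. Raising purchase order;Workstation ID=jsmith;..."
    SELECT APP_NAME() AS app_name, HOST_NAME() AS host_name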

  • Yep, it can be done this way. Good proposal.

  • "select * into #td from deleted"

    In the trigger, the data is copied into the mirror table. I've done something similar, but cannot use the generic statement with timestamp columns (and have had to type out all the fields - SELECT a,b,c,d,e,f... INTO #td FROM deleted). Any ideas on a generic statement that just skips the timestamps (or some other type of field we choose to ignore, such as an image)?

  • You need to try it. I don't know if timestamp is an exclusion; I did not test it with a timestamp column.
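    One way to try it (a sketch for inside the trigger body; the system-table lookup assumes SQL 2000): dynamic SQL cannot reference the deleted table directly, so copy everything first, then drop the columns to ignore from the temp copy.

    select * into #td from deleted

    -- find a timestamp column in the copy (a table can have at most one);
    -- repeat similarly for other types to skip, e.g. image
    declare @col sysname
    select @col = c.name
    from tempdb..syscolumns c
    where c.id = OBJECT_ID('tempdb..#td')
      and type_name(c.xtype) = 'timestamp'

    if @col is not null
        exec ('alter table #td drop column [' + @col + ']')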

  • I am using SQL Server 2005 and I have the following table:

    District
    (
        row_id int,
        DistrictNum int,
        Region varchar(50),
        D_Addr1 varchar(50),
        D_Addr2 varchar(50),
        D_City varchar(50),
        D_Phn varchar(15),
        D_Tollfree varchar(15),
        D_Zip varchar(10),
        D_JurPhn varchar(15),
        D_State char(2),
        Budget_Org varchar(50),
        Cost_Org varchar(50),
        Cost_Mile money,
        Attend_Fee money
    )

    And the following audit table:

    a_District(audit_id, modid, row_id, DivisionNum, DistrictNum, Dv_Name, Dv_Addr1, Dv_Addr2, Dv_PO_Box, Dv_City, Dv_Zip, Dv_PhoneNum, Parkfee, Subsistence, SubsistenceLast, NumJurors)

    I have set up the p_trigger procedure and the following trigger, t_District_A:

    --------------P_TRIGGER--------------------

    USE [JuryDB]
    GO
    /****** Object: StoredProcedure [dbo].[p_trigger] Script Date: 04/15/2008 16:43:16 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[p_trigger]
        @tblname VARCHAR(128)
    as
    BEGIN
        declare @cnti bigint, @cntd bigint, @maxid bigint, @minid bigint, @cmd varchar(4000), @audittblname varchar(128),
                @cmdINSERT varchar(1000), @cmdSELECT varchar(1000), @modif_id bigint,
                @cmdFROM varchar(255), @AUDITINSERT varchar(255), @DUI varchar(10)

        set @audittblname = 'a_' + substring(@tblname,2,128)

        declare @Tmp table (cname varchar(128), cid int identity(1,1))

        select @cnti = count(*) from #ti
        select @cntd = count(*) from #td

        -- check how many rows changed. If 0 then do nothing
        IF (@cnti = 0 and @cntd = 0)
            return 0

        -- get all table columns
        insert into @Tmp(cname)
        select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @tblname

        select @maxid = max(cid), @minid = min(cid) from @Tmp

        set @cmdSELECT = ' SELECT '
        set @cmdINSERT = ' INSERT INTO ' + @audittblname + ' ( '

        -- begin while
        while (@minid <= @maxid)
        begin
            select @cmdINSERT = @cmdINSERT + cname + ',', @cmdSELECT = @cmdSELECT + 'd.' + cname + ','
            from @Tmp
            where cid = @minid

            set @minid = @minid + 1
        end
        -- end while

        -- always set new rows for the AUDIT
        IF (@cnti = @cntd)
        begin
            set @DUI = ' ''U'' '
            SET @cmdFROM = ' FROM #ti d '
        END
        IF (@cnti < @cntd)
        begin
            set @DUI = ' ''D'' '
            SET @cmdFROM = ' FROM #td d '
        END
        IF (@cnti > @cntd)
        begin
            set @DUI = ' ''I'' '
            SET @cmdFROM = ' FROM #ti d '
        END

        -- insert record into table modif
        insert into modif(tablenm, dml_type)
        select @tblname, @dui

        -- get identity
        select @modif_id = SCOPE_IDENTITY()

        -- add modification column value
        set @AUDITINSERT = ' modid )'
        set @cmd = @cmdINSERT + @AUDITINSERT + @cmdSELECT + cast(@modif_id as varchar) + @cmdFROM

        exec (@cmd)

        SET NOCOUNT OFF
    END

    ----------t_District_A----------------------

    USE [JuryDB]
    GO
    /****** Object: Trigger [dbo].[t_District_A] Script Date: 04/15/2008 16:40:14 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER TRIGGER [dbo].[t_District_A]
    ON [dbo].[District]
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        declare @tblname varchar(128)
        SET NOCOUNT ON

        set @tblname = 'District'

        select * into #td from deleted
        select * into #ti from inserted

        exec dbo.P_TRIGGER @tblname = @tblname

        SET NOCOUNT OFF
    END
    GO
    EXEC sp_settriggerorder @triggername=N'[dbo].[t_District_A]', @order=N'Last', @stmttype=N'DELETE'
    GO
    EXEC sp_settriggerorder @triggername=N'[dbo].[t_District_A]', @order=N'Last', @stmttype=N'INSERT'
    GO
    EXEC sp_settriggerorder @triggername=N'[dbo].[t_District_A]', @order=N'Last', @stmttype=N'UPDATE'

    -------------------------MY MODIF TABLE---------------------------------

    modif
    (
        modid int,
        tablenm varchar(50),
        dml_type char(1),
        insertdt datetime,
        insertuser varchar(128),
        insertmachine varchar(128),
        insertprocess varchar(128)
    )

    ----------------------------PROBLEM---------------------------------

    Whenever I run an INSERT, which fires the trigger and calls p_trigger, I get the following error message:

    Msg 8152, Level 16, State 14, Procedure p_trigger, Line 61
    String or binary data would be truncated.
    The statement has been terminated.

    What am I doing wrong? Please help - I have been racking my brain over this and have been unsuccessful so far. I would appreciate any help anybody can give.

  • Does it give you that same error even when the trigger is off?

    Are you sure the table definitions match between the base and audit tables (column types AND sizes)?

    Do you do any concatenation to add data when you insert it into the audit table?

    Have you printed the insert statement and run it manually? Maybe there's just a stupid typo there that's causing the error.
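    For that last check, a quick way is to temporarily swap the exec in p_trigger for a print and run the generated statement by hand (a debugging sketch):

    -- inside p_trigger, just before execution
    print @cmd
    -- exec (@cmd)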

  • I just think that anyone who tries to use Dynamic SQL in a trigger is asking for trouble....

    😛

  • The tables match, I am not concatenating anything when I insert, and I tried the query with the trigger turned off and it worked fine.

