Auditing Your SQL Server - Part 2

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/sjones/auditingyoursqlserverpart2.asp

  • At our company, we have used triggers in some databases and stored procedures in others to handle table auditing.

    We are concerned about using triggers because we have had some problems with triggers being dropped during table changes. Recently this caused us to lose a month's worth of auditing in one of our databases.

    We are planning to standardize our method of handling triggers. What is your opinion on using stored procs instead of triggers?

    I do understand that using stored procedures does not guarantee capturing db changes outside of the application.

  • I would prefer stored procedures, but they do not guarantee capture, as they are not necessarily called from outside the application.

    In the next part, I'll look at selective triggers, though that still won't help you. The best defense is a strong change management process.

    Steve Jones

    sjones@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/sjones

    http://www.dkranch.net

  • I'm also using triggers for insert and update (delete is not permitted) to audit changes to certain tables, and I'm using the inserted and deleted tables. Are there any advantages or disadvantages compared to your method?


    Cheers,

    Frank

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Not sure what you mean; I'm using the inserted and deleted tables.

    Steve Jones

    sjones@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/sjones

    http://www.dkranch.net

  • Sorry if I expressed myself unclearly. The following is an excerpt from BOL:

    ...Two special tables are used in trigger statements: the deleted table and the inserted table. Microsoft® SQL Server™ 2000 automatically creates and manages these tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for trigger actions; however, you cannot alter the data in the tables directly....
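    The mechanism BOL describes can be sketched as a simple audit trigger that copies the pre- and post-change rows into a mirror table. This is only an illustration; the table, column, and trigger names (dbo.Accounts, dbo.Accounts_Audit, trg_Accounts_Audit) are invented, not from the article.

    ```sql
    -- Hypothetical audit trigger using the inserted/deleted virtual tables.
    CREATE TRIGGER trg_Accounts_Audit ON dbo.Accounts
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- inserted holds the new values; deleted holds the pre-update values
        INSERT dbo.Accounts_Audit (AccountID, Balance, ModifiedBy, ModifiedDate)
        SELECT i.AccountID, i.Balance, SUSER_SNAME(), GETDATE()
        FROM inserted AS i;
    END
    ```

    Note that the trigger cannot modify the inserted and deleted tables themselves, only read from them, exactly as the BOL excerpt says.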

    After reading Part II I went back and read Part I of your article, and I think I need to explain further.

    As you have written, I'm auditing tables that hold very sensitive financial transactions. So I keep an exact copy of the original table and track every change made to a record. That indeed produces a lot of storage overhead, but with today's storage alternatives that is really no problem at all.

    We decided to ignore the storage issues for the benefit of knowing exactly what happens to these records.

    Further, there is no "heavy traffic" on these tables; I think only about 10 people are allowed to modify data.

    What I want to know is: is there a more elegant way to achieve this level of auditing?

    Cheers,

    Frank

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • I cannot get columns that are defined as TEXT from the inserted and deleted tables, which means I cannot log them using the trigger approach. Is there a workaround to this problem?

    Also, I wanted to store the audits for all the tables in just one table with the following columns:

    1) TableName

    2) EntityID -- represents the primary key of the item inserted/updated/deleted

    3) Type of operation (I, U or D)

    4) ModifiedBy

    5) ModifiedDate

    6) XML -- this stores all columns of the table as an XML string.

    Are there any disadvantages to this approach compared to having one separate audit table for each base table that is being logged?
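    A rough sketch of the single-audit-table idea, assuming SQL 2005's xml type and FOR XML (all table, column, and trigger names here are invented for illustration):

    ```sql
    -- Hypothetical generic audit table fed by per-table triggers.
    CREATE TABLE dbo.AuditLog (
        AuditID      int IDENTITY(1,1) PRIMARY KEY,
        TableName    sysname  NOT NULL,
        EntityID     int      NOT NULL,
        Operation    char(1)  NOT NULL,               -- 'I', 'U' or 'D'
        ModifiedBy   sysname  NOT NULL DEFAULT SUSER_SNAME(),
        ModifiedDate datetime NOT NULL DEFAULT GETDATE(),
        RowData      xml      NULL                    -- whole row as XML
    );
    GO
    CREATE TRIGGER trg_Customers_Audit ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT dbo.AuditLog (TableName, EntityID, Operation, RowData)
        SELECT 'Customers', i.CustomerID, 'U',
               (SELECT i2.* FROM inserted AS i2
                WHERE i2.CustomerID = i.CustomerID
                FOR XML RAW, TYPE)                    -- serialize the row as XML
        FROM inserted AS i;
    END
    ```

    One trade-off worth noting: querying the history of a specific column then means shredding XML, which is slower and less index-friendly than a typed mirror table.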

  • Steve,

    I have done the same as you have written about, both the verbose auditing and the partial and triggers have worked fine.

    My question: aren't there tools, like LogPI or others, that are advertised as zero-impact log readers and can capture the operations you wish to audit? With those log results, would it not then be possible to load them into a 'mirror' table?

    At some point, I'll do the leg work myself and write an article about it.  But I won't if you've done it or can tell me why not to.

    I enjoy reading your articles. They're straightforward.


    Thanks,

    David.

  • Thanks for the article. We are using mirror tables with the triggers described. We are currently weighing the pros and cons of storing those audit tables in the same database or in a different one. By storing in the same database, we can capture all changes even if there is a failure; by storing in a different database (on a different server), we can better manage performance. Any opinions?

  • We are using mirror tables like you describe for some data and they work well. I took the indexes off the mirror table to cut down on overhead, since those tables are appended to often and searched rarely. Why did you think indexes were important?

    Mike

  • Some comments:

    • Using suser_name() won't work in an n-tier environment
    • Using getdate() gives you inconsistent timestamps across your transaction if you have more than one statement. Thus you don't know exactly what happened (was it done in 1 or N transactions?) and you can't reconstruct your past objects. So I use a datetime from the beginning of the transaction to get more consistency (using commit time would be better... if it were possible). Of course, it is not completely serializable, but transaction timestamps are somewhat orthogonal since row versioning is used.

    Don't flame me on the somewhat
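    One way to get the transaction's start time rather than getdate(), assuming SQL 2005's DMVs are available (this is a sketch of the idea above, not the poster's actual code):

    ```sql
    -- Fetch the begin time of the current session's active transaction,
    -- so every audit row written in that transaction shares one timestamp.
    SELECT t.transaction_begin_time
    FROM sys.dm_tran_active_transactions AS t
    JOIN sys.dm_tran_session_transactions AS s
      ON s.transaction_id = t.transaction_id
    WHERE s.session_id = @@SPID;
    ```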

    For the audit structure, I use a similar approach, except that I log only what has changed:

    INSERT ...

    SELECT NULLIF(Inserted.Col1, Deleted.Col1),

    etc.

    That greatly minimizes I/O. You need to think about NULL handling, but in my case I allow only FKs and LOBs to be NULL.
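    Spelled out, the delta-only scheme above might look like this (table and column names are invented; NULLIF returns NULL when the two arguments are equal, i.e. when the column did not change):

    ```sql
    -- Hypothetical delta-logging trigger: unchanged columns come through as NULL.
    CREATE TRIGGER trg_Products_Audit ON dbo.Products
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT dbo.Products_Audit (ProductID, Name, Price, ModifiedDate)
        SELECT i.ProductID,
               NULLIF(i.Name,  d.Name),    -- NULL if Name did not change
               NULLIF(i.Price, d.Price),   -- NULL if Price did not change
               GETDATE()
        FROM inserted AS i
        JOIN deleted  AS d ON d.ProductID = i.ProductID;
    END
    ```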

    Note to narayan_kumar: SQL 2005 supports VARCHAR(MAX) in triggers.

    The trick is to reconstruct the row as it was by substituting correct values to NULLs in a View. This can be done efficiently with a recursive CTE. I don't have an example on hand but we discussed it here: http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=8&messageid=279331#bm280646

    I also experimented on the last comment I made in that discussion:

    "Would be nice to extend SQL to support versioning:

    SET POINTINTIME 2005-05-22 14:05:22 157

    Select ...."

    It is easy to make it work by using SET CONTEXT_INFO to store the POINTINTIME, and have the view filter on it. Doing it this way it is possible to have a true temporal database and move back and forward in time across the entire db.
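    The CONTEXT_INFO trick described above could be sketched like this, assuming SQL 2005 and invented names (dbo.Orders_Audit, OrderID, ModifiedDate):

    ```sql
    -- 1) The session stores the desired point in time in its context info
    --    (datetime is 8 bytes, so it fits in the 128-byte context buffer).
    DECLARE @pit varbinary(128);
    SET @pit = CAST(CAST('2005-05-22 14:05:22.157' AS datetime) AS varbinary(128));
    SET CONTEXT_INFO @pit;
    GO
    -- 2) A view returns, per key, the newest audit row no later than that time.
    CREATE VIEW dbo.Orders_AsOf
    AS
    SELECT a.*
    FROM dbo.Orders_Audit AS a
    WHERE a.ModifiedDate =
          (SELECT MAX(b.ModifiedDate)
           FROM dbo.Orders_Audit AS b
           WHERE b.OrderID = a.OrderID
             AND b.ModifiedDate <=
                 CAST(SUBSTRING(CONTEXT_INFO(), 1, 8) AS datetime));
    ```

    Changing the stored timestamp then moves every query through the view back and forward in time, which is the temporal-database behavior described above.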

    Of course, in a production db, I would do heavy historical queries on a mirrored server as the overhead gets higher as you go far back in time.


  • narayan_kumar (6/25/2003)


    I cannot get columns that are defined as TEXT from the inserted and deleted tables, which means I cannot log them using the trigger approach. Is there a workaround to this problem? ...

    Not that this makes a difference to your problem since I'm replying five years later to your post, but BOL says:

    In a DELETE, INSERT, or UPDATE trigger, SQL Server does not allow text, ntext, or image column references in the inserted and deleted tables if the compatibility level is equal to 70. ... If the compatibility level is 80 or higher, SQL Server allows the update of text, ntext, or image columns through the INSTEAD OF trigger on tables or views.

    It might help someone later, but with 2008 now out, hopefully there aren't a lot of SQL 7 installations remaining!

    -----
    [font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
