Audit systems: a good idea or a mess to maintain?


  • They have their uses, but in my experience they are rarely driven by requirements. The ones I've seen tend to be massively over-engineered, put in place by people desperately trying to second-guess what the customer needs versus the inadequate statement of what they want. Failure to second-guess correctly results in the project being deemed IT's failure for not satisfying the NSRs (Non-Stated Requirements) that were assumed to be so obvious that they were never specified.

  • David.Poole - Tuesday, February 27, 2018 12:29 AM

    They have their uses, but in my experience they are rarely driven by requirements. The ones I've seen tend to be massively over-engineered, put in place by people desperately trying to second-guess what the customer needs versus the inadequate statement of what they want. Failure to second-guess correctly results in the project being deemed IT's failure for not satisfying the NSRs (Non-Stated Requirements) that were assumed to be so obvious that they were never specified.

    Staying with the sad topic of blame apportionment (!), we have implemented a few audits where experience showed that users denied having changed whatever it was, and the existence of a table that could demonstrate who did what and when has rebalanced the conversation beautifully 🙂
    More neutrally, we find that when users know changes are tracked, more care is often taken, which is generally a good thing.

  • I use a simple approach which works well:

    1. Each table being audited should have a PK (or unique constraint). For each audited table, create an audit table with the same schema (columns and data types). Give the audit table a name like MainTable_A and add three extra columns: DateOfChange, Person and Action.

    2. Create one trigger for insert, update and delete on the main table. The insert and update actions append the 'inserted' rows to the audit table, and the delete action appends the 'deleted' rows. The extra audit columns can be populated with GETDATE(), SUSER_SNAME() and 'I', 'U' or 'D' to indicate the action.

    The result is that the audit table is maintained automatically with minimal coding effort. The edit history of each PK can be obtained from the audit table, and the most recent row for a PK value in the audit table will match the current value in the main table. The technique is loosely based on what I've seen in commercial apps, and on conversations with colleagues about different ways of implementing auditing.
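    The two steps above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the table MainTable, its columns, and the trigger name are hypothetical.

    ```sql
    -- Hypothetical audit table: same columns as dbo.MainTable,
    -- plus the three extra audit columns suggested above.
    CREATE TABLE dbo.MainTable_A
    (
        Id           int         NOT NULL,
        SomeValue    varchar(50) NULL,
        DateOfChange datetime    NOT NULL,
        Person       sysname     NOT NULL,
        Action       char(1)     NOT NULL   -- 'I', 'U' or 'D'
    );
    GO
    -- One trigger covering all three actions on the main table.
    CREATE TRIGGER dbo.trg_MainTable_Audit
    ON dbo.MainTable
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Inserts and updates: append the new ('inserted') rows.
        -- An update populates both virtual tables, so 'deleted'
        -- being non-empty distinguishes 'U' from 'I'.
        INSERT INTO dbo.MainTable_A (Id, SomeValue, DateOfChange, Person, Action)
        SELECT i.Id, i.SomeValue, GETDATE(), SUSER_SNAME(),
               CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'U' ELSE 'I' END
        FROM inserted AS i;

        -- Deletes: append the removed ('deleted') rows.
        IF NOT EXISTS (SELECT * FROM inserted)
            INSERT INTO dbo.MainTable_A (Id, SomeValue, DateOfChange, Person, Action)
            SELECT d.Id, d.SomeValue, GETDATE(), SUSER_SNAME(), 'D'
            FROM deleted AS d;
    END;
    ```

    Because triggers in SQL Server fire once per statement, both INSERT statements operate on whole row sets rather than single rows.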

  • This is just me, but I view audit systems as purely informational. When you start trying to make your audit system do everything, including making changes to the environment (say, rolling back a change), it becomes more than just an audit system. I know it seems like less value if you can't audit something and act on it with the same system, but you start down a pretty complicated path: turning your audit system into something that can not only tell you the history of what has happened, but also act on that history, reverting it if you dislike something or doing so automatically based on triggered events.

    I've found that a plain logging system is good enough for auditing, since I can act on the history myself. It doesn't have all the bells and whistles of some of these systems, but it's much easier to create, maintain, and drop into any environment.

  • I second Peter Midgley. Everything he said happens in our company too. 

    We use a trigger-based system. We track every time a row is added/deleted/changed, by user and time, and we record the old data for "important" fields that get changed. But we don't create an entire duplicate database.

    This keeps the space needed for audit logs to a sane level (it's an OLTB system).

    I also have to agree that accountability is a WONDERFUL way to keep users careful...

  • William Rayer - Tuesday, February 27, 2018 4:20 AM

    I use a simple approach which works well:

    1. Each table being audited should have a PK (or unique constraint). For each audited table, create an audit table with the same schema (columns and data types). Give the audit table a name like MainTable_A and add three extra columns: DateOfChange, Person and Action.

    2. Create one trigger for insert, update and delete on the main table. The insert and update actions append the 'inserted' rows to the audit table, and the delete action appends the 'deleted' rows. The extra audit columns can be populated with GETDATE(), SUSER_SNAME() and 'I', 'U' or 'D' to indicate the action.

    The result is that the audit table is maintained automatically with minimal coding effort. The edit history of each PK can be obtained from the audit table, and the most recent row for a PK value in the audit table will match the current value in the main table. The technique is loosely based on what I've seen in commercial apps, and on conversations with colleagues about different ways of implementing auditing.

    There is absolutely no need to ever audit INSERTs. All they do is double the storage requirements. If you INSERT a row and never change it, the original row is in both the original table and the audit table: instant duplication of data, and audit tables get large enough without that guaranteed duplication.

    Only audit updates and deletes (in other words, only audit changes to the data).  If a row is INSERTED and never updated, the original data will be in the original table.  If you update an original row in the original table, the old row (which is the original INSERTED data) will be copied to the audit table and the last update will be in the original table.

    Auditing INSERTs is a complete waste of time and resources, and even the built-in auditing methods in SQL Server don't audit INSERTs... they only audit modifications.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".
    "Dear Lord... I'm a DBA so please give me patience because, if you give me strength, I'm going to need bail money too!"

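    The update-and-delete-only scheme argued for above can be sketched as a trigger that writes only the pre-change rows, so an unchanged row lives solely in the main table. Table and column names here are hypothetical, reusing the MainTable_A layout from the earlier post:

    ```sql
    -- Audit changes only: no INSERT in the event list, so a row that is
    -- never modified is never duplicated into the audit table.
    CREATE TRIGGER dbo.trg_MainTable_AuditChanges
    ON dbo.MainTable
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- 'deleted' holds the pre-change rows for both updates and deletes;
        -- a non-empty 'inserted' table means this was an update.
        INSERT INTO dbo.MainTable_A (Id, SomeValue, DateOfChange, Person, Action)
        SELECT d.Id, d.SomeValue, GETDATE(), SUSER_SNAME(),
               CASE WHEN EXISTS (SELECT * FROM inserted) THEN 'U' ELSE 'D' END
        FROM deleted AS d;
    END;
    ```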

  • Jeff Moden - Tuesday, February 27, 2018 7:45 AM

    William Rayer - Tuesday, February 27, 2018 4:20 AM

    I use a simple approach which works well:

    1. Each table being audited should have a PK (or unique constraint). For each audited table, create an audit table with the same schema (columns and data types). Give the audit table a name like MainTable_A and add three extra columns: DateOfChange, Person and Action.

    2. Create one trigger for insert, update and delete on the main table. The insert and update actions append the 'inserted' rows to the audit table, and the delete action appends the 'deleted' rows. The extra audit columns can be populated with GETDATE(), SUSER_SNAME() and 'I', 'U' or 'D' to indicate the action.

    The result is that the audit table is maintained automatically with minimal coding effort. The edit history of each PK can be obtained from the audit table, and the most recent row for a PK value in the audit table will match the current value in the main table. The technique is loosely based on what I've seen in commercial apps, and on conversations with colleagues about different ways of implementing auditing.

    There is absolutely no need to ever audit INSERTs. All they do is double the storage requirements. If you INSERT a row and never change it, the original row is in both the original table and the audit table: instant duplication of data, and audit tables get large enough without that guaranteed duplication.

    Only audit updates and deletes (in other words, only audit changes to the data). If a row is INSERTed and never updated, the original data will be in the original table. If you update an original row in the original table, the old row (which is the original INSERTed data) will be copied to the audit table and the last update will be in the original table.

    Auditing INSERTs is a complete waste of time and resources, and even the built-in auditing methods in SQL Server don't audit INSERTs... they only audit modifications.

    Quite so; however, for completeness of information the 'original' table could include an applicationUser column and a DateLoaded column.

  • There is a lot of excellent advice in this thread. My contribution is to caution against over-engineering and complexity. I have seen audit systems that were so full of data and so cumbersome that they were almost impossible to understand. Yeah, the audit data is in there somewhere, thanks a heap for meeting your requirement, but good bleeding luck finding what you need. Give design consideration to usability.

  • In my previous job we didn't do much auditing. There was some. We knew who had touched some tables, both for inserts and updates, but that wasn't done in many places.

    In my current position one thing that really struck me was just how much they audit. It is very impressive. It's easy to say who did what and when. And even in some cases what machine they did it on. I'm quite impressed with how my current work environment does auditing. Kudos to them!!

    Kindest Regards, Rod

  • You think audit is hard? Wait until you're asked to do approval workflows -- where different users may have different rights to different attributes of the same entity...
    As other posters have mentioned, adding UserIDs and datetimes to rows and either using triggers or (2016+) temporal tables is a good quick solution, if it matches the business requirements.
    I think that's why we don't see more out-of-the-box audit specifications: different business teams have different audit requirements.

    roger.plowman - Tuesday, February 27, 2018 6:43 AM

    (it's an OLTB system).

    On-Line Transaction Bodging?
    Onion, Lettuce, Tomato, Bacon?
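    The temporal-table option mentioned above (SQL Server 2016+) can be sketched as follows. Table and column names are hypothetical, and the LastUser column is an added assumption: system versioning tracks when a row changed out of the box, but not who changed it.

    ```sql
    -- System-versioned (temporal) table: SQL Server keeps every prior
    -- version of each row in the history table automatically.
    CREATE TABLE dbo.MainTable
    (
        Id        int         NOT NULL PRIMARY KEY,
        SomeValue varchar(50) NULL,
        LastUser  sysname     NOT NULL DEFAULT SUSER_SNAME(),  -- "who" is not built in
        ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MainTable_History));

    -- Querying the state of the data at a point in time:
    -- SELECT * FROM dbo.MainTable FOR SYSTEM_TIME AS OF '2018-02-27';
    ```

    The trade-off versus hand-written triggers is less control over what gets recorded, in exchange for no trigger code to maintain.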

  • sknox - Tuesday, February 27, 2018 1:16 PM

    You think audit is hard? Wait until you're asked to do approval workflows -- where different users may have different rights to different attributes of the same entity...
    As other posters have mentioned, adding UserIDs and datetimes to rows and either using triggers or (2016+) temporal tables is a good quick solution, if it matches the business requirements.
    I think that's why we don't see more out-of-the-box audit specifications: different business teams have different audit requirements.

    roger.plowman - Tuesday, February 27, 2018 6:43 AM

    (it's an OLTB system).

    On-Line Transaction Bodging?
    Onion, Lettuce, Tomato, Bacon?

    Oops, OLTP of course. The dangers of posting pre-caffeine!

  • Many data audit systems are created in reaction to an event, like a security breach or while troubleshooting an issue that requires detailed diagnostics. Therefore, audit processes are put in place hastily, sometimes irrationally, and outside the normal development and deployment process. That's why it's best to put some thought into it, maybe follow a standard best practice or pattern that's already been proven effective.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • There is absolutely no need to ever audit INSERTs. All they do is double the storage requirements. If you INSERT a row and never change it, the original row is in both the original table and the audit table: instant duplication of data, and audit tables get large enough without that guaranteed duplication.

    If you don't audit the first INSERT, how do you know who inserted the row or when it was inserted? The only way I could think of doing this is by adding extra columns to the original table.
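    One common way to do exactly that, capturing who and when on insert without writing anything to the audit table, is a pair of defaulted columns on the base table itself (table, column, and constraint names here are hypothetical):

    ```sql
    -- Defaults populate these automatically on INSERT, so the original
    -- row carries its own "who inserted it and when".
    ALTER TABLE dbo.MainTable
        ADD CreatedBy   sysname  NOT NULL
                CONSTRAINT DF_MainTable_CreatedBy   DEFAULT SUSER_SNAME(),
            CreatedDate datetime NOT NULL
                CONSTRAINT DF_MainTable_CreatedDate DEFAULT GETDATE();
    ```

    Note that SUSER_SNAME() records the login, which may be a shared application account; applications that connect that way would need to pass the end user's name explicitly.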

  • William Rayer - Thursday, March 1, 2018 1:27 AM

    There is absolutely no need to ever audit INSERTs. All they do is double the storage requirements. If you INSERT a row and never change it, the original row is in both the original table and the audit table. Instant duplication of data and audit tables get large enough without that guaranteed duplication of data.

    If you don't audit the first INSERT, how do you know who inserted the row or when it was inserted? The only way I could think of doing this is by adding extra columns to the original table.

    What I said!!!
