Update Trigger Updates Correct Record In Another Table

  • There is a case for auditing inserts - it depends on what you are using the audit trail for.

    If you want to report on the full history of the data, then you will need the current state. It's possible that you don't want to access or give access to the production table for this, so you will need the current row in the audit table. For this you would usually audit the new values of updated rows rather than the old values. Some systems audit the pair for each update, which is wasteful but could make some things easier.
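
    A minimal sketch of that "audit the new values" approach, as an AFTER UPDATE trigger (the table, column, and trigger names here are hypothetical, purely for illustration):

    ```sql
    -- Hypothetical example: copy the new values of each updated row
    -- into a separate audit table, stamped with date and user.
    CREATE TRIGGER trg_Customer_Audit
    ON dbo.Customer
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON; -- keep extra rowcounts from confusing client code

        INSERT INTO dbo.Customer_Audit
               (CustomerID, Name, Balance, AuditDate, AuditUser)
        SELECT i.CustomerID, i.Name, i.Balance, GETDATE(), SUSER_SNAME()
        FROM inserted i; -- "inserted" holds the new values of the updated rows
    END;
    ```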

    It's fairly common to find that someone's been given access to the audit trail on the production system to trace problems, and they are running long ad hoc queries which severely affect performance.


    Cursors never.
    DTS - only when needed and never to control.

  • Ninja's_RGR'us (9/29/2007)


    Sorry Jeff, but I only audit full rows, not columns, so the 5:1 ratio doesn't apply.

    2. What do you do about the rule that says that auditing data should be kept separate from the base table and not accessible to the users?

    Heh... don't be sorry... that can be the smart way to go if you really want easy and full accountability. Beats the hell out of reassembling rows, sometimes. But, still, why capture the initial row? It's in the original table and the trigger will catch all changes. You still have at least a 2:1 overhead using that method.

    For the other thing, how do you meet that rule? 😉 In SQL Server 2000, current user has to be able to write to the audit table in order for the trigger to work. In SQL Server 2005, I believe there's a method to do a built in proxy but I don't know the command off the top of my head...
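
    A rough sketch of that SQL Server 2005 mechanism, using the WITH EXECUTE AS clause on the trigger itself (the proxy user, table, and column names are hypothetical):

    ```sql
    -- Hypothetical sketch: the trigger runs under a proxy security context,
    -- so end users need no direct rights on the audit table.
    CREATE TRIGGER trg_Customer_Audit
    ON dbo.Customer
    WITH EXECUTE AS 'AuditProxy' -- a database user with INSERT rights on the audit table
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.Customer_Audit
               (CustomerID, Name, Balance, AuditDate, AuditUser)
        SELECT i.CustomerID, i.Name, i.Balance, GETDATE(), ORIGINAL_LOGIN()
        FROM inserted i;
    END;
    ```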

    If you really want to deny users the ability to get at the audit logs, look into "C2 Auditing". Haven't tried it myself, but it supposedly records even unsuccessful attempts at data access. Dunno if it actually audits data though... I'd have to do a bit more research because, like I said, I haven't used it before.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • nigelrivett (9/29/2007)


    There is a case for auditing inserts - it depends on what you are using the audit trail for.

    If you want to report on the full history of the data, then you will need the current state. It's possible that you don't want to access or give access to the production table for this, so you will need the current row in the audit table.

    True enough... just prepare for your little 10 gig database to actually occupy 60+ gigabytes 😉

    For this you would usually audit the new values of updated rows rather than the old values. Some systems audit the pair for each update which is wasteful but could make some things easier.

    True enough... having only the new value column will save some bytes... an average of about 7 per row... still have the 5:1 overhead of auditing inserts. Again, if that's what you need to do, so be it... but you also need to explain to the powers that be that their plans for hard disk space need to be drastically changed.

    It's fairly common to find that someone's been given access to the audit trail on the production system to trace problems, and they are running long ad hoc queries which severely affect performance.

    That's when we make a certified copy of the data on a special server that only the audit team has access to. :hehe:

    Ok, let's shift gears a bit and talk about the main problem with auditing... performance!...

    First, I hate trigger code because it's semi-hidden code that everyone seems to forget about when a performance problem rears its ugly head. That's normally when they decide to spend the money on bigger, faster, better hardware only to find out that it just didn't help because of the crap code in things like audit triggers.

    Second, the reason for the crap code in audit triggers is that everyone is lazy! Hell, I am too! The last thing I want to do is write dedicated static audit triggers for 200 different tables. So, everyone goes for some form of "generic" solution where they can copy nearly the same code for every trigger that doesn't actually involve having to know any column names on the original table. That's where the problem starts... when I tested the code at the URL that Phil mentioned, I found the code was quite clever... it was also agonizingly slow! C'mon... 4 seconds to do a piddley 3 inserts, 4 updates, and one full delete of 3 rows?? Not good. If you're going to make it "generic", ya gotta test for performance or your boss will have you looking for a new job.

    So, what to do? Write some code for one table that meets both the needs of whatever auditing system you need to have in place and the need for speed. Then, write some dynamic SQL that will generate similar static code for every table you need to audit. Execute the results, and you're done. If a table changes (hopefully, you have some form of configuration management and change controls), then make it a requirement that the newly generated audit trigger code be submitted with it.
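
    A rough sketch of that generator idea... dynamic SQL that emits a static trigger script per table (the schema, trigger naming, and table list are hypothetical):

    ```sql
    -- Hypothetical sketch: build static, per-table audit trigger scripts
    -- from the system catalog, then review and execute the generated code.
    SELECT 'CREATE TRIGGER trg_' + t.name + '_Audit'
         + ' ON dbo.' + QUOTENAME(t.name)
         + ' AFTER UPDATE AS BEGIN SET NOCOUNT ON;'
         + ' INSERT INTO audit.' + QUOTENAME(t.name + '_Audit')
         + ' SELECT *, GETDATE(), SUSER_SNAME() FROM inserted;' -- audit table has 2 extra columns
         + ' END;'
    FROM sys.tables t
    WHERE t.name IN ('Customer', 'Orders'); -- the tables to be audited
    ```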

    And, don't forget... returned rowcounts can kill GUI code depending on the way it was written... always do a SET NOCOUNT ON in the trigger... such settings only change what's happening in the trigger and don't roll up to the calling action.

    Final thought on audit triggers... remember that, someday, you're gonna need to read the audit table and put rows back together for given points in time... having textual multi-column primary key listings will make that effort more difficult and the queries will run slow. During the design of your database, consider a surrogate key using an IDENTITY column for any table that has a composite PK... use an alternate key (Unique Index) on those composite keys and use the IDENTITY column as the PK... makes auditing and examining audits a whole lot easier.
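
    That surrogate-key design could be sketched like this (the table and constraint names are hypothetical):

    ```sql
    -- Hypothetical sketch: IDENTITY surrogate PK plus a unique constraint
    -- (alternate key) preserving the original composite key.
    CREATE TABLE dbo.OrderDetail
    (
        OrderDetailID int IDENTITY(1,1) NOT NULL
            CONSTRAINT PK_OrderDetail PRIMARY KEY CLUSTERED,
        OrderID       int NOT NULL,
        ProductID     int NOT NULL,
        Quantity      int NOT NULL,
        CONSTRAINT AK_OrderDetail UNIQUE (OrderID, ProductID) -- the old composite key
    );
    -- The audit table then needs only OrderDetailID to identify a row.
    ```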

    --Jeff Moden



  • Don't get too excited in anticipation of C2 auditing. When I tried it, it just gave a blizzard of irrelevant information along with the nuggets, which were, as a result, hard to find. It's dead useful for putting a tick in the box when the compliance officer asks if you are C2 compliant, but I can't see anyone assembling an audit trail from it in a hurry!

    Best wishes,
    Phil Factor

  • Imagine a db with 100 000 transactions per MONTH

    Users have write access to all the tables... and you need to cover your butt because they are not all that techno savvy...

    Now you get the picture :hehe:.

  • Jeff Moden (9/30/2007)


    nigelrivett (9/29/2007)


    There is a case for auditing inserts - it depends on what you are using the audit trail for.

    If you want to report on the full history of the data, then you will need the current state. It's possible that you don't want to access or give access to the production table for this, so you will need the current row in the audit table.

    True enough... just prepare for your little 10 gig database to actually occupy 60+ gigabytes 😉

    For this you would usually audit the new values of updated rows rather than the old values. Some systems audit the pair for each update which is wasteful but could make some things easier.

    True enough... having only the new value column will save some bytes... an average of about 7 per row... still have the 5:1 overhead of auditing inserts. Again, if that's what you need to do, so be it... but you also need to explain to the powers that be that their plans for hard disk space need to be drastically changed.

    It's fairly common to find that someone's been given access to the audit trail on the production system to trace problems, and they are running long ad hoc queries which severely affect performance.

    That's when we make a certified copy of the data on a special server that only the audit team has access to. :hehe:

    Ok, let's shift gears a bit and talk about the main problem with auditing... performance!...

    First, I hate trigger code because it's semi-hidden code that everyone seems to forget about when a performance problem rears its ugly head. That's normally when they decide to spend the money on bigger, faster, better hardware only to find out that it just didn't help because of the crap code in things like audit triggers.

    Second, the reason for the crap code in audit triggers is that everyone is lazy! Hell, I am too! The last thing I want to do is write dedicated static audit triggers for 200 different tables. So, everyone goes for some form of "generic" solution where they can copy nearly the same code for every trigger that doesn't actually involve having to know any column names on the original table. That's where the problem starts... when I tested the code at the URL that Phil mentioned, I found the code was quite clever... it was also agonizingly slow! C'mon... 4 seconds to do a piddley 3 inserts, 4 updates, and one full delete of 3 rows?? Not good. If you're going to make it "generic", ya gotta test for performance or your boss will have you looking for a new job.

    So, what to do? Write some code for one table that meets both the needs of whatever auditing system you need to have in place and the need for speed. Then, write some dynamic SQL that will generate similar static code for every table you need to audit. Execute the results, and you're done. If a table changes (hopefully, you have some form of configuration management and change controls), then make it a requirement that the newly generated audit trigger code be submitted with it.

    And, don't forget... returned rowcounts can kill GUI code depending on the way it was written... always do a SET NOCOUNT ON in the trigger... such settings only change what's happening in the trigger and don't roll up to the calling action.

    Final thought on audit triggers... remember that, someday, you're gonna need to read the audit table and put rows back together for given points in time... having textual multi-column primary key listings will make that effort more difficult and the queries will run slow. During the design of your database, consider a surrogate key using an IDENTITY column for any table that has a composite PK... use an alternate key (Unique Index) on those composite keys and use the IDENTITY column as the PK... makes auditing and examining audits a whole lot easier.

    Not sure what your point is. You seem to be agreeing with me that the audit solution needs to be tailored to the system. There is a place for a lot of different solutions and it's up to you to choose which is best. Trying to implement a single solution in all instances is doomed to failure, and often performance isn't the most important feature - nor is space (but your calculations, apart from mostly being wrong, are making assumptions - try thinking of occasions where the same solution would save space).


    Cursors never.
    DTS - only when needed and never to control.

  • Show me why the space calculations are wrong or why any calculation can justify the unnecessary space wasted and unnecessary performance hit of auditing inserts.

    And, although you said...

    "Trying to implement a single solution in all instances is doomed to failure"

    that's exactly the type of code you wrote in the very example I'm talking about. :hehe:

    And performance isn't the most important thing? True enough, but it's a very, very close second. Stay up all night babysitting some long-winded 8-hour code that you could rewrite to run in 10 minutes and tell me performance isn't important. Try fitting a couple of dozen of those 8-hour runs into a single night and tell me that performance isn't important. Try doing a rerun at a critical month end, before the taxman cometh, on one of those 8-hour runs and tell me performance isn't important.

    And, I'm waiting for your rendition of the space calculations... should be interesting based on what you said...

    --Jeff Moden


