• Gosh, thanks for the extensive writeup. Unfortunately, the only way I've seen to get the table name through a CLR trigger is a nasty hack that uses a tablenameID naming convention on the primary key. That, of course, isn't very reliable because someone could change the name of the PK at any time and, unless you invoke a DDL trigger of some sort to enforce such a naming convention on new tables, there's also no guarantee that new tables will follow the correct convention.

    Equally unfortunate, your writeup still makes me think that you're in for even more headaches and performance issues, especially since you're considering auditing only the columns that actually changed. If you want to actually compare values between the INSERTED and DELETED tables, then a CLR trigger is still going to be slow on larger inserts/updates because it has to load the INSERTED and DELETED tables into memory first. Even if you only use the equivalent of "IF UPDATE(columnname)" in your CLR (whatever that equivalent may be... I'm not a C# programmer), you still might not be OK because you have to read the trigger tables into memory before you can interrogate them in a CLR. Of course, that can all be avoided using the hard-coded triggers we spoke of earlier.
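    To make that concrete, here's a rough sketch of the kind of hard-coded trigger I mean. The table (dbo.Customer), its columns, and the dbo.AuditStaging layout are all made up for the example; the point is that IF UPDATE() lets the trigger skip columns that weren't even in the SET list before it ever touches INSERTED or DELETED.

        CREATE TRIGGER trg_Customer_AuditUpdate ON dbo.Customer
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Only read the trigger tables for columns that appeared in the SET list.
            IF UPDATE(CustomerName)
            BEGIN
                INSERT INTO dbo.AuditStaging (TableName, ColumnName, Action, OldValue, NewValue, RowKey)
                SELECT 'Customer', 'CustomerName', 'U',
                       d.CustomerName, i.CustomerName, i.CustomerID
                  FROM inserted i
                  JOIN deleted  d ON d.CustomerID = i.CustomerID
                 -- UPDATE() only proves the column was listed, not that the value changed.
                 WHERE ISNULL(d.CustomerName, '') <> ISNULL(i.CustomerName, '');
            END;
        END;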

    You spoke of a sproc on each table (if I read that correctly) to initially generate and update the hard-coded triggers. I have been able to create a generic proc that will do that for you just by passing it the name of the table you want to create the trigger for. It uses the same method that you're talking about moving to... auditing only the changes... and it does so in a "common" audit table. I've not done it myself because I only audit a handful of critical tables (8 of them), but I believe you could write a DDL trigger that would detect any schema changes to the tables and auto-magically rebuild the audit triggers on whatever table changed.
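    A bare-bones sketch of what that DDL trigger could look like follows; the proc name and parameter list (dbo.GenerateAuditTrigger) are just stand-ins for whatever your generic trigger-builder ends up being called.

        CREATE TRIGGER ddl_RebuildAuditTrigger ON DATABASE
        FOR ALTER_TABLE
        AS
        BEGIN
            -- Pull the name of the table that was just altered out of the event payload.
            DECLARE @Event XML, @SchemaName SYSNAME, @TableName SYSNAME;
            SELECT @Event = EVENTDATA();
            SELECT @SchemaName = @Event.value('(/EVENT_INSTANCE/SchemaName)[1]', 'NVARCHAR(128)'),
                   @TableName  = @Event.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(128)');

            -- Rebuild the audit trigger for just the table that changed (hypothetical proc name).
            EXEC dbo.GenerateAuditTrigger @SchemaName, @TableName;
        END;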

    Actually, some of that's not quite right. I audit the changes and send them to a common audit "Staging" table that has only one index in order to prevent any page splits (all new audit rows go to the logical end of the table). That index, of course, is based on an IDENTITY column having nothing to do with the data in each row.
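    Something along these lines, although the exact columns are just my guess at what such a staging table might carry:

        CREATE TABLE dbo.AuditStaging
        (
            AuditID    BIGINT IDENTITY(1,1) NOT NULL,
            TableName  SYSNAME      NOT NULL,
            ColumnName SYSNAME      NOT NULL,
            Action     CHAR(1)      NOT NULL,  -- 'I', 'U', or 'D'
            OldValue   SQL_VARIANT      NULL,
            NewValue   SQL_VARIANT      NULL,
            RowKey     SQL_VARIANT      NULL,  -- key of the audited row
            AuditDT    DATETIME     NOT NULL CONSTRAINT DF_AuditStaging_AuditDT DEFAULT (GETDATE()),
            -- The ONLY index: clustered on the IDENTITY so every new row lands at the logical end.
            CONSTRAINT PK_AuditStaging PRIMARY KEY CLUSTERED (AuditID)
        );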

    I also have a scheduled job that runs once a minute to transfer the audit data from the staging table to the final audit table. Since each row has the table and column name recorded in it as well as the type of action (Insert/Update/Delete) that caused the row to be audited, I would imagine that you could take such a thing further by using the info in the staging table to create your XML rather than trying to do it "in process" during a CLR trigger.

    Of course, that process also clears rows from the staging table once they've been moved. You could do the same thing once the rows have been converted to XML.
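    Just as one way to skin that cat (dbo.AuditFinal is a made-up name for the permanent audit table), on SQL Server 2005 and up a single DELETE with an OUTPUT clause will move the staging rows and clear them in one atomic statement; note that the OUTPUT ... INTO target can't have enabled triggers or be part of a foreign key.

        -- Move everything currently in staging to the permanent audit table and
        -- remove it from staging in the same statement.
        DELETE stg
        OUTPUT DELETED.TableName, DELETED.ColumnName, DELETED.Action,
               DELETED.OldValue,  DELETED.NewValue,  DELETED.RowKey, DELETED.AuditDT
          INTO dbo.AuditFinal (TableName, ColumnName, Action, OldValue, NewValue, RowKey, AuditDT)
          FROM dbo.AuditStaging stg;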

    The bottom line is that I think you might actually make performance matters worse by trying to do any of this auditing with a CLR, not because CLR is bad but because you end up needing to load the entirety of both trigger tables in order to do anything real with them. A hard-coded trigger using "IF UPDATE(columnname)" will run much faster, especially for "record changes only" types of triggers on wide tables.

    There's one other thing that you need to worry about whether you use a CLR or a hard-coded trigger: BLOB and certain other datatypes aren't auditable through a trigger. You can tell what action was taken but you can't actually directly audit the changes that occurred. You would need to audit such columns by making a trip to the real table and picking up the new value to store. There's no way that I'm aware of to audit the "before" condition of the data.
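    In other words, for the old-style LOB types (text, ntext, image) you end up with a trigger fragment something like the one below. The dbo.Document table and the dbo.AuditLOB target are made-up names; the point is the join back to the base table, and the fact that only the "after" value is recoverable.

        -- Fragment of an AFTER UPDATE trigger on a table with a legacy LOB column.
        -- The inserted/deleted tables don't carry text/ntext/image columns, so the
        -- new value has to be read back from the real table; the old value is
        -- already gone by the time the trigger fires.
        INSERT INTO dbo.AuditLOB (TableName, ColumnName, Action, NewValue, RowKey)
        SELECT 'Document', 'DocBody', 'U',
               t.DocBody,        -- new value picked up from the base table
               i.DocumentID
          FROM inserted i
          JOIN dbo.Document t ON t.DocumentID = i.DocumentID;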

    Last but not least, you spoke of using Service Broker to move some of this data. Wouldn't it be easier to set up some form of replication of the audit data and change the data to XML there? I could certainly be incorrect but it seems like you might be using Service Broker as a "poor man's" replication.

    And thanks for the feedback on the "other" project.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)