Logging (auditing) in the table vs. having a logging (auditing) table

  • Rod at work

    SSC-Dedicated

    Points: 33402

    I really like Steve Jones' editorial today, Carefully Giving Normal Users Security Permissions. I've got a question about something Steve brought up, but I didn't want to ask it in response to the editorial, as I thought my question was too far off topic.

    In my current job, I've seen the practice of logging (auditing) users' activity to a much higher degree than I did at my old job, and I think my current employer and colleagues are doing a much better job on this than I did there. Kudos to them!

    Their practice is to add four columns to every table: CreatedBy, CreatedOn, UpdatedBy and UpdatedOn. (They might not do this for lookup tables, but every other table has those four columns.) But Steve seems to favor using a separate logging/audit table over having those four columns in each table. What I'd like to ask is this: is using a separate log/audit table a better practice, or is it just a preference?
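
    For context, the two approaches look something like this (a minimal sketch; the table and column names here are illustrative, not from any actual schema in the thread):

    ```sql
    -- Approach 1: audit columns carried on the table itself
    CREATE TABLE dbo.Customer
    (
        CustomerId   INT IDENTITY(1,1) PRIMARY KEY,
        CustomerName NVARCHAR(100) NOT NULL,
        CreatedBy    NVARCHAR(50)  NOT NULL,
        CreatedOn    DATETIME2(3)  NOT NULL DEFAULT SYSDATETIME(),
        UpdatedBy    NVARCHAR(50)  NULL,  -- NULL until the first update
        UpdatedOn    DATETIME2(3)  NULL
    );

    -- Approach 2: a separate audit table, one row per change
    CREATE TABLE dbo.CustomerAudit
    (
        AuditId    BIGINT IDENTITY(1,1) PRIMARY KEY,
        CustomerId INT           NOT NULL,
        ChangeType CHAR(1)       NOT NULL,  -- 'I', 'U', or 'D'
        ChangedBy  NVARCHAR(50)  NOT NULL,
        ChangedOn  DATETIME2(3)  NOT NULL DEFAULT SYSDATETIME(),
        OldName    NVARCHAR(100) NULL,
        NewName    NVARCHAR(100) NULL
    );
    ```

    The key difference: the in-row columns keep only the latest change, while the separate table keeps the full history of who changed what, and when.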

    Kindest Regards, Rod Connect with me on LinkedIn.

  • Jeff Moden

    SSC Guru

    Points: 996863

    It (a separate table) is a better practice for multiple reasons, especially if it can't be altered without all hell breaking loose.  The 4 columns you speak of (I refer to them collectively as "LMB columns" and "poor-man's auditing") don't keep a history, and the Updated_By column is a source of severe fragmentation because almost everyone starts it off as a NULL and only populates it during or after a modification.  Temporal tables were a start in the right direction, but they don't have a way to determine who or what did the modification auto-magically.

    And, as you know, properly updating those Modified_By and date columns should be done by a trigger.  AFTER triggers take a relatively long time and cause secondary updates, and INSTEAD OF triggers are a whopping PITA.  I DO wish MS would come up with the true concept of a BEFORE trigger, if for no other reason than to support "Poor-Man's auditing".
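
    The "secondary update" Jeff mentions is visible in a minimal AFTER trigger sketch like this (names are hypothetical; note that every user UPDATE causes a second UPDATE issued from inside the trigger):

    ```sql
    CREATE TRIGGER trg_Customer_Stamp
    ON dbo.Customer
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- The secondary update: the rows just modified get touched again
        UPDATE c
        SET UpdatedBy = SUSER_SNAME(),  -- the SQL/Windows login, not an app user
            UpdatedOn = SYSDATETIME()
        FROM dbo.Customer AS c
        INNER JOIN inserted AS i
            ON c.CustomerId = i.CustomerId;
    END;
    ```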

    I started the "discovery" process on other methods but the company decided that it would be too much of a change because a whole lot of people wrote code against the LMB columns that they started using 12 years ago. <headdesk>.

    Heh... and I guess "better" is seriously relative to what one is used to.  Considering the problems associated with LMB columns, I wouldn't say that they're a "lot better" than what you were doing.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".
    "If "pre-optimization" is the root of all evil, then what does the resulting no optimization lead to?"

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeffrey Williams

    SSC Guru

    Points: 88607

    Jeff Moden wrote:

    It (a separate table) is a better practice for multiple reasons, especially if it can't be altered without all hell breaking loose.  The 4 columns you speak of (I refer to them collectively as "LMB columns" and "poor-man's auditing") don't keep a history, and the Updated_By column is a source of severe fragmentation because almost everyone starts it off as a NULL and only populates it during or after a modification.  Temporal tables were a start in the right direction, but they don't have a way to determine who or what did the modification auto-magically.

    And, as you know, properly updating those Modified_By and date columns should be done by a trigger.  AFTER triggers take a relatively long time and cause secondary updates, and INSTEAD OF triggers are a whopping PITA.  I DO wish MS would come up with the true concept of a BEFORE trigger, if for no other reason than to support "Poor-Man's auditing".

    I started the "discovery" process on other methods but the company decided that it would be too much of a change because a whole lot of people wrote code against the LMB columns that they started using 12 years ago. <headdesk>.

    Heh... and I guess "better" is seriously relative to what one is used to.  Considering the problems associated with LMB columns, I wouldn't say that they're a "lot better" than what you were doing.

    If your system uses individual SQL or Windows logins, then I would agree that system-versioned tables have an issue.  However, a lot of applications utilize a 'generic' login from the application services and include an application user/login to identify the person performing the work.

    In those situations, using a system-versioned table and separate modified/updated columns would actually work quite well.  You would not need the date columns, and therefore wouldn't need to allow NULL values at all... which avoids the issue of expansive updates.  I also would not have these columns defined as varchar; instead, they would be an FK to the application users table.
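
    A system-versioned table along those lines might be sketched as follows (hypothetical names; the point is that ModifiedBy is a NOT NULL INT with an FK to an application users table, while SYSTEM_TIME tracks the "when" automatically):

    ```sql
    CREATE TABLE dbo.AppUser
    (
        AppUserId INT IDENTITY(1,1) PRIMARY KEY,
        UserName  NVARCHAR(100) NOT NULL
    );

    CREATE TABLE dbo.Customer
    (
        CustomerId   INT IDENTITY(1,1) PRIMARY KEY,
        CustomerName NVARCHAR(100) NOT NULL,
        ModifiedBy   INT NOT NULL   -- set by the application on every write
            CONSTRAINT FK_Customer_AppUser REFERENCES dbo.AppUser (AppUserId),
        ValidFrom DATETIME2(3) GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   DATETIME2(3) GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));
    ```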

     

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeff Moden

    SSC Guru

    Points: 996863

    Totally agreed on the Modified_By and Created_By columns being fixed length, like an INT column with an FK, but if an application has a generic login, that would mean the application would still have to provide the user ID.  That won't help if someone uses local DML to do an update... you'd still need a trigger.

    --Jeff Moden



  • Rod at work

    SSC-Dedicated

    Points: 33402

    Jeff Moden wrote:

    It (a separate table) is a better practice for multiple reasons, especially if it can't be altered without all hell breaking loose.  The 4 columns you speak of (I refer to them collectively as "LMB columns" and "poor-man's auditing") don't keep a history, and the Updated_By column is a source of severe fragmentation because almost everyone starts it off as a NULL and only populates it during or after a modification.  Temporal tables were a start in the right direction, but they don't have a way to determine who or what did the modification auto-magically.

    And, as you know, properly updating those Modified_By and date columns should be done by a trigger.  AFTER triggers take a relatively long time and cause secondary updates, and INSTEAD OF triggers are a whopping PITA.  I DO wish MS would come up with the true concept of a BEFORE trigger, if for no other reason than to support "Poor-Man's auditing".

    I started the "discovery" process on other methods but the company decided that it would be too much of a change because a whole lot of people wrote code against the LMB columns that they started using 12 years ago. <headdesk>.

    Heh... and I guess "better" is seriously relative to what one is used to.  Considering the problems associated with LMB columns, I wouldn't say that they're a "lot better" than what you were doing.

    Man, you aren't kidding about the huge changes that would be required if we were to try to change all of those LMB columns (as you put it) to using separate audit tables. Every database I have access to, if it's something that was done in house, uses those LMB columns. All of those applications and reports that would have to be modified - ugh.

    I am involved in helping to create new databases and apps. I'll try to champion using separate audit tables, but I'm guessing they'll want to stay with the "poor-man's auditing", as you put it. It would probably be easier to keep them consistent with other systems already in place.

    Kindest Regards, Rod Connect with me on LinkedIn.

  • Rod at work

    SSC-Dedicated

    Points: 33402

    Jeff Moden wrote:

    Totally agreed on the Modified_By and Created_By columns being fixed length, like an INT column with an FK, but if an application has a generic login, that would mean the application would still have to provide the user ID.  That won't help if someone uses local DML to do an update... you'd still need a trigger.

    That's something we haven't thought of. We're using NVARCHAR(50) for both Created_By and Modified_By. I like that idea better!

    Kindest Regards, Rod Connect with me on LinkedIn.

  • Mr. Brian Gale

    SSC-Insane

    Points: 23080

    To take a different approach, it depends on what you are trying to capture with the audit trail.  Do you care about who and when each and every change was made OR only the most recent change?

    One thing you could do to make the transition (presuming you go that route) a bit less painful is to rename the existing tables and create views with the old table names that present things the way they are now.  That way no applications break, no reports break, and you get the improved audit trail and can capture more information in the auditing, such as previous values and current values.

    A nice part about this too is that you can build the view, auditing table, and triggers all without breaking the application or reports; it is just going to be a small performance hit on the table as you have more triggers on it.  Then, when you are happy that the view works with the main table but uses the auditing table for the "modified_by" and "modified_on" columns, you can rename the table and the view and nobody should notice a thing.

    I'd still do this on a test system first: pick a table that you know an application reads and writes to, make the table into a view, and then run a few tests in the app.

    Pain in the butt if you have a lot of tables with those audit columns, but you can do the transition gradually as you have time and with minimal breaking.
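
    The rename-and-view transition might look something like this (a sketch with hypothetical names, assuming an audit table such as dbo.CustomerAudit that records one row per change):

    ```sql
    -- 1. Rename the real table out of the way.
    EXEC sp_rename 'dbo.Customer', 'Customer_Base';
    GO

    -- 2. Re-create the old name as a view so existing apps and reports keep
    --    working, serving the "modified" columns from the audit table instead.
    CREATE VIEW dbo.Customer
    AS
    SELECT  b.CustomerId,
            b.CustomerName,
            b.CreatedBy,
            b.CreatedOn,
            a.ChangedBy AS UpdatedBy,   -- most recent change wins
            a.ChangedOn AS UpdatedOn
    FROM dbo.Customer_Base AS b
    OUTER APPLY
    (
        SELECT TOP (1) ChangedBy, ChangedOn
        FROM dbo.CustomerAudit
        WHERE CustomerId = b.CustomerId
        ORDER BY ChangedOn DESC
    ) AS a;
    ```

    Writes through a view like this would need INSTEAD OF triggers, presumably part of the extra trigger cost mentioned above.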

  • Jeffrey Williams

    SSC Guru

    Points: 88607

    Jeff Moden wrote:

    Totally agreed on the Modified_By and Created_By columns being fixed length, like an INT column with an FK, but if an application has a generic login, that would mean the application would still have to provide the user ID.  That won't help if someone uses local DML to do an update... you'd still need a trigger.

    But that comes down to security and access to the system.  If you have allowed users direct access to the tables to perform DML, then I would agree that you would want a trigger to force an update to any auditing-based columns.

    However, if you are using an application-based user - defined in an application user table - and you have direct DML from users, you have a bigger problem.  All modifications to those tables should only be allowed through stored procedures, and those should require the application user ID as one of the parameters.

    Your trigger would have no access to the identity of the application user based on the SQL or Windows login.  You would have to make sure the system is able to track those users and tie them back to the application user somehow.  That would require - at a minimum - a cross-reference table between the SQL/Windows logins and the application user table, but of course, as soon as you utilize an AD security group, that becomes virtually impossible.

    Better to make sure you have the correct security model for that application - and limit access to update those tables only through stored procedures.
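
    The procedure-only pattern described here could be sketched like this (procedure, table, and role names are illustrative):

    ```sql
    -- All writes demand the application user ID as a parameter.
    CREATE PROCEDURE dbo.Customer_Update
        @CustomerId   INT,
        @CustomerName NVARCHAR(100),
        @AppUserId    INT   -- the application user, not the SQL/Windows login
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE dbo.Customer
        SET CustomerName = @CustomerName,
            ModifiedBy   = @AppUserId,
            ModifiedOn   = SYSDATETIME()
        WHERE CustomerId = @CustomerId;
    END;
    GO

    -- The service login gets EXECUTE on the procedure and no table-level DML rights.
    GRANT EXECUTE ON dbo.Customer_Update TO AppServiceRole;
    ```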

     

     

    Jeffrey Williams

Viewing 8 posts - 1 through 8 (of 8 total)
