Adding multiple triggers to same table - Recommendations/Best Practices

  • Hi Folks,

    I have a requirement to add additional triggers (INSERT/UPDATE) to a table which already has INSERT/UPDATE/DELETE triggers. This is for the purpose of adding audit columns; basically, I want these columns to record when the row was last inserted or updated and by whom. What is the downside of adding separate multiple INSERT triggers or multiple UPDATE triggers on the same table? What are the best practices or the recommended approach for this?
    I see the trigger execution order can be set with the sp_settriggerorder stored proc.

    Please share your valuable thoughts on this.

    Many thanks!

  • SQL!$@w$0ME - Friday, February 16, 2018 8:57 AM

    Hi Folks,

    I have a requirement to add additional triggers (INSERT/UPDATE) to a table which already has INSERT/UPDATE/DELETE triggers. This is for the purpose of adding audit columns; basically, I want these columns to record when the row was last inserted or updated and by whom. What is the downside of adding separate multiple INSERT triggers or multiple UPDATE triggers on the same table? What are the best practices or the recommended approach for this?
    I see the trigger execution order can be set with the sp_settriggerorder stored proc.

    Please share your valuable thoughts on this.

    Many thanks!

    sp_settriggerorder only lets you set First and Last, so if you have more than two triggers, you don't necessarily control the order.
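
    For reference, here's a minimal sketch of setting the firing order (the trigger and table names are made up):

    -- Make a hypothetical audit trigger fire last among the UPDATE triggers on its table
    EXEC sp_settriggerorder
         @triggername = N'dbo.trg_MyTable_Audit',
         @order       = N'Last',
         @stmttype    = N'UPDATE';
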
    Personally, I avoid triggers whenever I can.

    Sue

  • Sue_H - Friday, February 16, 2018 11:02 AM

    SQL!$@w$0ME - Friday, February 16, 2018 8:57 AM

    Hi Folks,

    I have a requirement to add additional triggers (INSERT/UPDATE) to a table which already has INSERT/UPDATE/DELETE triggers. This is for the purpose of adding audit columns; basically, I want these columns to record when the row was last inserted or updated and by whom. What is the downside of adding separate multiple INSERT triggers or multiple UPDATE triggers on the same table? What are the best practices or the recommended approach for this?
    I see the trigger execution order can be set with the sp_settriggerorder stored proc.

    Please share your valuable thoughts on this.

    Many thanks!

    sp_settriggerorder only lets you set First and Last, so if you have more than two triggers, you don't necessarily control the order.
    Personally, I avoid triggers whenever I can.

    Sue

    +1. Avoid triggers as far as possible; there can be other ways of achieving your goal.

    ...

  • I guess I'd just combine trigger actions and keep the actual number of triggers to a minimum.

    As for avoiding triggers, I think that too many people make the recommendation to avoid them because too many people have written poorly performing queries, and the old wives' tales about such catastrophes abound. Properly written triggers have nearly zero impact on performance, especially properly written audit triggers.
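
    For the audit-column requirement in the original post, a minimal sketch might look something like this (the table, key, and column names are assumptions, not the actual schema):

    -- Stamp who/when on INSERT and UPDATE.  Relies on RECURSIVE_TRIGGERS being OFF
    -- (the default), so the UPDATE below doesn't re-fire this trigger.
    CREATE TRIGGER dbo.trg_MyTable_AuditStamp
    ON dbo.MyTable
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE t
           SET t.ModifiedDate = SYSDATETIME(),
               t.ModifiedBy   = SUSER_SNAME()
          FROM dbo.MyTable AS t
          JOIN inserted     AS i
            ON i.MyTableID = t.MyTableID;   -- assumes a single-column key
    END;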

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks for the positive response, Jeff.

    Since the audit triggers are going to be implemented for multiple tables (over 500), is there a negative impact to adding them as new triggers vs. modifying the existing ones? Adding new triggers would be easier to do across multiple tables. Is there an easy method to implement this?

    Thanks!

  • SQL!$@w$0ME - Sunday, February 18, 2018 6:10 PM

    Thanks for the positive response, Jeff. Since the audit triggers are going to be implemented for multiple tables (over 500), is there a negative impact to adding them as new triggers vs. modifying the existing ones? Adding new triggers would be easier to do across multiple tables. Is there an easy method to implement this? Thanks!

    It's a very rare thing to have to audit that many tables.  Why is there such a need?

    There are built-in tools for doing such things rather than writing homegrown ones.  The answer to the question of "Why is there such a need?" will help define the actual auditing method you might want to use.

    Also, even good, smart people make horrible mistakes when writing their own auditing systems.  For example (and I don't use the word often), it's actually fairly stupid to use the INSERTED logical table in audit triggers because it immediately duplicates the data in the table for no gain (only changes should be audited, because the originals always exist in the original table until changed).  It's also a huge waste of space to do whole-row auditing with, say, a 140-column-wide table when it's usual to only update 4 or 5 columns of such a table.  Of course, column-level audits also bring their own pain when trying to reassemble "what happened" for audit purposes.  Obviously, one size does not fit all.
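
    As a rough illustration of that point (the names here are hypothetical, not a recommended design), an audit trigger can capture only the prior row images:

    -- Capture the old row image from DELETED on UPDATE/DELETE; the current values
    -- still live in the base table, so nothing from INSERTED needs to be copied.
    CREATE TRIGGER dbo.trg_MyTable_Audit
    ON dbo.MyTable
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO audit.MyTable_History (MyTableID, SomeColumn, AuditDate, AuditBy)
        SELECT d.MyTableID, d.SomeColumn, SYSDATETIME(), SUSER_SNAME()
          FROM deleted AS d;
    END;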

    Then there's the problem of isolation and management.  For example, audit tables are frequently the largest tables to be found in any database.  If I were to remove all of the audit tables on my primary server, it would be a 250GB server instead of an insane 2TB server.  You have to consider things like whether copies of the production database will ever be required and, if so, whether it's necessary to also copy all that audit junk.  If not, then you need to store all that audit junk in a different database and you need to plan on partitioning the larger tables, because it's stupid to have to back up several years of audit data that will never, ever change.  Partitioning will allow you to keep the older months in compacted and (possibly) compressed filegroups that are also READ_ONLY so that you don't have to back them up with every bloody full backup.  It will also allow you to do a "get back in business" restore much more quickly thanks to piecemeal restores, etc., etc.
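
    As a sketch of the filegroup side of that (the database and filegroup names are made up):

    -- Once an older month's audit partitions live on their own filegroup, mark it
    -- READ_ONLY so it only needs to be backed up once (requires exclusive access).
    ALTER DATABASE AuditDB MODIFY FILEGROUP Audit_2017 READ_ONLY;

    -- A "get back in business" piecemeal restore can then bring the read/write
    -- filegroups online first and the old audit filegroups later.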

    Again, auditing 500+ tables is a bit of an extraordinary requirement and will require a lot of discipline, or you'll need a 10TB server to house just 1TB of data, with the rest dedicated to audit tables.  This is going to take some planning, but we first need to know why there's such a broad, sweeping audit requirement.

    --Jeff Moden

  • I'll also add that you need to know how long you need to keep the audit data online, on disk, and on tape.  If you can ever delete audit data, then you MUST plan ahead for how to quickly delete the data instead of relegating yourself to deleting millions of rows using a bloody batch delete loop.  Partitioning (either Partitioned Tables or Partitioned Views) can aid tremendously there but if you wait until such a thing is needed, you'll be in a world of hurt.
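
    Roughly, that quick delete ends up looking like this rather than a DELETE loop (the names and partition number are only illustrative):

    -- Switch the oldest partition out to an empty staging table with the same
    -- structure and filegroup, then drop it.  The switch is a metadata-only operation.
    ALTER TABLE audit.MyTable_History
        SWITCH PARTITION 1 TO audit.MyTable_History_Switch;

    DROP TABLE audit.MyTable_History_Switch;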

    Again, there needs to be a very good solid business reason to audit 500+ tables.  If it's just someone's notion to "audit everything" without a real reason, then they need to have a dose of reality explained to them and be made to understand the true costs associated with doing such a thing for both the short term and the long term.

    --Jeff Moden

  • Ah... one more thing.  If anyone suggests the use of "portable/universal CLR triggers", send them from the room because they have no clue what a horrible impact that will have on performance.  I do, because I had to fix such things where I work about 4 years ago.  It was taking about 4 minutes to update just 4 columns of 10,000 rows on a 140-column table.  Everyone was blaming the server, the "pipe", the disks and everything else except for the stupid way the triggers had been built.  Everyone thought, "It's a CLR so it must be fast, right?"  I created code that would generate hardcoded triggers based on the structure of the table and, without optimization, got that evolution down to under 800 milliseconds (which is still slow, but good enough in this case because the rest of their code still sucks).
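
    A very rough sketch of that kind of generator (not the actual code, just the idea of building the trigger text from metadata):

    -- Build a column list for a table from sys.columns; a generator would splice
    -- this into a CREATE TRIGGER template and execute the result per table.
    DECLARE @table sysname = N'dbo.MyTable';   -- hypothetical table name

    SELECT STUFF((
               SELECT ', ' + QUOTENAME(c.name)
                 FROM sys.columns AS c
                WHERE c.object_id = OBJECT_ID(@table)
                ORDER BY c.column_id
                  FOR XML PATH('')
           ), 1, 2, '') AS ColumnList;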

    --Jeff Moden

  • Thanks Jeff.

    The primary function of these audit columns is to help load data from the OLTP database to the data warehouse with SSIS, using the insert/update audit columns. We thought about CDC, but the team finds it technically not feasible (or too much work) for this many tables. Hence the preference for triggers.
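
    For context, the extract we have in mind is roughly this pattern (the table and column names are placeholders):

    -- SSIS source query for an incremental load driven by the audit column;
    -- @LastLoadDate would come from the previous run's stored watermark.
    DECLARE @LastLoadDate datetime2 = '2018-02-18T00:00:00';

    SELECT *
      FROM dbo.MyTable
     WHERE ModifiedDate > @LastLoadDate;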

    thanks!

  • As for avoiding triggers, I think that too many people make the recommendation to avoid them because too many people have written poorly performing queries, and the old wives' tales about such catastrophes abound. Properly written triggers have nearly zero impact on performance, especially properly written audit triggers.

    The triggers that I do write are properly written and have no significant impact on performance.  That said, it remains that too many people use a trigger when another method would be more appropriate.  The two main uses for which I advocate triggers are auditing (as you pointed out) and maintaining low-level data integrity rules that can't be enforced by the standard referential integrity tools such as primary keys, foreign keys, check constraints, etc.  An example of the second one would be a table that has an active/inactive flag and a child table with a similar flag.  If the parent flag is set to inactive, I will use a trigger to set all the child records to inactive.  Conversely, if a child record is reactivated, I will use a trigger to ensure the parent is reactivated if necessary.
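
    Roughly, the parent-to-child part of that looks like this (the table and column names are just for illustration):

    -- When a parent row flips from active to inactive, deactivate its children.
    CREATE TRIGGER dbo.trg_Parent_Deactivate
    ON dbo.Parent
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE c
           SET c.IsActive = 0
          FROM dbo.Child AS c
          JOIN inserted  AS i ON i.ParentID = c.ParentID
          JOIN deleted   AS d ON d.ParentID = i.ParentID
         WHERE i.IsActive = 0
           AND d.IsActive = 1;   -- only parents that actually changed to inactive
    END;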

    I could be convinced that there are other uses.  But I've seen the misapplication of triggers, when there were better ways, cause issues.  So for now, any use of a trigger for a reason other than the above would warrant a careful review.

    //edited to add "by the standard referential integrity tools such as primary keys, foreign keys, check constraints etc." to clarify the idea.

  • RonKyle - Monday, February 19, 2018 7:07 AM

    As for avoiding triggers, I think that too many people make the recommendation to avoid them because too many people have written poorly performing queries, and the old wives' tales about such catastrophes abound. Properly written triggers have nearly zero impact on performance, especially properly written audit triggers.

    The triggers that I do write are properly written and have no significant impact on performance.  That said, it remains that too many people use a trigger when another method would be more appropriate.  The two main uses for which I advocate triggers are auditing (as you pointed out) and maintaining low-level data integrity rules that can't be enforced in the application.  An example of the second one would be a table that has an active/inactive flag and a child table with a similar flag.  If the parent flag is set to inactive, I will use a trigger to set all the child records to inactive.  Conversely, if a child record is reactivated, I will use a trigger to ensure the parent is reactivated if necessary.

    I could be convinced that there are other uses.  But I've seen the misapplication of triggers, when there were better ways, cause issues.  So for now, any use of a trigger for a reason other than the above would warrant a careful review.

    Totally agreed.  The things you mention are good uses of triggers.  As for what other people may use a trigger for, I'm thinking that the subject of this thread may be a problem where another method would be much more appropriate.  I can't spend much time on it until after work tonight.

    --Jeff Moden

  • Thanks Ron!

  • SQL!$@w$0ME - Monday, February 19, 2018 12:15 AM

    Thanks, Jeff. The primary function of these audit columns is to help load data from the OLTP database to the data warehouse with SSIS, using the insert/update audit columns. We thought about CDC, but the team finds it technically not feasible (or too much work) for this many tables. Hence the preference for triggers. Thanks!

    Have you considered using one of the forms of replication? We use transactional replication for this same requirement. You decide which tables and columns are relevant, and if you want a point in time, you can snapshot the subscriber... what's not to like?

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • SQL!$@w$0ME - Monday, February 19, 2018 12:15 AM

    Thanks, Jeff. The primary function of these audit columns is to help load data from the OLTP database to the data warehouse with SSIS, using the insert/update audit columns. We thought about CDC, but the team finds it technically not feasible (or too much work) for this many tables. Hence the preference for triggers. Thanks!

    As I understand things, you are going to create "copies" of changed data in audit tables with triggers, and then use those audit tables to populate a data warehouse with SSIS?

    Is the data in the data warehouse loaded daily? Hourly? Something else? 
    Are there transforms occurring?

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • The primary function of these audit columns is to help load data from the OLTP database to the data warehouse with SSIS, using the insert/update audit columns. We thought about CDC, but the team finds it technically not feasible (or too much work) for this many tables. Hence the preference for triggers.

    If this is the case, you should only need these audit columns on your fact table entities.  Although the entities that form your dimensions could also benefit from them, you can always determine which dimension keys are missing once your main data is in the staging area.  That's how I do it for a system that has added/last-updated columns on the main tables but lacks them on some of the description tables (e.g. client).
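
    For example, once the fact data is staged, finding the missing dimension members is just an outer join (the names below are made up):

    -- Dimension members referenced by the staged facts but not yet in the dimension
    SELECT DISTINCT s.ClientCode
      FROM staging.FactSales AS s
      LEFT JOIN dw.DimClient AS d
        ON d.ClientCode = s.ClientCode
     WHERE d.ClientKey IS NULL;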
