Worst Practice - Triggering External Events

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/column

  • Hi Steve

    You raise a good point with the article.  I believe it all comes down to architecture: a framework that spells out some of the basic principles your applications will follow to satisfy the end requirements.  A classic example is what you described, which can be wrapped into the framework rules for end-user notification of events, in a structured fashion.

    This discussion also turns its head toward end users and application developers, and the roles each should play in "solution definition".  Be very careful not to fall into the classic trap of the end user dictating the solution.  Your example is a case in point, I believe.  Here we have business owners telling us that we need to email xyz when something happens.  What tends to happen in the world of tech is the "no problem, we will write a trigger and..." sort of approach; rather, we should be asking more questions about the whys and wherefores, only to find it's a nasty business process issue that we are simply automating.  Tech can solve most problems these days in some fashion; we just need to make sure it's the right problem.

    Just thinking aloud



    Chris Kempster
    Author of "SQL Server Backup, Recovery & Troubleshooting"
    Author of "SQL Server 2k for the Oracle DBA"

  • Good article. I must admit, though, that I have only used triggers once, to implement cascade deletes (I know, problems, problems) when I converted an Access database. Recently I considered them as a method for recording a log of data updates.

    This is something I will bear in mind for the future; it's great to get info like this, as it makes me stop and think that one solution is not the only one.

    I do wonder, given that many people use triggers and I don't, whether there is something wrong in the way I design databases and build systems, or maybe I'm just lucky and don't have large, complex systems.

    Far away is close at hand in the images of elsewhere.

  • Ok... you've convinced me... I was implementing a trigger that would email if certain rules were met.

    I'm not saying that sending notifications to users (or running some other process) on data change is a bad idea. Although I think this is not the best way to handle things, that's a separate discussion. I'm focusing on the use of a trigger to drive these external events.

    I hope you're writing that discussion soon... How else would you do it without getting into a tangled mess?  (In this case, I filter the INSERTED rows and do an EXEC(sqlstatement) based on matches in a trig_rules table.  It seems it would be hard to pass off a copy of INSERTED that would be "safe" if the trigger were to fire multiple times quickly.)

    Does the "if it fails, it gets rolled back" apply to an AFTER trigger?


  • I actually hadn't planned on the followup for "how to do this", but I've gotten a bunch of responses, so maybe I should.

    I've got a couple of systems that I've used, and I'll tackle one shortly. Basically it involves "staging" the data that needs to be sent out and then using the staging area as a basis for notifications.
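
    A minimal sketch of that staging idea (all table and column names here are hypothetical, not from the article) is to have the trigger do nothing but a fast local INSERT into a queue table; the external work happens later, outside the transaction:

    ```sql
    -- Hypothetical staging table: the trigger only records what changed.
    CREATE TABLE NotificationQueue (
        QueueID   int IDENTITY(1,1) PRIMARY KEY,
        OrderID   int      NOT NULL,
        QueuedAt  datetime NOT NULL DEFAULT GETDATE()
    )
    GO

    -- The trigger stays inside the transaction but does only a quick,
    -- local INSERT; no mail, no linked servers, no external calls.
    CREATE TRIGGER trg_Orders_Queue ON Orders
    AFTER INSERT
    AS
        INSERT INTO NotificationQueue (OrderID)
        SELECT OrderID FROM inserted
    GO
    ```

    A separate scheduled process then reads NotificationQueue and sends the notifications, so a mail failure can never roll back the original insert.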

  • Great article.  Writing a follow-up about how to "stage" the data and then schedule the external processing would be awesome.  I've implemented it here, but would love to hear about someone else's methods.

    Another thing not to do in triggers: don't use linked servers!  I would consider linked SQL Servers to be external, as they require a connection that can fail for reasons outside of SQL's control.  Instead, I use a staging table that gets moved to the linked server as a batch.  I've tested this, and performance is bad enough when the network is fast; I'd hate to see it when the network is bogged down.
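
    One way to sketch that batch move (StagingTable, LinkedSrv, RemoteDB, and the columns are all placeholder names, not the poster's actual schema) is to stamp the rows you intend to move first, so rows that arrive mid-move are left alone for the next run:

    ```sql
    -- Hypothetical scheduled batch move to a linked server.
    DECLARE @Batch int
    SELECT @Batch = COALESCE(MAX(BatchID), 0) + 1 FROM StagingTable

    -- Stamp the rows this run will handle.
    UPDATE StagingTable SET BatchID = @Batch WHERE BatchID IS NULL

    -- Push the whole batch across in one statement...
    INSERT INTO LinkedSrv.RemoteDB.dbo.TargetTable (ID, Payload)
    SELECT ID, Payload FROM StagingTable WHERE BatchID = @Batch

    -- ...then clear exactly those rows locally.
    DELETE FROM StagingTable WHERE BatchID = @Batch
    ```

    If the network call fails, the staging rows simply stay put and the next scheduled run retries them; no user transaction is ever held hostage by the link.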

    An UNPROVEN problem I'm also having with triggers is when writing from one database to another on the same server.  For some reason I'm seeing a noticeable performance degradation when I update a table in a different database.  I'm not sure whether it's environment related, but I'm checking it out.


    Signature is NULL

  • If I have a database-dependent external process, then I usually use a trigger to insert records into a logging table or tables.

    For example, a new page gets published in a CMS so all subscribers should get an e-mail.

    The trigger captures the publishing process and writes the page id to my logging table.  End of trigger.

    A scheduled external process does a SELECT TOP n from my logging table and sends batches of e-mail.

    This achieves two things.

    • The logging table acts as a giant out-box.
    • The external process with its SELECT TOP n is set to send e-mail in batches that are small enough to be digested by the mail server.
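
    A sketch of that scheduled sender (PageLog, its columns, and the addresses are placeholder names, not the poster's actual schema) might look like the following, using SQL 2K's xp_sendmail:

    ```sql
    -- Hypothetical batch sender for the logging table; run it from a
    -- scheduled SQL job so the mail server only sees small batches.
    CREATE PROCEDURE SendPendingPageEmails
        @BatchSize int = 50
    AS
        SET NOCOUNT ON
        DECLARE @PageID int

        WHILE @BatchSize > 0
           AND EXISTS (SELECT * FROM PageLog WHERE Sent = 0)
        BEGIN
            SELECT TOP 1 @PageID = PageID
            FROM PageLog
            WHERE Sent = 0
            ORDER BY LoggedAt

            EXEC master.dbo.xp_sendmail
                @recipients = 'subscribers@example.com',
                @subject    = 'New page published',
                @message    = 'A new page is available on the site.'

            -- Mark the row as handled so the next run skips it.
            UPDATE PageLog SET Sent = 1 WHERE PageID = @PageID

            SET @BatchSize = @BatchSize - 1
        END
    GO
    ```

    The @BatchSize cap is what keeps the "out-box" from flooding the mail server in one go.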
  • "A scheduled external process does a SELECT TOP n from my logging table and sends batches of e-mail."

    What does this?  Do you use a timed SQL job, or do you have an external program (a service?) running that queries the table every n seconds?



    PS.  Does all this staging stuff still make sense if the trigger is an after trigger?

    It looks like I'll need to run a report-generating program using xp_cmdshell (doh!), and I don't want it fubaring the database...

  • You can use whatever scheduling mechanism you want, really; all it has to do is make a database connection and execute a proc.  Personally, I usually use SQL jobs and DTS packages, as they're built in and pretty easy to use.

    It's usually a process that "wakes up" every X amount of time and checks for new records to be processed (in the table being updated by the trigger).  For example, you could send an email for each row, update a web page, whatever (DTS is a good way to do these sorts of things if you're not a SQL guru).

    This makes sense only in after triggers, really.  If the process needs to occur before the table with the trigger is updated then this won't work.  Rarely would an external process actually have to occur BEFORE the insert or update occurs.

    Signature is NULL

  • [From the article]

    Which rolls back your update. Which may result in an error to your users or worse, removing a bunch of data they updated and not correctly telling them that the transaction was rolled back. Especially if it was a significant amount of data entry. Or the person is the VP of Sales

    "This makes sense only in after triggers, really.  If the process needs to occur before the table with the trigger is updated then this won't work.  Rarely would an external process actually have to occur BEFORE the insert or update occurs."

    Can an AFTER trigger roll back data if it fails?  I thought the data was "there" by the time the trigger fires...  If it can't alter the data, is there any problem with this "Worst Practice"?


  • I was hoping to see more discussion on this issue, as I have several instances where real-time e-mail notification is required.  Like Grasshopper, I assumed that triggers only fired after the table update was complete.  I have implemented the xp_sendmail solution within the trigger for now, and I haven't had a problem after probably 1,000 e-mails.  Does someone have a better way when real-time is required?

    Could someone elaborate on the staging method, and can it be done in real time rather than in batches?  Back in the old days, I used an HP3000, which had message files.  There was a resident process that continuously read these message files and executed the required action on each record.  It was a destructive read, so it acted much like a trigger does.

    Just trying to learn a better way....


  • Great article Steve, and I put my hand up as someone who has, in the past, added a feature which sent an email based upon specific inserts, deletes and updates to specific tables... until there were issues with mail and it all went pear-shaped...

    What I did in the end was still send out emails, but not inside the transaction, which is the key point of your article. Basically, I created another 'email monitoring' table: if anything required an email to be sent, the original trigger instead inserted a row into that table with the details required to send the email. Another process, using SQL Agent, monitored for rows in that table and sent the emails accordingly, meaning that the email function was completely separate from the original transaction.

    Once the email was successfully sent, the row was deleted so that only one email went out. There's nothing to stop you using a flag instead, if you need to keep track of how many emails are being sent, etc. Likewise, my email-sending agent only checked the table every quarter of an hour, but there's little to stop you checking more frequently, depending on your circumstances.

    Nice one Steve (yet again)



  • "Could someone elaborate on the staging method and can this be done real-time rather than batch?"

    It can, but it would not be something from within SQL 2K.  As far as I know, there is nothing in SQL Server that works like C# "delegates" or Java "events".

    And, technically, "real-time" is impossible.  There is always latency, even if it is milliseconds.

    For simplicity's sake you can still do it all in SQL.

    First, create a proc that:

    1. Checks whether a record exists in the email staging table.

     A. YES

      I.   Send an email.

      II.  Archive the record.

      III. Go back to step 1.

     B. NO

      I.   Do nothing.

    Then figure out the maximum allowable latency (3 seconds, say) and schedule the proc to run once every 3 seconds (as a SQL job).  Make sure the proc is highly optimized and you'll be fine.  Connection overhead could be a problem here, but I believe SQL jobs do some connection pooling.
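
    The loop described above could be sketched roughly as follows (EmailStaging, EmailArchive, and their columns are placeholder names invented for illustration):

    ```sql
    -- Hypothetical proc implementing the check/send/archive loop.
    CREATE PROCEDURE ProcessEmailStaging
    AS
        SET NOCOUNT ON
        DECLARE @StageID int, @Recipient varchar(255)

        -- A. While a record exists: send, archive, and loop.
        -- B. No record: the loop simply never runs ("do nothing").
        WHILE EXISTS (SELECT * FROM EmailStaging)
        BEGIN
            SELECT TOP 1 @StageID = StageID, @Recipient = Recipient
            FROM EmailStaging
            ORDER BY StageID

            EXEC master.dbo.xp_sendmail
                @recipients = @Recipient,
                @subject    = 'Notification'

            -- Archive the record, then remove it from staging.
            INSERT INTO EmailArchive (StageID, Recipient, SentAt)
            VALUES (@StageID, @Recipient, GETDATE())

            DELETE FROM EmailStaging WHERE StageID = @StageID
        END
    GO
    ```

    Keeping the proc this simple is what makes a 3-second schedule viable.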

    Another option is to set up a controller proc that does a WAITFOR step and executes the actual proc every 3 seconds.  However, this is prone to failure if the proc dies or its connection is killed, so you need a monitor for the monitor, which is rather messy.
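
    That controller could be as small as the loop below (the proc name is a placeholder for whatever drains your staging table); the fragility mentioned above is visible here, since nothing restarts the loop if its connection dies:

    ```sql
    -- Hypothetical controller loop using WAITFOR DELAY.
    -- If this connection is killed, the loop silently stops.
    WHILE 1 = 1
    BEGIN
        EXEC ProcessEmailStaging     -- placeholder staging-drain proc
        WAITFOR DELAY '00:00:03'     -- 3-second maximum latency
    END
    ```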

    Signature is NULL

  • Steve,

    I am wondering if a lookup to another table in the trigger would be the same as sending an email. I am updating another table from inside the trigger, but in order to complete that, I need to look up the schedule of the record being changed. I am setting a variable equal to that lookup (SELECT Sched_No FROM table). If the schedule number is needed, do you feel this is still a bad practice in the trigger?

    If I write it as an AFTER UPDATE trigger, would that help make sure the data does not get rolled back? Any help would be appreciated.
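
    For what it's worth, that kind of lookup can be done set-based against the `inserted` pseudo-table rather than through a single variable, which only works when exactly one row changes. A sketch, with every table and column name here a hypothetical stand-in for the poster's actual schema:

    ```sql
    -- Hypothetical AFTER UPDATE trigger: join inserted to the lookup
    -- table so multi-row updates are handled correctly, instead of
    -- SELECTing Sched_No into a single variable.
    CREATE TRIGGER trg_Jobs_SyncSched ON Jobs
    AFTER UPDATE
    AS
        UPDATE o
        SET    o.Sched_No = s.Sched_No
        FROM   OtherTable o
        JOIN   inserted  i ON i.JobID = o.JobID
        JOIN   Schedules s ON s.JobID = i.JobID
    GO
    ```

    Unlike sending mail, this stays entirely inside the database engine and the transaction, so it doesn't have the external-dependency failure mode the article warns about.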


