Nice article, thanks!
I've made that mistake before and seen others make it. It can be tricky to detect.
A colleague told me of a nifty trick today.
When releasing a trigger into the wild, ensure all logic is encapsulated in a TRY / CATCH block.
In the CATCH block, call sp_trace_generateevent (a user-configurable event) and monitor for problems through Profiler or Extended Events.
Handy information can be included, e.g. where in the trigger the error occurred and what the error was.
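A minimal sketch of what that pattern might look like; the table and trigger names here are made up for illustration, and sp_trace_generateevent requires ALTER TRACE permission:

```sql
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        -- The trigger's actual logic, e.g. writing to a hypothetical audit table.
        INSERT INTO dbo.OrdersAudit (OrderID, AuditedAt)
        SELECT OrderID, SYSUTCDATETIME() FROM inserted;
    END TRY
    BEGIN CATCH
        -- Build a short description: where the error occurred and what it was.
        -- @userinfo is limited to 128 characters, hence the LEFT().
        DECLARE @msg nvarchar(128) =
            LEFT(N'trg_Orders_Audit, line '
                 + CAST(ERROR_LINE() AS nvarchar(10)) + N': '
                 + ERROR_MESSAGE(), 128);
        -- Event ID 82 = UserConfigurable:0; watch for it in Profiler
        -- or an Extended Events session.
        EXEC sp_trace_generateevent @eventid = 82, @userinfo = @msg;
    END CATCH;
END;
```

Note that swallowing the error in the CATCH block means the triggering DML can succeed even when the trigger's logic fails, which may or may not be what you want.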
Also, when altering a trigger, keep the original trigger code handy so you can roll it back if needed (assuming the original trigger wasn't causing problems).
Hi Dennis. There are a few points here that I am unclear on:
1) Wouldn't encapsulating all modifications (i.e. across all, or almost all, objects) in TRY / CATCH be preferred? Why make it specific to triggers?
2) What do you mean by "releasing into the wild"?
3) What is nifty about raising a user-configurable event? Don't you have to be running a trace / XE session at that time in order to see it? Or am I misunderstanding how that works? Wouldn't simply logging the error to an ErrorLog table along with ERROR_NUMBER(), ERROR_LINE(), ERROR_MESSAGE(), etc. not only give you more info, but also allow for seeing errors that occur when nobody is watching?
4) And regarding the advice to keep the old version of the trigger around when altering, isn't this the assumed practice for all objects anyway? Are you suggesting that the DB objects (tables, indexes, constraints, procs, functions, views, etc) are not all in a version control system already? If not then they certainly should be, and not just the triggers.
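For what it's worth, the ErrorLog alternative I mean in point 3 could look something like this; all names here are hypothetical:

```sql
CREATE TABLE dbo.ErrorLog
(
    ErrorLogID   int IDENTITY(1, 1) PRIMARY KEY,
    LoggedAt     datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    ObjectName   sysname NULL,
    ErrorNumber  int NULL,
    ErrorLine    int NULL,
    ErrorMessage nvarchar(4000) NULL
);
GO
-- Fragment for the trigger's CATCH block:
BEGIN CATCH
    -- If the error doomed the transaction (XACT_STATE() = -1), we have to
    -- roll back before the log row can be written.
    IF XACT_STATE() = -1 ROLLBACK TRANSACTION;
    INSERT INTO dbo.ErrorLog (ObjectName, ErrorNumber, ErrorLine, ErrorMessage)
    VALUES (N'trg_Orders_Audit', ERROR_NUMBER(), ERROR_LINE(), ERROR_MESSAGE());
END CATCH;
```

Rows land in a table regardless of whether anyone is running a trace, so errors that happen overnight are still there in the morning.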