I generally try to avoid triggers when I can. The ugliest part of triggers is that they fire for everyone, all the time, which can be a pain in administrative situations. If auditing is really the driving force, and if you can identify the end user connecting to SQL Server (which presumably you could if triggers were an option), then I would recommend finding a program that monitors the DB log directly. That eliminates the use of triggers and still provides your audit trail.
Another aspect that is often overlooked (maybe because it's not a requirement for a majority of people...) is auditing SELECT statements. Not necessarily an easy task, but I have run into read/SELECT auditing requirements in the past. But anyway...
Arun Marathe gives some very good info on query plan recompilations at http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx. This information is quoted in part from that article:
Trigger recompilations happen when SQL Server determines that a trigger execution will result in the 'inserted' or 'deleted' tables having "significantly different" (or sufficiently different) rowcounts. In SQL 2000, a "significant difference" is determined by the formula:
ABS(log10(n+5) - log10(m+5)) >= 1
Where n is the rowcount of the 'inserted' or 'deleted' table in the cached query plan, and m is the rowcount for the current trigger execution.
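To make the threshold concrete, the SQL 2000 test can be sketched as a small Python predicate (the function name is mine; SQL Server doesn't expose anything like this, it applies the check internally):

```python
import math

def sql2000_recompile(n, m):
    """True if SQL 2000 would recompile the cached trigger plan.

    n: rowcount of the 'inserted'/'deleted' table in the cached plan
    m: rowcount for the current trigger execution
    """
    return abs(math.log10(n + 5) - math.log10(m + 5)) >= 1

# Plan cached for 100 rows: a 6-row firing is "close enough" to stay
# cached, while a 2,000-row firing crosses the threshold and recompiles.
print(sql2000_recompile(100, 6))     # stays cached (False)
print(sql2000_recompile(100, 2000))  # recompiles (True)
```

Note the `+ 5` inside both logs; it keeps the formula defined even when one of the rowcounts is 0.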
For SQL 2005, the formula was changed slightly. If you're going to end up with more rows than are in the cached query plan after trigger execution (m > n):
ABS(log10(n) - log10(m)) > 1
If you'll end up with the same number of rows or fewer after trigger execution (m <= n):
ABS(log10(n) - log10(m)) > 2.1
So, for SQL 2000, if you have a cached trigger query execution plan with a rowcount of 100, the plan will (theoretically) remain cached until you fire an instance of the trigger that generates a rowcount of 5 or fewer, or of 1,045 or more (plugging n = 100 into the formula, the difference reaches 1 at m = 5 and at m = 1,045).
For SQL 2005, a cached trigger query execution plan with a rowcount of 100 will remain cached until you fire an instance of the trigger that generates a rowcount of 1,001 or more.
At the opposite end of the scale, on SQL 2005, if your cached trigger query execution plan has a rowcount of 1,000, it will remain cached until you fire an instance of the trigger that generates a rowcount of 7 or less.
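The SQL 2005 thresholds above can be checked with a similar sketch (again, the function name is my own shorthand for the internal check):

```python
import math

def sql2005_recompile(n, m):
    """True if SQL 2005 would recompile the cached trigger plan.

    n: rowcount in the cached plan; m: rowcount for this execution.
    Assumes both rowcounts are at least 1 (no +5 fudge in the 2005
    formula, so log10(0) would blow up).
    """
    if m > n:   # rowcount grew
        return abs(math.log10(n) - math.log10(m)) > 1
    else:       # rowcount shrank or stayed the same
        return abs(math.log10(n) - math.log10(m)) > 2.1

# Plan cached at 100 rows: 999 rows stays cached, 1,001 recompiles.
print(sql2005_recompile(100, 999), sql2005_recompile(100, 1001))
# Plan cached at 1,000 rows: 8 rows stays cached, 7 recompiles.
print(sql2005_recompile(1000, 8), sql2005_recompile(1000, 7))
```

The asymmetric thresholds (1 vs. 2.1) mean a plan tolerates a much larger proportional drop in rowcount than a proportional increase before being thrown out.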
In the conclusion, the author mentions the number of db objects required to support auditing in the latter two methods (900 and 450 objects, respectively, for a 150-table db). The implication seems to be that fewer objects = better (performance? maintenance?).
This kind of thinking, however, usually leads to bad db design... "fewer tables must be better, so let's use a single 'lookup' table for all our codes", or "let's denormalize so we don't have to do all those joins".
When making design decisions, be sure you know exactly the performance metrics you're making tradeoffs for. "Number of db objects" should be at or near the bottom of any such list.
If that level of auditing were required, then I would recommend using a program that sniffs the actual transaction log. That avoids both the ugliness of triggers and the insecurity of developers with ALTER PROCEDURE permissions having access to the audit code. Of course, in order to achieve accurate data, users would need direct access to the tables, and that creates a host of other security issues. Further, if you cannot trust your DBAs/database developers with ALTER TRIGGER but can trust them with ALTER PROCEDURE, then I think there are other issues at hand.
If you do not want users to have direct access to your tables, then I submit that you can't use triggers, as the user effecting the change will not be available, and that leaves putting the audit code in the stored procedures.
> If you do not want users to have direct access to your tables, then I submit that you can't use triggers, as the user effecting the change will not be available, and that leaves putting the audit code in the stored procedures.
Actually, I realize I was not clear here. If you have connection pooling amongst users, and the users are not using Windows Auth to connect directly to SQL Server, *then* the user would not be available. If users are using Windows Auth but going through stored procs (and not going through a proxy user such as COM+), then of course the username would be available.