Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/gvijayakumar/storedprocedurevstriggers.asp.
This article is practical only if you are interested in performance. In most applications, when you implement audit trails, you would capture the following:
1) Why the data was modified
2) When the data was modified
3) Who modified the data.
If somebody has access to the data via Enterprise Manager, he/she will be able to manipulate it. That action will not leave any traceability or audit trail. If you are dealing with very sensitive data, then you have to capture every action taken against that data, based on the business rules, regardless of the tool used (QA - Query Analyzer, EM - Enterprise Manager, or the application).
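A minimal sketch of an audit table capturing the who/when/why above (all table and column names here are hypothetical, just for illustration):

```sql
-- Sketch only: an audit table recording who changed the data,
-- when it was changed, and why, alongside what changed.
CREATE TABLE EmployeeAudit (
    AuditID    int IDENTITY(1,1) PRIMARY KEY,
    EmpID      int          NOT NULL,                          -- key of the audited row
    ChangedBy  sysname      NOT NULL DEFAULT SUSER_SNAME(),    -- who
    ChangedAt  datetime     NOT NULL DEFAULT GETDATE(),        -- when
    Reason     varchar(200) NULL,                              -- why (supplied by the app)
    Action     char(1)      NOT NULL                           -- I/U/D
)
```

The "why" column is the part neither a trigger nor a plain stored procedure can capture on its own; the application has to supply it.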
Hence, I doubt the method you suggested/proposed is practical.
One Orange Chip
The trigger method seems to me to be the most reliable method for capturing audit data. I have used "INSTEAD OF" triggers to maintain "last modified by" fields to prevent users from overriding the value even with Enterprise Manager (unless they have the knowledge & permissions to disable the trigger).
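That "INSTEAD OF" approach could look something like this (table and column names are hypothetical):

```sql
-- Sketch: an INSTEAD OF trigger that overwrites whatever the caller
-- supplied for LastModifiedBy with the actual login, so even an
-- Enterprise Manager edit cannot forge the value.
CREATE TRIGGER trg_Emp_SetModifiedBy ON Employee
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE e
    SET    e.EmpName        = i.EmpName,
           e.LastModifiedBy = SUSER_SNAME(),  -- ignore the client's value
           e.LastModifiedAt = GETDATE()
    FROM   Employee e
    JOIN   inserted i ON i.EmpID = e.EmpID
END
```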
The stored-procedure method works as long as you make sure developers can't or won't write anything that updates the table except through the stored procedure.
The article shows some good examples of evaluating performance by examining the syscacheobjects table. However, none of this examination appeared to apply to the topic. You called either a trigger, a stored procedure, or both, four times. Am I missing something?
Why would your method III use an insert-select for the audit table instead of the original parameters, like this?
Insert Into DummyTable1_Audit Values (@EmpID, Replicate (@EmpName,20), 'N')
Another gripe would be your use of "SELECT *" and INSERT with no field list. Any change to the structure of the table would kill both your trigger and stored procedure. This has nothing to do with trigger vs. stored procedures, but example code that might be seen by fledgling DBAs should not teach them bad habits.
There are A LOT of things left out here:
1. If your stored procedures are supposed to carry out the audit-trail capabilities, you will be MIXING audit logic with business logic, which in my opinion is a BAD thing: it complicates the procedures and makes them harder to modify later.
2. A trigger fires ONCE per STATEMENT. With that in mind, if you need to load several records at a time, stored procedures will be a bad idea from the performance standpoint compared with a multi-row statement like:
INSERT MyTable (fld1, fld2, ...)
SELECT fld1, fld2, ...
UNION ALL SELECT fld1, fld2, ...
UNION ALL SELECT ...
3. In triggers it is a lot easier to tune down to field-level changes, in case you decide that auditing should be performed only if certain column(s) change, thanks to the built-in UPDATE() facility.
4. The audit table does not necessarily have to be an identical mirrored table. You could use something like:
Create Table AuditTrail (id int, TransactionType char(1), TableName varchar(10), PKey varchar(10), FieldName varchar(40), OldValue varchar(200), NewValue varchar(200))
simplifying A LOT the storage and flexibility of the audited solution. Try that with procedures mixing the logic and getting fairly good results.
5. If any of the tables has CASCADE DELETE ON, the SP will be BLIND to child-table modifications; if you try to compensate for that, it becomes increasingly difficult to code as the hierarchy gets deeper. That won't happen with triggers.
6. Scriptability of audit logic can be achieved with both, but I have found it fairly common that some procedures are tuned independently, and that is another reason to keep audit logic separate.
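Points 3 and 4 above can be sketched together in one trigger (all names hypothetical, and the generic AuditTrail table is the one outlined in point 4):

```sql
-- Sketch: audit only when the EmpName column actually changes,
-- writing old/new values to a generic audit table rather than
-- a mirrored copy of the base table.
CREATE TRIGGER trg_Emp_Audit ON Employee
AFTER UPDATE
AS
BEGIN
    IF UPDATE(EmpName)   -- field-level check via the built-in UPDATE()
        INSERT AuditTrail (TransactionType, TableName, PKey, FieldName, OldValue, NewValue)
        SELECT 'U', 'Employee', CAST(d.EmpID AS varchar(10)), 'EmpName',
               d.EmpName, i.EmpName
        FROM   deleted d
        JOIN   inserted i ON i.EmpID = d.EmpID
        WHERE  d.EmpName <> i.EmpName   -- audit only rows that really changed
END
```

Note that because the trigger fires once per statement, a multi-row UPDATE produces all its audit rows in a single INSERT...SELECT here.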
All in all, the explanation of syscacheobjects was well presented, and it could be a nice starting point for those who want to learn about performance. The intention of my explanation here is just to demonstrate that there is NOT a clear-cut case for one approach or the other, as the article seems to suggest, and to shed some light on things that were not even mentioned in favor of the chosen solution.
Mr or Mrs. 500
As for the article, I was expecting to see some insight on runtime performance using triggers vs. SPs. For instance, from my own experience I found that using INSERTED from triggers takes a lot of time, and I assume that using SPs for the audit work would be better for performance. I've never had the time to do extensive research on this subject, and I guess I was expecting to see some in this article. Pity..
As far as the recommendations go, I must agree with noeld:
Either way, I was glad to see the article and it was interesting to read - learned a lot about caching etc., which is always good to know.
It's good information. Thanks!!
Triggers serve best as methods to maintain low-level data integrity that cannot be enforced through standard relational methods. The inactivation of a major site, for example, should via trigger cascade the inactivation of all the subsites. Audit records are a form of this low-level integrity: the changes need to cascade into some permanent record of the change.
Stored procedures perform data actions. They can hold business logic, but ideally that is performed in a middle tier. They should not normally be used the way a trigger is.
We would never have had the debate. Unless someone could show significant performance reasons to do otherwise, the debate described at the beginning of the article would not have gone on long here: we audit via trigger.
That it's also a good idea for all the reasons the previous posters have mentioned is just gravy on the overall conclusion.
If I remember correctly doesn't an update or insert trigger force all data updates to be deferred updates rather than in place updates, thus contributing to page splits and fragmentation?
I've also always found triggers slow an OLTP application badly - well, they did the last time I saw them implemented (briefly - they had to be removed to return the application to usability).
Have to be honest I avoid triggers unless there is absolutely no alternative.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
This reply seems to be MUCH more valuable than the original topic...
Ondrej Pilar got my vote on his comments.
Right there with Babe
Something that has not been mentioned in all of this is the method by which applications are connecting to the SQL Server. If they are connecting through direct, trusted connections, then the trigger will have access to the username making the DML statement and auditing via the trigger will work fine. However, if applications connect through an abstraction layer such as COM+ running under its own user or a website, then the trigger will only have access to that middle-tier username (IUSR..., your COM+ user etc.) and it will make the audit data meaningless. In this last scenario, you have to pass the username from the presentation tier down to the database and in that scenario stored procedures make more sense.
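In that last scenario, the pattern could look like this (procedure, table, and parameter names are hypothetical): the presentation tier knows the real user and hands the name down as an explicit parameter.

```sql
-- Sketch: when the real user is hidden behind a middle tier
-- (COM+, IUSR..., etc.), SUSER_SNAME() in a trigger would only see
-- the service account, so the SP takes the username as a parameter.
CREATE PROCEDURE spUpdateEmployee
    @EmpID    int,
    @EmpName  varchar(50),
    @UserName varchar(50)   -- the end user, supplied by the presentation tier
AS
BEGIN
    UPDATE Employee SET EmpName = @EmpName WHERE EmpID = @EmpID

    INSERT EmployeeAudit (EmpID, EmpName, ModifiedBy, ModifiedAt)
    VALUES (@EmpID, @EmpName, @UserName, GETDATE())
END
```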
I generally try to avoid triggers when I can. The ugliest part of triggers is that they fire for everyone all the time which can be a pain in administrative situations. If auditing is really the driving force and if you have access to the end user connecting to SQL (which presumably you would if triggers were an option) then I would recommend finding a program that monitors the DB log directly. That eliminates the use of triggers and provides your audit trail.
Another aspect that is often overlooked (maybe because it's not a requirement for a majority of people...) is auditing SELECT statements. Not necessarily an easy task, but I have run into read/SELECT auditing requirements in the past. But anyway...
Arun Marathe gives some very good info on query plan recompilations at http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx. This information is quoted in part from that article:
Trigger recompilations happen when SQL Server determines that a trigger execution will result in the 'inserted' or 'deleted' tables having "significantly different" (or sufficiently different) rowcounts. In SQL 2000, a "significant difference" is determined by the formula:
ABS(log10(n+5) - log10(m+5)) >= 1
Where n is the rowcount of the 'inserted' or 'deleted' table in the cached query plan, and m is the rowcount for the current trigger execution.
For SQL 2005, the formula was changed slightly. If you're going to end up with more rows than are in the cached query plan after trigger execution (m > n):
ABS(log10(n) - log10(m)) > 1
If you'll end up with the same or less rows after trigger execution (m <= n):
ABS(log10(n) - log10(m)) > 2.1
So, for SQL 2000, if you have a cached trigger query execution plan with a rowcount of 100 (so log10(100+5) ≈ 2.02), the plan will (theoretically) remain cached until you fire an instance of the trigger that generates a rowcount of 5 or fewer, or 1,045 or more.
For SQL 2005, a cached trigger query execution plan with a rowcount of 100 will remain cached until you fire an instance of the trigger that generates a rowcount of 1,001 or more.
At the opposite end of the scale, on SQL 2005, if your cached trigger query execution plan has a rowcount of 1,000, it will remain cached until you fire an instance of the trigger that generates a rowcount of 7 or less.
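The SQL 2005 arithmetic above can be checked directly with LOG10 (the variable names here are just for illustration):

```sql
-- Sketch: evaluate the SQL 2005 trigger-recompile test for a cached
-- rowcount @n and a new execution's rowcount @m.
DECLARE @n float, @m float
SET @n = 1000   -- rowcount baked into the cached plan
SET @m = 7      -- rowcount of the current trigger execution

SELECT CASE
         WHEN @m >  @n AND ABS(LOG10(@n) - LOG10(@m)) > 1   THEN 'recompile'
         WHEN @m <= @n AND ABS(LOG10(@n) - LOG10(@m)) > 2.1 THEN 'recompile'
         ELSE 'plan stays cached'
       END AS outcome
-- With @n = 1000, @m = 7: |3 - log10(7)| ≈ 2.15 > 2.1, so 'recompile',
-- matching the "7 or less" figure above.
```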
Perhaps I just missed it in the article (I admit I just skimmed it), but no one has mentioned using AFTER triggers for auditing data. e.g.:
CREATE TRIGGER AuditStuff ON DataTable AFTER INSERT, UPDATE AS
<Do stuff here>
While you will still have the cost of running a trigger every time a record is changed, an AFTER trigger at least doesn't stand between the statement and the data the way an INSTEAD OF trigger does - though note it still runs inside the same transaction.
Overall, I would tend towards using triggers to audit data, as they will catch updates which are not done through the client application, e.g. ad hoc changes via Management Studio. In the end, auditing is going to slow a system down - it's an extra write per transaction - so if it is slowing things down, consider either changing what you are auditing or throwing hardware at it (yes, yes, throwing money at a problem is not usually a good idea).
I think the conclusion that case 3 is preferable is flawed for an architectural reason. The number one rule of OO design is "Program against interfaces, not objects". This approach should be applied to database design as well: views are "interfaces" and tables are "objects". Without triggers this would not be possible.
Another downside to using stored procedures instead of views & triggers is that they do not provide enough flexibility. If I were to create a view with an INSTEAD OF trigger which does the actual insert, I would be able to use it for individual row inserts or multi-row inserts. spAddDummyTable1 allows individual inserts only! If you were required to insert multiple rows, you'd have to write another SP or call spAddDummyTable1 once per row. The former approach duplicates functionality, which is bad, and the latter causes really bad performance.
Let me offer a solution that uses OO approach while maintaining flexibility and performance:
-- the following two views are INTERFACES to code against
CREATE VIEW IDummy AS
SELECT * FROM DummyTable1
CREATE VIEW IDummyAudit AS
SELECT * FROM DummyTable1_Audit
-- the following triggers implement actual inserts
CREATE TRIGGER IDummy_insert ON IDummy INSTEAD OF INSERT AS
BEGIN
    INSERT DummyTable1 SELECT * FROM inserted
    INSERT IDummyAudit SELECT *, 'N' FROM inserted
END
CREATE TRIGGER IDummyAudit_insert ON IDummyAudit INSTEAD OF INSERT AS
    INSERT DummyTable1_Audit SELECT * FROM inserted
Now, you can code against interfaces(IDummy and IDummyAudit) instead of actual table objects while hiding functional code in the trigger. You can also use INSERT's and SP's, single or multi-row inserts freely.
NOTE: For the purposes of brevity, I've used * instead of explicit column listing. It's a good coding practice to list columns explicitly.