Stored Procedure vs Triggers
Posted Friday, March 31, 2006 8:03 AM
SSC Journeyman

Group: General Forum Members
Last Login: Thursday, July 31, 2014 2:41 PM
Points: 89, Visits: 480
Something that has not been mentioned in all of this is the method by which applications connect to the SQL Server. If they connect through direct, trusted connections, then the trigger will have access to the username issuing the DML statement and auditing via the trigger will work fine. However, if applications connect through an abstraction layer such as COM+ running under its own user, or through a website, then the trigger will only see that middle-tier username (IUSR..., your COM+ user, etc.), which makes the audit data meaningless. In that case, you have to pass the username from the presentation tier down to the database, and there stored procedures make more sense.
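
To illustrate that last point, here is a minimal sketch of the stored-procedure approach with the application user passed down as a parameter; all of the object and column names below are invented for the example and don't come from the article:

-- Minimal sketch (illustrative names): the middle tier passes the real user down,
-- because SUSER_SNAME() inside the database would only return the proxy/pool login.
CREATE PROCEDURE dbo.spUpdateCustomerPhone
    @CustomerID  int,
    @Phone       varchar(20),
    @AppUserName varchar(128)   -- supplied by the presentation tier
AS
BEGIN
    UPDATE dbo.Customer
    SET    Phone = @Phone
    WHERE  CustomerID = @CustomerID;

    INSERT dbo.Customer_Audit (CustomerID, Phone, ChangedBy, ChangedAt)
    VALUES (@CustomerID, @Phone, @AppUserName, GETDATE());
END
GO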

I generally try to avoid triggers when I can. The ugliest part of triggers is that they fire for everyone, all the time, which can be a pain in administrative situations. If auditing is really the driving force, and if you have access to the end user connecting to SQL (which presumably you would if triggers were an option), then I would recommend finding a program that monitors the DB log directly. That eliminates the use of triggers and still provides your audit trail.

Post #270175
Posted Friday, March 31, 2006 8:03 AM
Forum Newbie

Group: General Forum Members
Last Login: Friday, April 8, 2011 3:18 PM
Points: 2, Visits: 13
Ondrej Pilar got my vote on his comments.
Post #270176
Posted Friday, March 31, 2006 8:22 AM
Ten Centuries
Group: General Forum Members
Last Login: Monday, April 14, 2014 4:18 PM
Points: 1,276, Visits: 1,132

Another aspect that is often overlooked (maybe because it's not a requirement for a majority of people...) is auditing SELECT statements.  Not necessarily an easy task, but I have run into read/SELECT auditing requirements in the past.  But anyway...

Arun Marathe gives some very good info on query plan recompilations at http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx. The information below is quoted in part from that article:

Trigger recompilations happen when SQL Server determines that a trigger execution will result in the 'inserted' or 'deleted' tables having "significantly different" (or sufficiently different) rowcounts.  In SQL 2000, a "significant difference" is determined by the formula:

ABS(log10(n+5) - log10(m+5)) >= 1

Where n is the rowcount of the 'inserted' or 'deleted' table in the cached query plan, and m is the rowcount for the current trigger execution.

For SQL 2005, the formula was changed slightly.  If you're going to end up with more rows than are in the cached query plan after trigger execution (m > n):

ABS(log10(n) - log10(m)) > 1

If you'll end up with the same number of rows or fewer after trigger execution (m <= n):

ABS(log10(n) - log10(m)) > 2.1

So, for SQL 2000, if you have a cached trigger query execution plan with a rowcount of 100, the plan will (theoretically) remain cached until you fire an instance of the trigger that generates a rowcount of 5 or fewer, or of 1,045 or more.

For SQL 2005, a cached trigger query execution plan with a rowcount of 100 will remain cached until you fire an instance of the trigger that generates a rowcount of 1,001 or more.

At the opposite end of the scale, on SQL 2005, if your cached trigger query execution plan has a rowcount of 1,000, it will remain cached until you fire an instance of the trigger that generates a rowcount of 7 or less.
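
If you want to play with those boundaries, the arithmetic above is easy to turn into T-SQL. This is just a back-of-the-envelope sketch of the SQL 2005 formulas, not anything the engine exposes:

-- For a plan cached with rowcount @n, compute the first rowcount above and the
-- last rowcount below it that would trigger a recompile under the SQL 2005 rules.
-- Purely illustrative arithmetic based on the formulas quoted above.
DECLARE @n float;
SET @n = 1000;

SELECT  FLOOR(@n * 10) + 1                               AS FirstRowcountAbove,  -- m > n: recompile when log10(m) - log10(n) > 1
        CEILING(@n / POWER(CAST(10 AS float), 2.1)) - 1  AS LastRowcountBelow;   -- m <= n: recompile when log10(n) - log10(m) > 2.1

With @n = 1000 this returns 10001 and 7, which matches the last example above.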

Post #270187
Posted Friday, March 31, 2006 10:24 AM
SSC Rookie

Group: General Forum Members
Last Login: Tuesday, September 11, 2007 5:42 PM
Points: 46, Visits: 1
Perhaps I just missed it in the article (I admit I just skimmed it), but no one has mentioned using AFTER triggers for auditing data, e.g.:
CREATE TRIGGER AuditStuff ON DataTable AFTER INSERT, UPDATE AS
<Do stuff here>
While you will still have the cost of running a trigger every time a record is changed, it will at least not hold up the transaction to do it.
Overall, I would tend towards using triggers to audit data, as they will catch updates that are not made through the client application, e.g. ad hoc changes via Management Studio. In the end, auditing is going to slow a system down; it's extra work that has to be done for every transaction. If it is slowing things down, consider either changing what you are auditing or throwing hardware at it (yes, yes, throwing money at a problem is not usually a good idea).
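
For what it's worth, a bare-bones version of that kind of audit trigger might look like the following; the audit table and column names are invented for the example:

-- Rough sketch of an auditing AFTER trigger (all object and column names are illustrative).
-- Copies every inserted/updated row into an audit table along with who did it and when.
CREATE TRIGGER AuditStuff ON dbo.DataTable AFTER INSERT, UPDATE
AS
BEGIN
    INSERT dbo.DataTable_Audit (ID, SomeColumn, ChangedBy, ChangedAt)
    SELECT i.ID, i.SomeColumn, SUSER_SNAME(), GETDATE()
    FROM   inserted AS i;
END
GO

Keep in mind that, behind connection pooling or a proxy account, SUSER_SNAME() will report that shared login rather than the end user, as discussed earlier in the thread.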
Post #270243
Posted Friday, March 31, 2006 10:26 AM
Grasshopper

GrasshopperGrasshopperGrasshopperGrasshopperGrasshopperGrasshopperGrasshopperGrasshopper

Group: General Forum Members
Last Login: Monday, July 10, 2006 9:50 AM
Points: 17, Visits: 1
I think the conclusion that case 3 is preferable is flawed for an architectural reason. The number one rule of OO design is "Program against interfaces, not objects". This approach should be applied to database design as well. Views are "interfaces" and tables are "objects". Without triggers this would not be possible.

Another downside to using stored procedures instead of views and triggers is that they do not provide enough flexibility. If I were to create a view with an INSTEAD OF trigger that does the actual insert, I would be able to use it for single-row or multi-row inserts. spAddDummyTable1 allows for individual inserts only! If you were required to insert multiple rows, you'd have to write another SP or keep calling spAddDummyTable1 for each insert. The former approach duplicates functionality, which is bad, and the latter causes really bad performance.

Let me offer a solution that uses an OO approach while maintaining flexibility and performance:

-- the following two views are INTERFACES to code against
CREATE VIEW IDummy
AS
SELECT * FROM DummyTable1
GO

CREATE VIEW IDummyAudit
AS
SELECT * FROM DummyTable1_Audit
GO

-- the following triggers implement actual inserts
CREATE TRIGGER IDummy_insert ON IDummy INSTEAD OF INSERT
AS
INSERT DummyTable1 SELECT * FROM inserted
INSERT IDummyAudit SELECT *, 'N' FROM inserted
GO

CREATE TRIGGER IDummyAudit_insert ON IDummyAudit INSTEAD OF INSERT
AS
INSERT DummyTable1_Audit SELECT * FROM inserted
GO

Now, you can code against interfaces (IDummy and IDummyAudit) instead of actual table objects while hiding the functional code in the trigger. You can also freely use INSERTs and SPs, with single- or multi-row inserts.

NOTE: For brevity, I've used * instead of an explicit column list. It's good coding practice to list columns explicitly.
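
As a usage sketch (the column and source-table names below are invented, since DummyTable1's real columns aren't shown), a single multi-row statement against the interface view fires the INSTEAD OF trigger once and audits everything in one go:

-- Multi-row insert through the "interface" view: one statement, one trigger firing,
-- and the trigger writes both the base table and the audit rows.
-- Column and source-table names are illustrative.
INSERT INTO IDummy (ID, Name)
SELECT ID, Name
FROM   dbo.StagingDummy;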
Post #270245
Posted Friday, March 31, 2006 11:08 AM
Ten Centuries

Group: General Forum Members
Last Login: Yesterday @ 3:01 PM
Points: 1,323, Visits: 794

In the conclusion, the author mentions the number of db objects required to support auditing in the two latter methods (900 and 450, respectively for a 150-table db). The implication seems to be that fewer objects = better (performance? maintenance?).

This kind of thinking, however, usually leads to bad db design... "fewer tables must be better, so let's use a single 'lookup' table for all our codes", or "let's denormalize so we don't have to do all those joins".

When making design decisions, be sure you know exactly the performance metrics you're making tradeoffs for. "Number of db objects" should be at or near the bottom of any such list.

TroyK




Post #270257
Posted Monday, April 3, 2006 10:45 AM
Valued Member

Group: General Forum Members
Last Login: Thursday, March 13, 2014 10:08 AM
Points: 69, Visits: 19
First, I'd like to thank G. Vijayakumar for a very thoughtful analysis and comparison on the operational differences between stored procedures and triggers.

Next, I'd like to remind all DBAs that in today's world (at least in the US), you not only have to consider the maintenance issues surrounding your database design, you have to consider the business issues, specifically compliance.

Having done my share of stored procedure & trigger coding, I can readily empathize with G. Vijayakumar on the difference between maintaining 450 versus 900 db code objects.

However, when I consider the compliance requirements of an audit trail, I have to ask which is the better approach: a stored procedure that modifies both the data table and the audit table, and which can be altered at any time by someone who has ALTER PROCEDURE permissions, or a stored procedure that modifies the data table and then triggers a corresponding modification to the audit table, with trigger code that can be secured separately from the stored procedure code?

Let's say that G Vijayakumar's #3 plan was in place -- modify both user data and audit table via the same stored proc. What would happen if someone who wasn't fully aware of the audit requirements were to create a stored procedure that modified the user data, but failed to modify the corresponding audit table? G Vijayakumar's solution #3 depends on usage rules -- whoever writes a stored proc that's going to modify user data has to remember to also modify the corresponding audit table(s). That's dangerous, because usage rules are always violated -- not intentionally, usually accidentally, but when they're violated, they're violated, and you're missing data from the audit tables. Compliance auditors are not going to be happy about this, and the senior DBA is going to take a hit to his or her reputation, perhaps to the point of not being trusted with corporate data... not a good scenario.

That's my perspective... so I'm leaning towards solution #2, separating stored proc from trigger, so that 1) I can manage the CREATE and ALTER permissions on both, and 2) I can be assured that on ANY modification to the user data there'll be a record written to the audit tables.
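
One way to get that separation in practice is to grant developers ALTER on the procedures but not on the tables; since altering a DML trigger requires ALTER permission on its table, the audit trigger stays out of their reach. The role name below is just a placeholder, and this is only a sketch of the idea, not a complete security model:

-- Developers can maintain the procedures, but cannot touch the table,
-- and therefore cannot ALTER or DROP the audit trigger defined on it.
-- AppDevRole is an illustrative role name.
GRANT ALTER ON OBJECT::dbo.spAddDummyTable1 TO AppDevRole;
DENY  ALTER ON OBJECT::dbo.DummyTable1      TO AppDevRole;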

Thanks again for a very useful article!



Post #270608
Posted Monday, April 3, 2006 12:45 PM
SSC Journeyman

Group: General Forum Members
Last Login: Thursday, July 31, 2014 2:41 PM
Points: 89, Visits: 480

If that level of auditing were required, then I would recommend using a program that sniffs the actual transaction log. That avoids both the ugliness of triggers and the insecurity of developers with ALTER PROCEDURE permissions having access to the audit code. Of course, in order to get accurate data, users would need direct access to the tables, and that creates a host of other security issues. Further, if you cannot trust your DBAs/database developers with ALTER TRIGGER but can trust them with ALTER PROCEDURE, then I think there are other issues at hand.

If you do not want users to have direct access to your tables, then I submit that you can't use triggers as the user affecting the change will not be available and that leaves putting the audit code in the stored procedures.

Post #270661
Posted Monday, April 3, 2006 12:56 PM
SSC Journeyman

Group: General Forum Members
Last Login: Thursday, July 31, 2014 2:41 PM
Points: 89, Visits: 480

> If you do not want users to have direct access to your tables
> , then I submit that you can't use triggers as the user affecting
> the change will not be available and that leaves putting the audit
> code in the stored procedures.

Actually, I realize I was not clear here. If you have connection pooling amongst users and the users are not using Windows Auth to connect directly to SQL Server, *then* the user would not be available. If users are using Windows Auth but going through stored procs (and not going through a proxy user such as COM+), then of course the username would be available.
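
To make the distinction concrete, here is a quick check you could run from inside a trigger or procedure (the example login name in the comment is invented):

-- Behind a pooled/proxy connection these return the service account's login,
-- not the end user; with direct Windows Auth they return the actual user.
SELECT SUSER_SNAME()    AS LoginName,      -- e.g. 'MYDOMAIN\ComPlusSvc' behind a proxy
       ORIGINAL_LOGIN() AS OriginalLogin;  -- SQL 2005+; unaffected by EXECUTE AS impersonation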

Post #270664