SQLServerCentral is supported by Red Gate Software Ltd.
 
 
 
 
        



Access variables values from Trigger
Author
Message
Posted Tuesday, June 8, 2010 10:46 PM
Valued Member


Group: General Forum Members
Last Login: Tuesday, November 11, 2014 10:12 PM
Points: 51, Visits: 185
Comments posted to this topic are about the item Access variables values from Trigger
Post #934382
Posted Wednesday, June 9, 2010 12:14 AM
Forum Newbie


Group: General Forum Members
Last Login: Friday, November 14, 2014 2:24 PM
Points: 8, Visits: 172
That's a good idea. It will work in most cases, but not with replication. If a table is published in a merge publication, then while the replication agent is modifying data, the agent's connection uses CONTEXT_INFO as part of its own identification as a merge agent. In that case, the context info the trigger sees will not be convertible to XML.


Post #934414
Posted Wednesday, June 9, 2010 2:26 AM
Old Hand


Group: General Forum Members
Last Login: Wednesday, December 4, 2013 8:02 AM
Points: 337, Visits: 158
An alternative is to use a temp table created by the proc, which will be visible to the trigger.
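A minimal sketch of that approach (procedure, table, and column names are illustrative, not from the article): a `#temp` table created inside a stored procedure stays in scope for any trigger fired by the procedure's statements.

```sql
-- Illustrative sketch: pass audit context to a trigger via a local temp table.
CREATE PROCEDURE dbo.DeleteProduct
    @ProductID int,
    @UserID    int,
    @Reason    varchar(200)
AS
BEGIN
    -- Visible to triggers fired by statements in this procedure.
    CREATE TABLE #DeleteContext (UserID int, Reason varchar(200));
    INSERT INTO #DeleteContext (UserID, Reason) VALUES (@UserID, @Reason);

    DELETE FROM dbo.Products WHERE ProductID = @ProductID;
END
GO

-- Inside the delete trigger, read the context defensively, since the
-- table will not exist when the delete comes from some other path:
--   IF OBJECT_ID('tempdb..#DeleteContext') IS NOT NULL
--       SELECT UserID, Reason FROM #DeleteContext;
```

The existence check matters: ad hoc deletes that bypass the procedure will fire the trigger without the temp table present.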
Post #934446
Posted Wednesday, June 9, 2010 2:35 AM
SSC Eights!


Group: General Forum Members
Last Login: Yesterday @ 7:07 AM
Points: 988, Visits: 814
A very good article to show how context_info works and how it can be applied.

If the 128 bytes of context_info is insufficient for the purpose, a CLR could be used to first register persistent, session related context information, then retrieve it.

I'm not so sure the auditing function should rely on cooperation from the process that is deleting the rows, though.
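For readers unfamiliar with the mechanism under discussion, here is a minimal sketch of the CONTEXT_INFO round trip (values illustrative; the article's actual code may differ). CONTEXT_INFO holds up to 128 bytes of `varbinary` per session.

```sql
-- Illustrative sketch: stash a small XML fragment in CONTEXT_INFO,
-- then recover it (e.g. inside a trigger).
DECLARE @ctx varbinary(128);
SET @ctx = CONVERT(varbinary(128), '<ctx UserID="7" Reason="obsolete"/>');
SET CONTEXT_INFO @ctx;

-- Later, in the same session (for example inside a DELETE trigger):
DECLARE @x xml;
-- Depending on how the value is read, trailing zero bytes may be
-- appended as padding; stripping CHAR(0) guards the XML conversion.
SELECT @x = CONVERT(xml,
       REPLACE(CONVERT(varchar(128), CONTEXT_INFO()), CHAR(0), ''));

SELECT @x.value('(/ctx/@UserID)[1]', 'int')          AS UserID,
       @x.value('(/ctx/@Reason)[1]', 'varchar(200)') AS Reason;
```

Note that `SET CONTEXT_INFO` takes a variable or constant rather than an arbitrary expression, hence the intermediate `@ctx`.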
Post #934452
Posted Wednesday, June 9, 2010 4:07 AM
Valued Member


Group: General Forum Members
Last Login: Friday, November 21, 2014 3:32 AM
Points: 55, Visits: 442
Nice article, and interesting use of CONTEXT_INFO.

Be careful that nothing else depends on CONTEXT_INFO though! (Overwriting CONTEXT_INFO could have unintended impact on other routines that rely on session information stored there.)


While not suitable for deletions or for minimally-invasive auditing, another approach to passing in data to a trigger is to use an INSTEAD OF trigger on a view:

Create a view that returns all the table columns, plus adds an additional "parameter" column or columns.

Then you can create an INSTEAD OF INSERT trigger (or INSTEAD OF UPDATE) on the view.

This allows you to perform the insert into the view (instead of the underlying table), providing values for the "parameter" column--and the trigger has the ability to read the "parameter" column.

(To clarify, the "parameter" column doesn't really get stored anywhere or returned by the view: it is in the view solely to allow a parameter value to be passed in on updates or inserts.)

This approach assumes SQL 2005, but provides the benefit of efficiently passing in parameters for each row (as opposed to per-execution), and allows the parameter(s) to be of any data type.

This approach doesn't do any good for deletes, and is probably not ideal for minimally-invasive passive auditing...but I've used it extensively with great success.
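The view-with-parameter-column technique described above can be sketched as follows (all object names are illustrative; `dbo.ProductAudit` is an assumed audit table, not from the article):

```sql
-- Illustrative sketch: a view exposing an extra "Reason" column that exists
-- only so callers can pass a per-row parameter into an INSTEAD OF trigger.
CREATE VIEW dbo.Products_WithReason
AS
SELECT ProductID, Name,
       CONVERT(varchar(200), NULL) AS Reason   -- never stored, never returned with data
FROM dbo.Products;
GO

CREATE TRIGGER trg_Products_WithReason_Insert
ON dbo.Products_WithReason
INSTEAD OF INSERT
AS
BEGIN
    -- The real insert targets the base table; Reason is only read here.
    INSERT INTO dbo.Products (ProductID, Name)
    SELECT ProductID, Name FROM inserted;

    INSERT INTO dbo.ProductAudit (ProductID, Reason, AuditDate)
    SELECT ProductID, Reason, GETDATE() FROM inserted;
END
GO

-- Usage: insert through the view, supplying the per-row parameter.
INSERT INTO dbo.Products_WithReason (ProductID, Name, Reason)
VALUES (1, 'Widget', 'initial load');
```

INSTEAD OF triggers are what make this legal: the view column is not updatable on its own, but the trigger still receives the supplied value through the `inserted` pseudo-table.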

Post #934495
Posted Wednesday, June 9, 2010 4:42 AM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, June 9, 2010 10:38 PM
Points: 1, Visits: 2
nice article
Post #934515
Posted Wednesday, June 9, 2010 4:46 AM


Ten Centuries


Group: General Forum Members
Last Login: Yesterday @ 5:23 AM
Points: 1,409, Visits: 1,316
Nice idea. But we already use context info for other data. It is possible to copy that data into a variable, use this trick, and then restore it, but I'm afraid some subprocess might need the original value in the meantime.



See, understand, learn, try, use efficiently
© Dr.Plch
Post #934519
Posted Wednesday, June 9, 2010 5:24 AM
SSC Veteran


Group: General Forum Members
Last Login: Friday, November 22, 2013 2:23 PM
Points: 245, Visits: 60
A nice idea, but your trigger relies on the fact that you would only ever delete one product at a time. You'd have to be very careful about other procedures which perform deletes and may not fill the context_info.

A global temporary table, or even a persistent table with a key based on the connection id (or better, the transaction id from dm_tran_current_transaction), would work better and would allow you to delete multiple rows.
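A sketch of the transaction-keyed variant (table and column names are illustrative): because a trigger runs inside the same transaction as the statement that fired it, both sides see the same `transaction_id`. Note that querying `sys.dm_tran_current_transaction` requires VIEW SERVER STATE permission.

```sql
-- Illustrative sketch: persist audit context keyed by the current
-- transaction id, so the trigger can join back to it.
BEGIN TRAN;

INSERT INTO dbo.AuditContext (TransactionID, UserID, Reason)
SELECT transaction_id, 7, 'obsolete stock'
FROM sys.dm_tran_current_transaction;

DELETE FROM dbo.Products
WHERE Discontinued = 1;        -- delete trigger fires here, same transaction

-- Inside the delete trigger:
--   SELECT ac.UserID, ac.Reason
--   FROM dbo.AuditContext ac
--   JOIN sys.dm_tran_current_transaction t
--     ON ac.TransactionID = t.transaction_id;

COMMIT;
-- A cleanup step (or a DELETE at the end of the transaction) keeps
-- dbo.AuditContext from growing without bound.
```

Unlike a 128-byte CONTEXT_INFO value, this carries arbitrarily large context and naturally supports multi-row deletes.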
Post #934534
Posted Wednesday, June 9, 2010 6:56 AM
Ten Centuries


Group: General Forum Members
Last Login: Yesterday @ 10:52 AM
Points: 1,366, Visits: 1,729
simon.ellistonball (6/9/2010)
A nice idea, but your trigger relies on the fact that you would only ever delete one product at a time.


No, it doesn't. It does rely on having only one UserID and Reason for a given batch of deletes, and it does rely on context_info being XML, but the trigger as written should properly audit a deletion of multiple records in a batch if the previous conditions are met.
Post #934574
Posted Wednesday, June 9, 2010 7:13 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Thursday, July 22, 2010 8:59 AM
Points: 110, Visits: 952
I can appreciate a get-it-done-today workaround, but wouldn't a better approach be to rewrite the trigger/application layers to provide better tier isolation? I had assumed a trigger should be encapsulated such that it validates data integrity based solely on RI rules; it should not need information supplied by the user at the SP scope. Likewise, business logic should probably be enforced at the interface between the data and the application (aka the SP). Pushing this requirement all the way back to the UI, the application should not even call for a delete without proper authentication and "reason" coding.

I also wonder what kind of performance you can expect from the xml->CONTEXT_INFO->xml conversion happening per transaction. Granted, deletes may be infrequent, but the transactional overhead should be considered before it becomes a performance problem.

I guess I don't understand why the auditing is done in the trigger rather than directly in the SP. I have seen where the trigger provides a single place to hook actions on the data when multiple SP already exist that act on the table. This solution would still require visiting each SP to ensure the context_info is properly set.

Thanks for the article. I don't think I would have learned about the context_info myself.
Post #934584