Capturing Data Changes

  • Comments posted to this topic are about the item Capturing Data Changes

  • I prefer to use triggers. They do have to be written carefully, but done properly they capture changes well. NULLs are the tricky part when auditing updates: with inserts and deletes NULLs are automatically present, because one side of the change does not exist, so a NULL in the comparison can make it difficult or impossible to distinguish what is actually going on.
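
    For illustration, a bare-bones audit trigger might look something like this (dbo.Customer and dbo.Customer_Audit are invented names; the LEFT JOIN between the inserted and deleted pseudo-tables is exactly where the NULLs creep in):

    CREATE TABLE dbo.Customer_Audit
    (
        AuditId     int IDENTITY(1,1) PRIMARY KEY,
        CustomerId  int          NOT NULL,
        Email       varchar(100) NULL,
        AuditAction char(1)      NOT NULL,  -- 'I', 'U', or 'D'
        AuditedAt   datetime2    NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    CREATE TRIGGER trg_Customer_Audit
    ON dbo.Customer
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Rows only in inserted are inserts; rows in both are updates.
        INSERT INTO dbo.Customer_Audit (CustomerId, Email, AuditAction)
        SELECT i.CustomerId, i.Email,
               CASE WHEN d.CustomerId IS NULL THEN 'I' ELSE 'U' END
        FROM inserted AS i
        LEFT JOIN deleted AS d
            ON d.CustomerId = i.CustomerId

        UNION ALL

        -- Rows only in deleted are deletes.
        SELECT d.CustomerId, d.Email, 'D'
        FROM deleted AS d
        WHERE NOT EXISTS (SELECT 1 FROM inserted AS i
                          WHERE i.CustomerId = d.CustomerId);
    END;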

  • I too am a fan of triggers (only for auditing). A long time ago I wrote an article on automating the creation of audit tables and audit triggers.

    https://www.sqlservercentral.com/articles/an-auditing-solution-with-xml-and-xsl

    A second "fun proof of concept" article uses a trigger to generate "rollback and roll forward" code for each data modification.

    https://www.sqlservercentral.com/articles/imaginative-auditing-with-rollback-undo-and-rollforward-redo-part-i-1

    These articles are both maybe 10 years old, so possibly a little outdated.

  • I don't have to implement data change capture very often, but when I do I put it right in the proc making the update, either with the OUTPUT clause or simply a second INSERT statement into an audit table. Granted, if someone changes data outside the proc, those changes will be lost, but the chances of that can be reduced with proper permissions.
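
    The OUTPUT version looks roughly like this (all names invented, and it assumes the audit table already exists):

    CREATE PROCEDURE dbo.UpdateCustomerEmail
        @CustomerId int,
        @NewEmail   varchar(100)
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE dbo.Customer
        SET Email = @NewEmail
        OUTPUT deleted.CustomerId,
               deleted.Email,        -- old value
               inserted.Email,       -- new value
               SYSUTCDATETIME()
        INTO dbo.CustomerEmail_Audit (CustomerId, OldEmail, NewEmail, AuditedAt)
        WHERE CustomerId = @CustomerId;
    END;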

    I loathe triggers and will only use them if there is absolutely no other way to accomplish the task. We currently struggle with performance issues caused by triggers added by my non-DBA predecessors.

    Be still, and know that I am God - Psalm 46:10

  • I recently added audit triggers that had some degree of complexity. To test the performance, one of the developers issued a million inserts in a little over an hour, far beyond what would be expected in production. Everything worked as expected. I've heard about performance issues with triggers, but I haven't seen them for audit triggers. I have seen them for triggers that do work that should be done by the application.

  • Where I am, we tend to use the OUTPUT INTO clause to create rollback tables and preserve data.
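
    Something along these lines (names invented, and the rollback table has to exist up front, since OUTPUT INTO cannot create it):

    -- Capture the pre-update state into a pre-created rollback table.
    UPDATE dbo.Product
    SET UnitPrice = UnitPrice * 1.05
    OUTPUT deleted.ProductId, deleted.UnitPrice, SYSUTCDATETIME()
    INTO dbo.Product_Rollback (ProductId, OldUnitPrice, CapturedAt)
    WHERE CategoryId = 7;

    -- If the change has to be undone, restore from the saved rows.
    UPDATE p
    SET p.UnitPrice = r.OldUnitPrice
    FROM dbo.Product AS p
    JOIN dbo.Product_Rollback AS r
        ON r.ProductId = p.ProductId;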

  • RonKyle wrote:

    I recently added audit triggers that had some degree of complexity. To test the performance, one of the developers issued a million inserts in a little over an hour, far beyond what would be expected in production. Everything worked as expected. I've heard about performance issues with triggers, but I haven't seen them for audit triggers. I have seen them for triggers that do work that should be done by the application.

    A million inserts.  Was that indicative of your actual workload?  Was it a million inserts done sequentially, or all at once?  Inserting row after row may not be a valid test.

    You are comfortable with triggers, and they may certainly work very well for your environment.  But, they may not be the right tool for the job.

    We have to capture data changes on a variety of systems.  Triggers would fail miserably on a few of them.  We've covered our needs with CDC, temporal tables, triggers, output clauses, and so forth.
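
    Temporal tables, for instance, make the engine do the capture for you. A minimal example (made-up table, SQL Server 2016 or later):

    CREATE TABLE dbo.Account
    (
        AccountId int IDENTITY(1,1) PRIMARY KEY,
        Balance   decimal(19,4) NOT NULL,
        ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Account_History));

    -- Every change is then queryable from the history:
    SELECT *
    FROM dbo.Account
    FOR SYSTEM_TIME AS OF '2020-01-01T00:00:00';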

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • david.gugg wrote:

    I don't have to implement data change capture very often, but when I do I put it right in the proc making the update, either with the OUTPUT clause or simply a second INSERT statement into an audit table. Granted, if someone changes data outside the proc, those changes will be lost, but the chances of that can be reduced with proper permissions.

    I loathe triggers and will only use them if there is absolutely no other way to accomplish the task. We currently struggle with performance issues caused by triggers added by my non-DBA predecessors.

    This is my view.

  • I've used INSERT.. DELETE.. OUTPUT.. in the past to implement archival of log tables. A daily scheduled job simply deletes rows older than X days, and the deleted rows are inserted into another table. What I like about this is that the DELETE and INSERT are contained in one atomic transaction.
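
    Roughly like this (table and column names invented):

    -- Atomic archive-and-purge: the deleted rows land in the archive
    -- table in the same implicit transaction as the DELETE itself.
    DELETE FROM dbo.AppLog
    OUTPUT deleted.LogId, deleted.LogDate, deleted.Message
    INTO dbo.AppLog_Archive (LogId, LogDate, Message)
    WHERE LogDate < DATEADD(DAY, -30, SYSUTCDATETIME());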

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Our application uses a few different databases. Auditing is handled in the application layer: it first creates a distributed transaction, then performs the update in one database, then diffs the old and new versions of the object, then writes the audit data to the other database, and lastly commits the distributed transaction.
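
    The application drives this through its data access layer, but the rough T-SQL equivalent, over a linked server and with every name invented, would be:

    BEGIN DISTRIBUTED TRANSACTION;

    -- 1. Apply the change in the application database.
    UPDATE AppDb.dbo.Customer
    SET Email = 'new@example.com'
    WHERE CustomerId = 42;

    -- 2. Write the computed diff to the audit database.
    INSERT INTO AuditSrv.AuditDb.dbo.ChangeLog
        (EntityName, EntityId, Diff, ChangedAt)
    VALUES
        ('Customer', 42,
         '{"Email": ["old@example.com", "new@example.com"]}',
         SYSUTCDATETIME());

    -- 3. Both writes commit or roll back together.
    COMMIT TRANSACTION;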

  • I have used this approach to capture lists of records that were deleted or updated. It is also a good way to write scripts that insert into multiple tables at once: where the primary ID value is generated and otherwise not known, but some other reference is, the new IDs can be OUTPUT into a temp table and joined to the next insert to provide the ID value.
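
    For instance (schema invented; the staging temp tables are assumed to exist):

    CREATE TABLE #IdMap (OrderRef varchar(20), OrderId int);

    -- First insert: capture each generated OrderId next to a known reference.
    INSERT INTO dbo.Orders (OrderRef, CustomerId)
    OUTPUT inserted.OrderRef, inserted.OrderId INTO #IdMap (OrderRef, OrderId)
    SELECT s.OrderRef, s.CustomerId
    FROM #StagedOrders AS s;

    -- Second insert: join back on the reference to pick up the new OrderId.
    INSERT INTO dbo.OrderLines (OrderId, ProductId, Qty)
    SELECT m.OrderId, s.ProductId, s.Qty
    FROM #StagedOrderLines AS s
    JOIN #IdMap AS m
        ON m.OrderRef = s.OrderRef;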

  • Robin Wilson wrote:

    I have used this approach to capture lists of records that were deleted or updated. It is also a good way to write scripts that insert into multiple tables at once: where the primary ID value is generated and otherwise not known, but some other reference is, the new IDs can be OUTPUT into a temp table and joined to the next insert to provide the ID value.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/
