Is using triggers the only way of auditing updates of table rows?

  • TheSQLGuru (1/8/2009)


It is my belief that even if you rewrite the trigger to do the audit update only on rows where something differs between inserted and deleted, the trigger will still have to hit 2M+ rows in the base table to gather the comparison data; thus 20M IOs and 3GB of tempdb space will still be used, even if it turns out you are only truly modifying one row of actual data. That doesn't make much sense to me at the moment: why affect 2M+ rows if you only modify one?

    Wouldn't updating fewer rows in the base table make for a less costly update statement?

    ;with ChangedRows as
    (
        select * from inserted
        except
        select * from deleted
    )
    update MyTable
    set ... -- Audit columns here
    from dbo.MyTable
    inner join ChangedRows
        on MyTable.ID = ChangedRows.ID

    __________________________________________________________________________________
    SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
    Persisting SQL Server Index-Usage Statistics with MERGE
    Turbocharge Your Database Maintenance With Service Broker: Part 2
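The CTE pattern in the post above can be sketched as a complete audit trigger. This is a sketch only: `dbo.MyTable`, `SomeCol`, and the audit columns `LastModified`/`ModifiedBy` are hypothetical stand-ins, not names from the thread. Note that the EXCEPT comparison deliberately excludes the audit columns themselves, otherwise stale audit values would make every row look changed.

```sql
-- Sketch, assuming a hypothetical dbo.MyTable (ID, SomeCol)
-- with audit columns LastModified and ModifiedBy.
create trigger trg_MyTable_Audit
on dbo.MyTable
after update
as
begin
    set nocount on;

    with ChangedRows as
    (
        -- Compare only the data columns; rows where nothing
        -- changed drop out of the EXCEPT here.
        select ID, SomeCol from inserted
        except
        select ID, SomeCol from deleted
    )
    update t
    set LastModified = getdate(),
        ModifiedBy   = suser_sname()
    from dbo.MyTable as t
    inner join ChangedRows as c
        on t.ID = c.ID;
end;
```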

  • I think the point is that if it is known that there will be a significant number of records updated with no change, why update them in the first place? It would be more efficient to check before updating than to update and then check.
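That check-before-updating idea can be expressed directly in the UPDATE statement itself, so unchanged rows never qualify and the trigger never fires for them. A minimal sketch with hypothetical column and parameter names; the extra NULL checks matter because `SomeCol <> @NewValue` alone would skip rows where either side is NULL:

```sql
-- Sketch: the WHERE clause filters out no-op updates up front,
-- so only rows whose value actually changes are touched.
update dbo.MyTable
set SomeCol = @NewValue
where ID = @ID
  and (SomeCol <> @NewValue
       or (SomeCol is null and @NewValue is not null)
       or (SomeCol is not null and @NewValue is null));
```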

  • Marios Philippopoulos (1/8/2009)


    TheSQLGuru (1/8/2009)


It is my belief that even if you rewrite the trigger to do the audit update only on rows where something differs between inserted and deleted, the trigger will still have to hit 2M+ rows in the base table to gather the comparison data; thus 20M IOs and 3GB of tempdb space will still be used, even if it turns out you are only truly modifying one row of actual data. That doesn't make much sense to me at the moment: why affect 2M+ rows if you only modify one?

    Wouldn't updating fewer rows in the base table make for a less costly update statement?

    ;with ChangedRows as
    (
        select * from inserted
        except
        select * from deleted
    )
    update MyTable
    set ... -- Audit columns here
    from dbo.MyTable
    inner join ChangedRows
        on MyTable.ID = ChangedRows.ID

The optimizer will have absolutely no way to know how many rows are going to come out of that CTE. My guess is that it will estimate enough rows to cause a table scan on MyTable if there are actually 2M rows affected by the update, and thus a full lock on MyTable, row versioning, etc.

Oh, and the inserted/deleted tables are created in tempdb regardless of how many rows you update back into MyTable. They are part of the trigger overhead in SQL Server 2005 (prior versions of SQL Server read the transaction log to instantiate inserted/deleted). So you could wind up with a TRIPLE hit on tempdb: the inserted table, the deleted table, AND MyTable row versioning, assuming that subsystem is in play as well.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
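One way to cut some of the cost described above is to bail out of the trigger early when no real change occurred, before running the audit UPDATE at all. A hedged sketch; per the post above, inserted/deleted are still materialized in tempdb either way, so this saves the base-table update but not the trigger-table overhead itself:

```sql
-- Sketch: exit the trigger immediately when inserted and deleted
-- are identical, i.e. every "updated" row kept its original values.
if not exists (select * from inserted
               except
               select * from deleted)
    return;
```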

  • TheSQLGuru (1/8/2009)


    Marios Philippopoulos (1/8/2009)


    TheSQLGuru (1/8/2009)


It is my belief that even if you rewrite the trigger to do the audit update only on rows where something differs between inserted and deleted, the trigger will still have to hit 2M+ rows in the base table to gather the comparison data; thus 20M IOs and 3GB of tempdb space will still be used, even if it turns out you are only truly modifying one row of actual data. That doesn't make much sense to me at the moment: why affect 2M+ rows if you only modify one?

    Wouldn't updating fewer rows in the base table make for a less costly update statement?

    ;with ChangedRows as
    (
        select * from inserted
        except
        select * from deleted
    )
    update MyTable
    set ... -- Audit columns here
    from dbo.MyTable
    inner join ChangedRows
        on MyTable.ID = ChangedRows.ID

The optimizer will have absolutely no way to know how many rows are going to come out of that CTE. My guess is that it will estimate enough rows to cause a table scan on MyTable if there are actually 2M rows affected by the update, and thus a full lock on MyTable, row versioning, etc.

Oh, and the inserted/deleted tables are created in tempdb regardless of how many rows you update back into MyTable. They are part of the trigger overhead in SQL Server 2005 (prior versions of SQL Server read the transaction log to instantiate inserted/deleted). So you could wind up with a TRIPLE hit on tempdb: the inserted table, the deleted table, AND MyTable row versioning, assuming that subsystem is in play as well.

    Hmm, good points, thanks!

I will try to do some testing when I get the chance, and will certainly post here if I have any news.


  • matt stockham (1/8/2009)


    I think the point is that if it is known that there will be a significant number of records updated with no change, why update them in the first place? It would be more efficient to check before updating than to update and then check.

    That's true.

