Costly update trigger - 70,000,000 logical reads for 30,000 rows updated!

  • I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    There are two puzzling aspects about this:

    (1) Only about 30,000 rows are being updated in the base table (as can be seen in the exec plan)

    (2) The dominant operator (86% of total workload) is an index seek on an index that contains a seemingly unrelated column (a column that does not appear anywhere in the trigger body).

    I have captured the number of logical reads and the execution plan by polling the sys.dm_exec_requests DMV once a minute.

    How can such a huge number of logical reads of the trigger be reconciled to the relatively modest number of rows updated in the underlying table?

    __________________________________________________________________________________
    SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
    Persisting SQL Server Index-Usage Statistics with MERGE
    Turbocharge Your Database Maintenance With Service Broker: Part 2

  • Here is the correct exec plan (attached)


  • Marios Philippopoulos (1/31/2009)


    I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    There are two puzzling aspects about this:

    (1) Only about 30,000 rows are being updated in the base table (as can be seen in the exec plan)

    (2) The dominant operator (86% of total workload) is an index seek on an index that contains a seemingly unrelated column (a column that does not appear anywhere in the trigger body).

    I have captured the number of logical reads and the execution plan by polling the sys.dm_exec_requests DMV once a minute.

    How can such a huge number of logical reads of the trigger be reconciled to the relatively modest number of rows updated in the underlying table?

    This sounds like an accidental cross join or triangular join between two tables... you need to double-check how many rows in each table satisfy the join condition. You may be missing a join column, or you simply may not have anticipated a possible many-to-many relationship.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
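
    Jeff's accidental many-to-many scenario can be sketched as follows (the table and column names here are hypothetical, purely for illustration):

    ```sql
    -- If JoinCol is not unique on either side, each of the m matching rows
    -- in TableA pairs with each of the n matching rows in TableB,
    -- producing m * n result rows -- and a corresponding blow-up in reads.
    SELECT  a.SomeColA, b.SomeColB
    FROM    dbo.TableA AS a
    INNER JOIN dbo.TableB AS b
            ON a.JoinCol = b.JoinCol;   -- is a second join column missing here?

    -- Quick check for duplicates on the join key in each table:
    SELECT  JoinCol, COUNT(*) AS cnt
    FROM    dbo.TableA
    GROUP BY JoinCol
    HAVING  COUNT(*) > 1;
    ```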

  • Marios Philippopoulos (1/31/2009)


    I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    Is this for one execution of the trigger? Or is it a total over a period of time?

    -- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
    Proactive Performance Solutions, Inc.
    "Performance is our middle name."

  • RBarryYoung (1/31/2009)


    Marios Philippopoulos (1/31/2009)


    I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    Is this for one execution of the trigger? Or is it a total over a period of time?

    Heh... the force begins to flow in the other direction. 😉

    --Jeff Moden



  • Jeff Moden (1/31/2009)


    RBarryYoung (1/31/2009)


    Marios Philippopoulos (1/31/2009)


    I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    Is this for one execution of the trigger? Or is it a total over a period of time?

    Heh... the force begins to flow in the other direction. 😉

    But, an "occasion" can be either an event or a point in time. It's a matter worth clarifying...

    -- RBarryYoung

  • Ah... true enough.

    --Jeff Moden



  • RBarryYoung (1/31/2009)


    Marios Philippopoulos (1/31/2009)


    I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    Is this for one execution of the trigger? Or is it a total over a period of time?

    It's a snapshot in time. Here is the query I use to capture the info:

    SELECT  R.start_time, R.status, R.command, R.database_id,
            S.host_name, S.program_name, S.client_interface_name, S.login_name,
            R.wait_type, R.wait_time, R.last_wait_type, R.wait_resource,
            R.cpu_time, R.total_elapsed_time, R.reads, R.writes, R.logical_reads,
            T.text, P.query_plan
    FROM    sys.dm_exec_requests AS R
    INNER JOIN sys.dm_exec_sessions AS S
            ON R.session_id = S.session_id
    OUTER APPLY sys.dm_exec_sql_text(R.sql_handle) AS T
    OUTER APPLY sys.dm_exec_query_plan(R.plan_handle) AS P
    WHERE   R.session_id > 50;

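
    One caveat worth noting about this approach (a general property of sys.dm_exec_requests, not something established in this thread): logical_reads accumulates over the life of the request, so a single polled value covers every statement the batch has executed so far, not just the trigger. A sketch of polling for a delta instead (the session_id value is a hypothetical example):

    ```sql
    -- Snapshot logical_reads twice and diff, to see how much I/O
    -- a long-running request performed during the polling interval.
    DECLARE @spid int = 53, @before bigint, @after bigint;

    SELECT @before = logical_reads FROM sys.dm_exec_requests WHERE session_id = @spid;
    WAITFOR DELAY '00:01:00';
    SELECT @after  = logical_reads FROM sys.dm_exec_requests WHERE session_id = @spid;

    SELECT @after - @before AS logical_reads_in_interval;
    ```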

  • Thanks guys for your responses.

    I should clarify that this is not a one-off event.

    Whenever the trigger is executed, the number of reads is always in the millions, while the number of rows updated in the base table is typically in the 30,000-row ballpark.

    Looking at the execution plan has not raised any flags (in my mind).

    I'm wondering whether the query I use to capture the info is incorrect (see my previous posting right above).

    Perhaps I should run a server-side trace zeroing in on the trigger execution to get an independent confirmation on the number of reads.

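
    As an independent check in a test environment, SET STATISTICS IO reports the reads of each statement separately, including the statements the trigger fires (a sketch; SomeColumn is a hypothetical stand-in for a real column of the table):

    ```sql
    -- In a test copy of the database, measure reads for a representative
    -- update directly; the trigger's statements are reported separately.
    SET STATISTICS IO ON;

    BEGIN TRANSACTION;
        UPDATE dbo.UpdTable                    -- table name from the trigger
        SET    SomeColumn = SomeColumn         -- hypothetical no-op change
        WHERE  UpdTableOID BETWEEN 1 AND 30000;
    ROLLBACK TRANSACTION;

    SET STATISTICS IO OFF;
    ```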

  • Can you perhaps post the trigger code?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (2/1/2009)


    Can you perhaps post the trigger code?

    Sure, here it is. I had posted this code in a previous posting (http://www.sqlservercentral.com/Forums/Topic630749-360-4.aspx), and you had correctly pointed out that some of the code at the start of the trigger body is not needed.

    Practically all of the work is done by the last UPDATE statement, as can also be seen in the exec plan (please see my 2nd posting in this thread).

    My concern in the previous posting was on the effect the trigger had on tempdb. Here I'm focusing on the discrepancy between the huge number of reads and the modest number of rows updated in the base table.

    One of the responders in the previous post had this to say on the subject (highlighting my own), but this angle was not explored further:

    TheSQLGuru (1/8/2009)


    It is my belief that even if you rewrite the trigger to only do the audit update on rows where something is different between inserted and deleted the trigger will still have to hit 2+M rows in the base table to gather the comparison data thus 20M IOs and 3GB of tempdb space will still be used, even if it winds up that you are only truly modifying 1 row of actual data. That doesn't even make much sense to me at the moment since why affect 2M+ rows if you only modify 1?

    I guess the point in the above is that an update of even a small number of rows in a large table will still have a huge impact on the total I/O (and memory use) involved, if a trigger like the one here is used.

    If this is the case, why is this not reflected in the exec plan?

    CREATE TRIGGER [dbo].[TG_U_UpdTable]
    ON [dbo].[UpdTable]
    FOR UPDATE
    AS
    SET NOCOUNT ON;

    DECLARE @ActionType char(1);

    SELECT @ActionType = CASE
                             WHEN EXISTS ( SELECT null FROM inserted ) THEN 'U'
                             WHEN EXISTS ( SELECT null FROM deleted )  THEN 'D'
                             ELSE 'X'
                         END;

    IF @ActionType = 'X'
    BEGIN
        RETURN;
    END;

    UPDATE  D
    SET     D.[db_updateDate] = getdate(),
            D.[db_updateBy]   = system_user
    FROM    [inserted] AS i
    INNER JOIN [dbo].[UpdTable] AS D
            ON i.[UpdTableOID] = D.[UpdTableOID];

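
    Following TheSQLGuru's point about only auditing rows where something actually changed, the final UPDATE could be restricted with a join to deleted (a sketch; SomeColumn is a hypothetical stand-in for the real audited columns, and NULL handling is omitted for brevity):

    ```sql
    -- Only touch rows where an audited column actually changed.
    UPDATE  D
    SET     D.[db_updateDate] = getdate(),
            D.[db_updateBy]   = system_user
    FROM    inserted AS i
    INNER JOIN deleted AS del
            ON i.[UpdTableOID] = del.[UpdTableOID]
    INNER JOIN dbo.[UpdTable] AS D
            ON i.[UpdTableOID] = D.[UpdTableOID]
    WHERE   i.SomeColumn <> del.SomeColumn;   -- repeat per audited column
    ```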

  • Marios, I think that what you gave us before was the estimated execution plan. Can you get us the actual execution plan? It should also be available through tracing/Profiler.

    -- RBarryYoung

  • Marios Philippopoulos (2/1/2009)


    RBarryYoung (1/31/2009)


    Marios Philippopoulos (1/31/2009)


    I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    Is this for one execution of the trigger? Or is it a total over a period of time?

    It's a snapshot in time. Here is the query I use to capture the info:

    SELECT  R.start_time, R.status, R.command, R.database_id,
            S.host_name, S.program_name, S.client_interface_name, S.login_name,
            R.wait_type, R.wait_time, R.last_wait_type, R.wait_resource,
            R.cpu_time, R.total_elapsed_time, R.reads, R.writes, R.logical_reads,
            T.text, P.query_plan
    FROM    sys.dm_exec_requests AS R
    INNER JOIN sys.dm_exec_sessions AS S
            ON R.session_id = S.session_id
    OUTER APPLY sys.dm_exec_sql_text(R.sql_handle) AS T
    OUTER APPLY sys.dm_exec_query_plan(R.plan_handle) AS P
    WHERE   R.session_id > 50;

    I think this may be where the cross joins are... what is the ACTUAL row count from the TRIGGER itself? Add a row count to the trigger and see, because I don't see anything in the trigger that should cause this.

    --Jeff Moden


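
    Jeff's row-count suggestion could be implemented by capturing @@ROWCOUNT immediately after the trigger's UPDATE (a sketch; the dbo.TriggerRowCounts log table is hypothetical):

    ```sql
    -- Log the actual number of rows the trigger's UPDATE affected.
    UPDATE  D
    SET     D.[db_updateDate] = getdate(),
            D.[db_updateBy]   = system_user
    FROM    inserted AS i
    INNER JOIN dbo.[UpdTable] AS D
            ON i.[UpdTableOID] = D.[UpdTableOID];

    -- @@ROWCOUNT must be read immediately, before any other statement runs.
    INSERT INTO dbo.TriggerRowCounts (logged_at, rows_affected)
    VALUES (getdate(), @@ROWCOUNT);
    ```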

  • RBarryYoung (2/1/2009)


    Marios, I think that what you gav eus before was the estimated execution plan. Can you get us the actual execution plan? IT should also be avaialable through tracing/profiler.

    I don't think it's the estimated exec plan; I obtained it from sys.dm_exec_requests, and it was retrieved while the statement was actually executing...


  • Jeff Moden (2/1/2009)


    Marios Philippopoulos (2/1/2009)


    RBarryYoung (1/31/2009)


    Marios Philippopoulos (1/31/2009)


    I have a trigger that appears to execute a very large number of logical reads; on one occasion it has hit 73,421,728 logical reads!

    Is this for one execution of the trigger? Or is it a total over a period of time?

    It's a snapshot in time. Here is the query I use to capture the info:

    SELECT  R.start_time, R.status, R.command, R.database_id,
            S.host_name, S.program_name, S.client_interface_name, S.login_name,
            R.wait_type, R.wait_time, R.last_wait_type, R.wait_resource,
            R.cpu_time, R.total_elapsed_time, R.reads, R.writes, R.logical_reads,
            T.text, P.query_plan
    FROM    sys.dm_exec_requests AS R
    INNER JOIN sys.dm_exec_sessions AS S
            ON R.session_id = S.session_id
    OUTER APPLY sys.dm_exec_sql_text(R.sql_handle) AS T
    OUTER APPLY sys.dm_exec_query_plan(R.plan_handle) AS P
    WHERE   R.session_id > 50;

    I think this may be where the cross joins are... what is the ACTUAL row count from the TRIGGER itself? Add a row count to the trigger and see, because I don't see anything in the trigger that should cause this.

    The SELECT query is for capturing the system info; it has nothing to do with the trigger.


Viewing 15 posts - 1 through 15 (of 38 total)
