• Indianrock (2/14/2016)


    Thanks Hugo, I will thoroughly review your post.

    I'll experiment with the number of rows gathered into the temp table up front -- this is only done once per day, as the job runs once per day. In the case of this Address table, where about 4 million rows currently meet the purge criteria, I might try gathering all of them into the temp table on a weekend and then running the while loop. I'm also considering having the while-loop deletions purge 100 rows at a time, as long as that value is low enough not to cause blocking.

    Current simplified purge logic:

    Get x number of "id" values from target table into a temp table

    In a while loop, delete the rows for the ID values in the temp table, one at a time
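    The first step above might be sketched like this (a minimal example -- the table name, the `LastModified` column, the seven-year cutoff, and the 100,000-row cap are all hypothetical placeholders for whatever the real purge criteria are):

    ```sql
    -- Step 1: gather candidate Id values into a temp table.
    -- Names and purge criterion are illustrative, not from the actual job.
    SELECT TOP (100000) Id
    INTO #PurgeIds
    FROM dbo.Address
    WHERE LastModified < DATEADD(YEAR, -7, GETDATE());
    ```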

    Note that gathering the rows to be deleted into a temp table (assuming the default isolation level) does not cause blocking; only the deletion does. I think you are aware of this, but just wanted to make sure.

    EDIT: Also, deleting one row at a time is a bit too extreme, in my opinion. I think that deleting 5,000 or 10,000 rows at a time would be just fine.
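    A batched version of the while loop might look like the sketch below. It assumes a `#PurgeIds` temp table already filled with candidate `Id` values (names are hypothetical); each `DELETE TOP` runs as its own autocommit transaction, so locks are held only for the duration of one batch, and the loop ends when a batch finds nothing left to delete:

    ```sql
    -- Step 2: delete in batches instead of one row at a time.
    DECLARE @BatchSize int = 5000;

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@BatchSize) a
        FROM dbo.Address AS a
        INNER JOIN #PurgeIds AS p
            ON p.Id = a.Id;

        IF @@ROWCOUNT = 0
            BREAK;   -- no remaining rows match the temp table
    END;
    ```

    There is no need to clean up `#PurgeIds` between batches: once an Address row is gone, its Id simply no longer joins to anything, so later batches skip it automatically.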

    Do check the estimated execution plan of the delete first, to verify that all foreign key constraint checks can be done with an index seek. If index or table scans are needed, the delete will be slow and cause blocking (regardless of the number of rows deleted).
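    For example, if some child table references Address through a foreign key but has no index on the referencing column, every deleted Address row forces a scan of that child table during the FK check. A supporting index turns that into a seek (table and column names here are purely illustrative):

    ```sql
    -- Hypothetical child table dbo.[Order] with an FK column AddressId
    -- referencing dbo.Address. Without this index, each delete from
    -- Address scans dbo.[Order] to verify no row still references it.
    CREATE NONCLUSTERED INDEX IX_Order_AddressId
        ON dbo.[Order] (AddressId);
    ```

    In SSMS, "Display Estimated Execution Plan" (Ctrl+L) on the DELETE statement will show whether each foreign key check appears as a seek or a scan.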


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/