• Our routines which delete large amounts of data do the following (a T-SQL sketch of the whole pattern is at the end of this list):

    For all records to be deleted, get the Clustered Index keys into a #TEMP table, ordered by the clustered index and containing an IDENTITY column.

    (We find the selection query is often complex - e.g. a date range plus an OK-to-delete flag - and deleting batches with those criteria directly in the WHERE clause means that the query time, incurred before each delete, becomes a limiting factor on every iteration, perhaps made worse by the index fragmenting during the delete.)

    Delete in batches, JOINing the #TEMP table to the actual table on the clustered index key and restricting each batch to a range of IDENTITY values (the range is advanced by the delete batch size on each loop iteration).

    Use WAITFOR DELAY to pause for a couple of seconds so that other processes get a chance to run.

    LOOP

    On critical processes we also measure the elapsed time of each DELETE; if it exceeds a maximum threshold we halve the batch size, and if the elapsed time is quick we increase the batch size by 20%.

    For databases in Full Recovery Mode we schedule Log Backups to run every minute for the duration of the delete, so that the delete operation does not cause the log file to grow.
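
    As promised above, here is a minimal sketch of the pattern. The table, columns and thresholds (dbo.OrderHistory, OrderID, OrderDate, OkToDelete, the 10,000-row starting batch, the 5-second limit) are placeholders for illustration only, and error handling is omitted:

        -- Sketch of the batched-delete routine described above.
        -- Assumes a hypothetical table dbo.OrderHistory clustered on OrderID.
        SET NOCOUNT ON;

        DECLARE @BatchSize  int = 10000;   -- starting batch size
        DECLARE @MaxSeconds int = 5;       -- "too slow" threshold for one DELETE
        DECLARE @FromID     int = 1;
        DECLARE @MaxID      int;
        DECLARE @Start      datetime2(3);

        -- 1) Capture the clustered index keys of all rows to delete, in
        --    clustered index order, with an IDENTITY column for batching.
        CREATE TABLE #DeleteKeys
        (
            RowNo   int IDENTITY(1,1) PRIMARY KEY,
            OrderID int NOT NULL    -- clustered index key of the target table
        );

        INSERT INTO #DeleteKeys (OrderID)
        SELECT OrderID
        FROM   dbo.OrderHistory
        WHERE  OrderDate < DATEADD(YEAR, -7, GETDATE())  -- the "complex" criteria
          AND  OkToDelete = 1
        ORDER BY OrderID;

        SELECT @MaxID = MAX(RowNo) FROM #DeleteKeys;

        -- 2) Delete in batches, joining the key table to the real table and
        --    restricting each pass to a range of IDENTITY values.
        WHILE @FromID <= @MaxID
        BEGIN
            SET @Start = SYSDATETIME();

            DELETE O
            FROM   dbo.OrderHistory AS O
                   JOIN #DeleteKeys  AS K ON K.OrderID = O.OrderID
            WHERE  K.RowNo >= @FromID
              AND  K.RowNo <  @FromID + @BatchSize;

            SET @FromID = @FromID + @BatchSize;

            -- 3) Adapt the batch size: halve it if the DELETE was slow,
            --    grow it by 20% if it was quick.
            IF DATEDIFF(SECOND, @Start, SYSDATETIME()) > @MaxSeconds
                SET @BatchSize = CASE WHEN @BatchSize > 1
                                      THEN @BatchSize / 2 ELSE 1 END
            ELSE
                SET @BatchSize = @BatchSize + (@BatchSize / 5);

            -- 4) Brief pause so other processes get a chance to run.
            WAITFOR DELAY '00:00:02';
        END;

        DROP TABLE #DeleteKeys;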
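
    On the log backup point: the backup itself is just an ordinary transaction log backup run on a tight (every-minute) schedule, e.g. from a SQL Server Agent job, while the purge is active. The database name and path below are placeholders:

        -- Hypothetical job step run every minute while the purge is active.
        BACKUP LOG MyDatabase
            TO DISK = N'X:\LogBackups\MyDatabase_log.trn'
            WITH NOINIT;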