• Delete performance depends on two factors: how quickly you can identify the rows to be deleted, and how quickly you can delete those rows. Make sure you're testing this on a non-production system.

    The first part is easy to measure - just change the "DELETE FROM ..." to "SELECT * FROM ..." and keep the same WHERE clause. If that returns quickly, your bottleneck isn't in the row-identification phase.
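    For example, measuring just the row-identification cost of a hypothetical purge (the table name and WHERE clause below are placeholders for your own):

        -- Same predicate as the intended DELETE, but read-only.
        SELECT COUNT(*)
        FROM dbo.AuditLog
        WHERE CreatedDate < '2015-01-01';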

    The deleting phase does more work: each row must be marked as deleted, and the previous version of the row must be written to the transaction log in case of a rollback. The problem with deleting many rows in a single statement is the amount of locking involved; once enough row locks accumulate, the lock can escalate to the entire table and block other users.
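    If you want to see that locking as it happens, you can watch the delete's locks from another session using a SQL Server DMV (the session id 53 below is just an example):

        -- Summarise the locks held by the session running the DELETE.
        SELECT resource_type, request_mode, COUNT(*) AS lock_count
        FROM sys.dm_tran_locks
        WHERE request_session_id = 53
        GROUP BY resource_type, request_mode;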

    Instead of deleting 100,000 rows at once, consider deleting in smaller batches. Larger batches increase the chance of the entire table being locked, so a smaller batch of 5,000-10,000 rows may perform much better. Each transaction is shorter, so other users are blocked for shorter periods, and the transaction log can be cleared (backed up under the Full recovery model, truncated automatically under Simple) after each batch.
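    A minimal batching sketch in T-SQL (the table, predicate and batch size are assumptions to swap for your own):

        DECLARE @BatchSize int = 5000;

        WHILE 1 = 1
        BEGIN
            -- Each iteration is its own short transaction.
            DELETE TOP (@BatchSize)
            FROM dbo.AuditLog
            WHERE CreatedDate < '2015-01-01';

            IF @@ROWCOUNT = 0
                BREAK;   -- nothing left to delete

            -- Optionally back up / checkpoint the log and WAITFOR DELAY here
            -- to let other sessions get their work in between batches.
        END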

    Note that delete triggers, or any foreign key relationships that need to be checked or cascaded, can also cause performance problems if the referencing columns aren't well indexed.
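    For example, if you delete from a parent table that a child table references, an index on the referencing column keeps each foreign key check from scanning the child table (the names here are purely illustrative):

        CREATE INDEX IX_Orders_CustomerId
            ON dbo.Orders (CustomerId);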

    The most important thing, though, is to test. If you let a delete run for 4 hours and then cancel it, it may take another 4 hours to roll back!