• 2) Speaking of blocking, this type of operation REALLY needs to be done against an index that ALWAYS gets you a SEEK plan and NEVER escalates to a table lock (unless, obviously, you have a system with no concurrent activity).

    Both tables do contain appropriate indexes, so when the OPTIMIZE FOR hint is used SQL Server picks a MERGE JOIN.
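    In sketch form, the delete loop looks something like this (a sketch only - the table and column names stand in for the real schema, and the value in OPTIMIZE FOR is illustrative):

        DECLARE @BatchSize int = 50000,
                @Rows      int = 1;

        WHILE @Rows > 0
        BEGIN
            DELETE TOP (@BatchSize) t
            FROM dbo.FactTable AS t
            JOIN dbo.DriverTable AS d
                ON d.KeyCol = t.KeyCol                   -- both sides indexed on KeyCol
            OPTION (OPTIMIZE FOR (@BatchSize = 50000));  -- compile for a fixed batch size;
                                                         -- the plan comes out as a MERGE JOIN here
            SET @Rows = @@ROWCOUNT;
        END;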

    3) Another issue with TOP @var is that you don't know when you need to stop and back up the transaction log. That is another thing to watch out for, obviously - filling it up and/or causing it to grow uncontrollably.

    Yes, this is always a concern, but we have automatic LOG backups running on a schedule, and we are making sure there is sufficient space available for them to hold all the data that will flow into them.
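    As an extra guard, the loop can also check log usage each pass and yield to the scheduled backup - a minimal sketch, assuming SQL Server 2012+ for sys.dm_db_log_space_usage (the 70 percent threshold and the delay are illustrative):

        DECLARE @LogUsedPct float;

        SELECT @LogUsedPct = used_log_space_in_percent
        FROM sys.dm_db_log_space_usage;     -- log usage for the current database

        IF @LogUsedPct > 70.0
            WAITFOR DELAY '00:05:00';       -- pause so the scheduled LOG backup can catch up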

    4) I have done this type of operation at least dozens of times at clients over the years. I don't think I have ever used a driver table with a join the way you are. I have used some form of key (like the identity here, a date field, etc.) to seek into my range, with a simple variable to drive the loop. Then I have a very simple WHERE clause such as:
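    For illustration, a sketch of that key-driven pattern (the table, the ID identity column, and the boundary values here are all hypothetical):

        DECLARE @CurrentID bigint,
                @StopID    bigint = 123456789,   -- upper bound of the rows to purge
                @BatchSize bigint = 50000;

        SELECT @CurrentID = MIN(ID) FROM dbo.BigTable;

        WHILE @CurrentID < @StopID
        BEGIN
            DELETE FROM dbo.BigTable
            WHERE ID >= @CurrentID
              AND ID <  @CurrentID + @BatchSize  -- simple, seekable range predicate
              AND ID <  @StopID;                 -- don't overshoot the boundary

            SET @CurrentID += @BatchSize;
        END;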

    I do understand that driving a delete with a control table like this is somewhat odd. The reason is that the criteria for determining which rows should be deleted are expensive to evaluate and are not part of the PK - in fact, they are not even in the tables I'm deleting from. However, even though the criteria are expensive to query, they are consistent across all the tables that need data removed, so I gather them once, index them in the driver tables, and then use those to control the actual deletes. If you have a better way of doing this, I would love to simplify it.
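    In outline, that looks like this (a sketch - every object name is a stand-in for the real schema):

        -- 1) Run the expensive criteria query once and index the result.
        SELECT DISTINCT s.KeyCol
        INTO #Driver
        FROM dbo.ExpensiveCriteriaSource AS s
        WHERE s.EligibleForPurge = 1;            -- the expensive-to-evaluate predicate

        CREATE UNIQUE CLUSTERED INDEX IX_Driver ON #Driver (KeyCol);

        -- 2) Reuse the same driver table against each table that needs rows removed.
        DECLARE @BatchSize int = 50000;

        WHILE 1 = 1
        BEGIN
            DELETE TOP (@BatchSize) t
            FROM dbo.ChildTable1 AS t
            JOIN #Driver AS d
                ON d.KeyCol = t.KeyCol;

            IF @@ROWCOUNT = 0 BREAK;
        END;
        -- ...then the same loop for ChildTable2, ChildTable3, and so on.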