• Talking more about situation three.

    First, just to be completely pedantic, step 4 in your example seems to have the table names reversed: instead of copying the latest data back to the Call_Event_Log table, you're copying data from the (now empty) Call_Event_Log table into Call_Event_Log_Archive again. Of course, since no one else has mentioned this, maybe I'm badly misreading things.

    Second, where there is a need to archive data once, there is usually a need to do it again. Your process is a one-time solution to what will probably become a recurring problem.

    You could continue by inserting the current data from Call_Event_Log into Call_Event_Log_Archive as you've done in this instance. However, as the archive table grows larger and larger, I believe you will hit a point where searching it for the records to be copied back and then removed takes much longer than simply deleting the unwanted records in the first place. With proper indexing this might not be significantly worse for some time, but I believe it would happen eventually.
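    For comparison, the "simply deleting the records in the first place" alternative is a single statement against the live table. This is only a sketch; the EventDate column and the six-month retention window are assumptions, so adjust them to the actual schema:

```sql
-- Direct approach: remove old rows in place, with no round trip
-- through the archive table. Assumes an indexed datetime column
-- named EventDate and a six-month retention window.
BEGIN TRANSACTION;

DELETE FROM Call_Event_Log
WHERE EventDate < DATEADD(month, -6, GETDATE());

COMMIT TRANSACTION;
```

    The trade-off is exactly the one discussed above: this locks Call_Event_Log for the duration of the delete, but it never has to search an ever-growing archive table.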

    Of course, if one of the concerns is how quickly you can release Call_Event_Log for use, your method might still work well if the Call_Event_Log data were always written to a temp table instead of to the full archive table. Restore the data to be kept from the temp table to Call_Event_Log first, commit, then write the data to be moved into Call_Event_Log_Archive, and finally drop the temp table. This would almost certainly be slower than your solution overall (the archived data is written twice), but you only ever lock one table at a time, keeping things flowing as freely as possible. As you point out, the time during which critical resources are locked can matter far more than the total time an operation takes to complete.
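    The temp-table variant I'm describing might be sketched like this. Again, the EventDate column and the cutoff are assumptions standing in for whatever criteria you actually archive on:

```sql
-- Temp-table variant: minimize the time Call_Event_Log is unavailable.
-- EventDate and the six-month cutoff are placeholder assumptions.
DECLARE @CutOff datetime;
SET @CutOff = DATEADD(month, -6, GETDATE());

-- 1. Copy everything out and empty the live table as quickly as possible.
SELECT * INTO #Call_Event_Log_Work FROM Call_Event_Log;
TRUNCATE TABLE Call_Event_Log;

-- 2. Restore the rows to keep, then commit so the live table is free again.
BEGIN TRANSACTION;
INSERT INTO Call_Event_Log
SELECT * FROM #Call_Event_Log_Work WHERE EventDate >= @CutOff;
COMMIT TRANSACTION;

-- 3. Move the older rows into the archive at leisure, then clean up.
INSERT INTO Call_Event_Log_Archive
SELECT * FROM #Call_Event_Log_Work WHERE EventDate < @CutOff;

DROP TABLE #Call_Event_Log_Work;
```

    Note that the archived rows are written twice (once to the temp table, once to the archive), which is the extra cost mentioned above; in exchange, Call_Event_Log is locked only for steps 1 and 2, and the archive table only for step 3.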


    R David Francis