• The easiest way seems to be looping based on an identity key. I've broken up the transaction into groups of 50,000 rows, and a CHECKPOINT is issued manually (just to be sure) after each load. The first part of the transaction seems to take up the most log space, at slightly over 1G. The remaining data groups seem to take only a few hundred megs of log space. It's certainly a great deal better than 9G. Thank you all for your help.
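The batching approach described above could be sketched roughly like this; all table, column, and variable names here are placeholders I've made up for illustration, not anything from the actual procedure:

```sql
-- Hedged sketch: load in 50,000-row chunks keyed on an identity
-- column, issuing a CHECKPOINT after each batch so the log can be
-- reused under SIMPLE recovery. Object names are hypothetical.
DECLARE @BatchSize int = 50000;
DECLARE @MinId int, @MaxId int, @CurrentId int;

SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM dbo.SourceTable;
SET @CurrentId = @MinId;

WHILE @CurrentId <= @MaxId
BEGIN
    INSERT INTO dbo.TargetTable (Id, Payload)
    SELECT Id, Payload
    FROM dbo.SourceTable
    WHERE Id >= @CurrentId
      AND Id <  @CurrentId + @BatchSize;

    CHECKPOINT;  -- flush dirty pages; lets the log truncate between batches

    SET @CurrentId = @CurrentId + @BatchSize;
END
```

Because each batch is its own implicit transaction, the log only ever has to hold one batch's worth of changes at a time, which is why the peak usage drops from 9 GB to roughly the size of the largest single batch.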

    An interesting thing I've learned from this - the dramatic effect of file resizes on query performance. Typically, the stored procedure runs in 6-9 minutes (when there is enough log space to accommodate it). When I was forcing it to grow the log (1G at a time), it took 16-20 minutes, and that's just 8 grow events. Imagine if my growth increment were set to 10%! I could be waiting an hour for that to complete. File resizes truly do kill a query. You will notice the difference.
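The usual way to avoid paying that growth penalty mid-query is to pre-size the log once, up front, and set a fixed growth increment instead of a percentage. A minimal sketch, assuming a database named MyDb with a log file named MyDb_log (both placeholder names):

```sql
-- Hedged sketch: grow the log to its expected working size in one
-- operation, and use a fixed increment for any further autogrowth.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 2048MB, FILEGROWTH = 512MB);
```

A fixed increment keeps each growth event a predictable size, whereas a percentage increment grows by larger and larger amounts as the file gets bigger.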