The only non-clustered indexes are on the tables used as input (the SELECTs). The tables that have rows updated or inserted each have a single key field as an index. The table that will receive the 1 million inserted rows has an IDENTITY on its key field. Since the key field increments by 1 for each insert, I did not think there would be any page splits.
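For illustration, here is roughly how I picture the target table (the table and column names are made up, and I am assuming the key index is clustered):

-- Hypothetical target table. With the clustered index on the
-- IDENTITY key, each new row is appended to the last page, which
-- is why I was not expecting page splits.
CREATE TABLE dbo.TargetTable (
    RowID    INT IDENTITY(1,1) NOT NULL,
    SomeData VARCHAR(100)      NOT NULL,
    CONSTRAINT PK_TargetTable PRIMARY KEY CLUSTERED (RowID)
)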
DTS is not an option.
A rewrite to C is not an option; this program is a one-time run.
I have not been executing any explicit commits. Would that help? There are no triggers.
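If batching commits would help, I am guessing it would look something like this; the table is the hypothetical one above, and the batch size of 500 is arbitrary:

-- A minimal sketch of batched commits: commit every 500 rows so the
-- transaction log space can be reused, instead of paying for one
-- implicit transaction per insert.
DECLARE @i INT
SET @i = 0
BEGIN TRANSACTION
WHILE @i < 1000000
BEGIN
    INSERT INTO dbo.TargetTable (SomeData) VALUES ('example row')
    SET @i = @i + 1
    IF @i % 500 = 0
    BEGIN
        COMMIT TRANSACTION
        BEGIN TRANSACTION
    END
END
COMMIT TRANSACTION  -- commit whatever is left after the loop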
I have to use ADO Recordsets as cursors to sequentially process the input tables. The cursors open quickly, but as I mentioned, the processing slows down as it goes along.
I don't think the database is pre-sized. We have a very small shop here: two engineers and no DBA.
Clavin, how can I "throttle" a Stored Procedure?
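If "throttle" just means pausing between batches so other work can get through, the only mechanism I know of inside T-SQL is WAITFOR DELAY; a sketch, with an arbitrary 100 ms interval:

-- Hypothetical throttle: a short pause per batch so the procedure
-- does not monopolize the disk and the log writer.
WAITFOR DELAY '00:00:00.100'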
On another note, my last test ran for 28 hours. There were two changes in this test: one, I put the tables the script uses for input in another database on a different disk controller; two, I executed the Stored Procedure (sp_truncate_log) every 500 inserts.
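My assumption (I have not read the proc's body) is that sp_truncate_log is just a wrapper around the built-in command, run against our database; the database name below is a placeholder:

-- What I believe sp_truncate_log does under the hood (assumption):
-- discard the inactive portion of the log without backing it up.
BACKUP LOG MyNewDatabase WITH TRUNCATE_ONLY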
When this script had run for about an hour, it had inserted 100,000 rows; at that rate, 1,100,000 rows should have taken about 11 hours. In the end it took 28 hours.
Thanks for all of the suggestions and feedback.
I'm going to execute a SELECT every 50,000 inserts and change the recovery model to Simple.
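If I have the syntax right, the recovery model change is just this (database name is a placeholder):

-- Simple recovery lets the log truncate on checkpoint, so I should
-- be able to stop calling sp_truncate_log every 500 inserts.
ALTER DATABASE MyNewDatabase SET RECOVERY SIMPLE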
Would it help to create the database with larger allocations? It's a new DB for a new application, so I have this option. Right now the data file is 208 MB used and 218 MB free, and the log file is 796 MB used and 367 MB free.
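If pre-sizing would help, I am guessing it looks something like this; the logical file names and sizes are guesses for illustration:

-- Hypothetical pre-sizing: grow both files once, up front, instead
-- of paying for repeated autogrow during the load.
ALTER DATABASE MyNewDatabase
    MODIFY FILE (NAME = MyNewDatabase_Data, SIZE = 1000MB)
ALTER DATABASE MyNewDatabase
    MODIFY FILE (NAME = MyNewDatabase_Log, SIZE = 1500MB)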