First of all, I'm sorry that I'm not allowed to paste any code from the project I'm working on, but I'll try to make my case as clear as possible.
We have a central table, let's call it Hub, that has more than a billion records. Around this table there are some ten Spoke tables, each having a foreign key to the primary key of the Hub table. These Spoke tables can have anything from a few hundred to billions of records. The row size in each table is approximately 100 bytes, with a composite primary key of eight or twelve bytes.
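For concreteness, here is a minimal sketch of the kind of schema I mean. All table and column names below are made up for illustration; the real tables obviously differ:

```sql
-- Hypothetical schema sketch; real names, types, and key columns differ.
CREATE TABLE Hub (
    HubKey1 int      NOT NULL,
    HubKey2 int      NOT NULL,
    Payload char(92) NOT NULL,  -- rows are roughly 100 bytes
    CONSTRAINT PK_Hub PRIMARY KEY CLUSTERED (HubKey1, HubKey2)
);

CREATE TABLE Spoke1 (
    SpokeKey1 int    NOT NULL,
    SpokeKey2 int    NOT NULL,
    HubKey1   int    NOT NULL,
    HubKey2   int    NOT NULL,
    CONSTRAINT PK_Spoke1 PRIMARY KEY CLUSTERED (SpokeKey1, SpokeKey2),
    CONSTRAINT FK_Spoke1_Hub FOREIGN KEY (HubKey1, HubKey2)
        REFERENCES Hub (HubKey1, HubKey2)
);
-- ...and some ten similar Spoke tables, all referencing Hub.
```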
Now, what I am implementing is a routine that copies data from these tables to other tables for archival. Most commonly this is followed by a delete of the archived data. Deleting from the Spoke tables performs just fine, but deleting from the Hub table takes ages when we delete a substantial number of records. I've identified that the delete is causing a clustered index scan on each Spoke table to verify that referential integrity is maintained. I'm currently creating indexes on the foreign key columns that reference the Hub table to speed up the deletes, but is there anything else I can do?
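The supporting indexes I'm creating look roughly like this (index and column names are placeholders matching the sketch above). The idea is to give the engine a narrow nonclustered index to seek into when it validates the foreign key during the delete on Hub, instead of scanning each Spoke table's clustered index:

```sql
-- Hypothetical index, one per Spoke table, on the columns that
-- make up the foreign key to Hub's composite primary key.
CREATE NONCLUSTERED INDEX IX_Spoke1_Hub
    ON Spoke1 (HubKey1, HubKey2);
```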
Unfortunately, statistics cannot be updated within a transaction; otherwise I would have tried updating the statistics on the Spoke tables to see whether that improved performance.
Ole Kristian Velstadbråten Bangås - Virinco
Concatenating Row Values in Transact-SQL