    Without all the details, what I'm about to say should be taken with a large grain of salt...

    I'd be inclined to do any updates directly against the table in question, even if you're doing a JOIN to other tables as part of that update. Your current approach is to duplicate the data, and then some. I presume, to a degree, you're denormalizing data into this temp table? So you're creating this massive hit on the system to store a copy of the existing table, plus extra, and you're doing it in one of the most contentious locations in SQL Server, tempdb. Everything uses tempdb, from sort operations to hash joins to versioning for snapshot isolation to... And you're fighting with all of that to move a very large table into tempdb. Then you're going to leave that data in tempdb while you do a bunch of updates, which means additional writes and additional contention for resources, and then, I presume again, you're using this big denormalized table to write back to the original table? What are you saving in this operation? JOINs. It sure sounds like we're spending tons of resources to avoid something that SQL Server, as a relational database engine, is pretty good at.
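
    If the goal is simply to bring values from other tables into this one, a single set-based UPDATE with a JOIN does that without the round trip through tempdb. This is only a rough sketch, using purely hypothetical table and column names (dbo.BigTable, dbo.Lookup, SomeValue), to show the shape of the statement:

    -- Hypothetical names throughout; the point is the shape of the statement,
    -- not the specific schema.
    UPDATE bt
    SET    bt.SomeValue = l.SomeValue      -- pull the value through the JOIN at update time
    FROM   dbo.BigTable AS bt
    JOIN   dbo.Lookup   AS l
           ON l.LookupID = bt.LookupID
    WHERE  bt.SomeValue IS NULL;           -- whatever predicate identifies the rows to change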

    Yes, if you're updating hundreds of millions of rows, you're probably better off batching that process to reduce the amount of locking you're incurring on the table at any one time and to keep the log file size down. However, I'd do this work directly against the table in question and do the JOINs as needed. Assuming your system is relatively well put together, it should handle this fine. If it's not, that's where I'd focus my efforts, not on attempting to avoid JOINs.
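
    For the batching itself, a common pattern is UPDATE TOP (n) in a loop that stops when there's nothing left to change. Again, just a sketch with the same hypothetical names as above; the batch size, the filtering predicate, and how you manage the log between batches all depend on your system:

    DECLARE @BatchSize int = 50000;

    WHILE 1 = 1
    BEGIN
        -- Update one batch directly against the table, JOINing as needed.
        UPDATE TOP (@BatchSize) bt
        SET    bt.SomeValue = l.SomeValue
        FROM   dbo.BigTable AS bt
        JOIN   dbo.Lookup   AS l
               ON l.LookupID = bt.LookupID
        WHERE  bt.SomeValue IS NULL;       -- rows already updated fall out of the predicate

        IF @@ROWCOUNT = 0
            BREAK;                         -- no rows left to update

        -- In simple recovery, a CHECKPOINT here (or a log backup in full recovery)
        -- helps keep the log from growing between batches.
    END;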

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning