Ok. That's a bit different from what you originally posted, and I somewhat agree with the premise (it's called "Divide'n'Conquer"), although I question why, even with 100 million rows, it would be so arduous. Something is wrong there and I'd seriously be tempted to figure out what it is and fix it instead of doing a workaround like this.
That notwithstanding, it also looks like this is a once-a-day thing and there will only be one run doing it at a time, so we don't have to worry about contention between two or more simultaneous runs.
Also understand that I don't know how many rows you end up with in either #temp1 or #temp2, what the total byte count for either is, or what condition your TempDB database is in for usage.
As your code is written in your last post above, #temp1 already contains only those IDs that meet the criteria of the WHERE clause, so you shouldn't actually need a WHERE clause on the INSERT into #temp2. It's just an added complication for the optimizer and will burn more CPU unnecessarily.
I would, however, make sure that #temp1 has a Unique Clustered Index on the ID column provided, of course, that the IDs are actually unique. If they aren't, you're going to have some problems with duplication and will also still need the WHERE clause I just told you that you could get rid of.
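For example, something like the following (the table and column names are just placeholders for whatever you actually have):

```sql
-- Enforce uniqueness and provide the "order" the insert below will rely on.
-- If the IDs turn out not to be unique, this will fail and tell you so
-- before you get silent duplication further downstream.
CREATE UNIQUE CLUSTERED INDEX CI_temp1_ID
    ON #temp1 (ID);
```

The index name is arbitrary; pick whatever naming convention you normally use for temp objects.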
Since both of these tables reside in TempDB, which is in the SIMPLE Recovery Model, the use of WITH (TABLOCK), along with the "order" provided by the indexes on the ID columns, will allow you to achieve "Minimal Logging" on your inserts, which is roughly twice as fast as a fully logged insert. Inserting with the Clustered Index already present on the temp table is sometimes faster than inserting into a heap and then adding the Clustered Index afterwards. "It Depends", so you'll need to test both methods.
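A sketch of what that minimally logged insert would look like, assuming #temp2 already has its clustered index on ID (again, names are placeholders):

```sql
-- WITH (TABLOCK) + SIMPLE recovery (TempDB) + rows arriving in
-- clustered-key order are the usual prerequisites for minimal logging.
INSERT INTO #temp2 WITH (TABLOCK)
       (ID)
SELECT ID
  FROM #temp1
 ORDER BY ID;  -- matches the clustered index key on #temp2
```

If you test the "insert into a heap, then build the index" variant instead, drop the ORDER BY for the heap load and build the clustered index after the fact, then compare durations and log usage.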
If you have a LOT of data (forget the row count... I'm talking byte count), you might overwhelm TempDB during the run. What you could do is make a "scratch" database in the SIMPLE Recovery Model and use that instead of TempDB with the understanding that everything in the "scratch" database is only temporary and that the database should never be backed up. You should also be diligent in truncating and dropping any such work tables you place in the "scratch" database.
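The one-time setup for such a "scratch" database is trivial. The name and file settings here are just assumptions; size the files for your actual byte counts:

```sql
-- Create a throwaway work database and make sure it's in the
-- SIMPLE Recovery Model so the work tables don't bloat a log
-- you'd never want to back up anyway.
CREATE DATABASE Scratch;
ALTER DATABASE Scratch SET RECOVERY SIMPLE;
-- Work tables then live in Scratch.dbo instead of being #temp tables.
-- Truncate and drop them when the run finishes.
```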
In your code above, you'll need to create the "b" alias on #temp2.
So, to summarize, the code you have above looks like it'll work. If it overwhelms TempDB (either through contention or by causing it to grow too much), you could create a "scratch" database that uses the SIMPLE Recovery Model. That will be a little slower than TempDB, because TempDB starts out in memory and only spills to disk when something gets too big; I don't know what the threshold for that is.
I'd also take a look at why you need to do this workaround in the first place. There's usually something wrong when you have to resort to such trickery. For example, I'm thinking that a filtered index on the big tables might work a treat. I'd also look to see whether the big tables have any in-row LOBs, or some silly-long variable-width columns that aren't of a MAX datatype, that haven't been bumped to be out of row... that slows down just about everything and can also produce a very low average page density (which uses a LOT of extra memory) that cannot be improved with index maintenance because of "short trapped pages" that can be as little as 1% full and will never get any fuller.
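To make those two suggestions concrete, here's the general shape of each. The table, column, and predicate below are pure assumptions; substitute whatever your daily WHERE clause actually filters on:

```sql
-- Hypothetical filtered index: only rows matching the predicate are
-- indexed, so a daily query with the same predicate can seek a tiny
-- index instead of scanning 100 million rows.
CREATE INDEX IX_BigTable_Active
    ON dbo.BigTable (ID)
 WHERE SomeStatus = 'Active';

-- Push MAX-datatype LOB values out of row. Note this only affects
-- values as they are inserted or next updated; existing in-row
-- values don't move until they're touched.
EXEC sp_tableoption 'dbo.BigTable', 'large value types out of row', 1;
```

Test the filtered index against your real predicate first; if the predicate isn't stable from day to day, a filtered index won't help.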