You can do this with no chance of deadlocks or even blocking, AND the total "offline" time will be measured in milliseconds.
You're replacing the entire table each time, so create two tables. For example, if your table was originally created as "SomeTable", drop it and create two tables: one called "SomeTableA" and another called "SomeTableB". Populate "SomeTableA" as you normally would and then create a SYNONYM called "SomeTable" (the name of the original table) and point it at the now-populated "SomeTableA".
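A minimal sketch of that initial setup, assuming a simple two-column table (the column names and the "SomeSource" table are illustrative, not from the original post):

```sql
-- Create the two base tables; "SomeTable" itself will only ever be a synonym.
CREATE TABLE dbo.SomeTableA (SomeID INT NOT NULL, SomeValue VARCHAR(50));
CREATE TABLE dbo.SomeTableB (SomeID INT NOT NULL, SomeValue VARCHAR(50));

-- Load the first copy of the data (dbo.SomeSource is a hypothetical source).
INSERT INTO dbo.SomeTableA (SomeID, SomeValue)
SELECT SomeID, SomeValue
  FROM dbo.SomeSource;

-- Point the public name at the populated table.
-- All existing code that references dbo.SomeTable keeps working unchanged.
CREATE SYNONYM dbo.SomeTable FOR dbo.SomeTableA;
```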
When you need to update the table with new data, populate "SomeTableB". You don't need a transaction for this. If you do it right, it can even be a minimally logged evolution. Once you've verified that "SomeTableB" has been correctly populated, drop the "SomeTable" synonym and immediately rebuild it but pointing to the most recent table, which is "SomeTableB". You can then truncate the now unused "SomeTableA". Total downtime will be measured in milliseconds.
When it comes time for the next update, simply reverse the process. Populate the currently empty "SomeTableA", repoint the synonym as previously described, then truncate "SomeTableB".
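The swap itself can be sketched like this (again assuming the illustrative names above). The DROP/CREATE pair on the synonym is the only "offline" moment, and it takes only milliseconds:

```sql
-- Load the currently idle table with the new data while readers
-- continue to use dbo.SomeTable (still pointing at SomeTableA).
INSERT INTO dbo.SomeTableB (SomeID, SomeValue)
SELECT SomeID, SomeValue
  FROM dbo.SomeSource;

-- Repoint the synonym. This is the only "downtime".
DROP SYNONYM dbo.SomeTable;
CREATE SYNONYM dbo.SomeTable FOR dbo.SomeTableB;

-- Clear the old copy so it's ready for the next round.
TRUNCATE TABLE dbo.SomeTableA;
```

For the next update, repeat with the A and B names exchanged.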
Wash, rinse, repeat as time wears on.
RBAR is pronounced "ree-bar" and is a "Modenism" for "Row By Agonizing Row".
First step towards the paradigm shift of writing Set Based code:
________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
"Change is inevitable... change for the better is not".
"If "pre-optimization" is the root of all evil, then what does the resulting no optimization lead to?"
How to post code problems
How to Post Performance Problems
Create a Tally Function (fnTally)