I've thought of creating a view that points to the table. You could then populate another table with the new information and update the view to point to the new table. Once all the queries against the old table are done running, the old table can be deleted. I've never tried this, so I'm curious what the experts on this forum have to say. Does the view get locked while a query is running against it? Are there any downsides to my proposed solution?
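For reference, the proposed swap would read something like this (a minimal sketch; the table and view names are mine, not from the original post):

```sql
-- Hypothetical names: queries read through dbo.StockPrices, which currently
-- selects from dbo.StockPrices_A. Load the new data into a second table...
CREATE TABLE dbo.StockPrices_B (Symbol varchar(10), Price money, AsOf datetime);
-- ... bulk load dbo.StockPrices_B here ...

-- ...then repoint the view at the freshly loaded table:
ALTER VIEW dbo.StockPrices
AS
SELECT Symbol, Price, AsOf
FROM dbo.StockPrices_B;
GO

-- Once no sessions are still reading the old table:
DROP TABLE dbo.StockPrices_A;
```

The catch, as discussed below, is that the ALTER VIEW has to wait on whatever locks readers are holding.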
In this case, any running query takes a schema stability (Sch-S) lock on the view and the underlying tables. That negates any chance you have of altering the view, or of renaming a table, while any component is using it.
To completely avoid dirty data and locking issues in a case like this, you'd have to do the equivalent of a drainstop (gracefully draining existing readers before switching), which you can't do in SQL Server directly. This is one of the few cases where I allow for limited dynamic SQL, because you have to actually alter the call to the database, and leave a hanging transaction to 'deactivate' the older version once all the locks have cleared.
Hugo is correct: the best solution in cases like this is snapshot isolation. It's just not always feasible, particularly if you're limited in TempDB, since that's where the row version store lives. This is one of those 'events' that bit me in the arse once or twice along the way, and when I wrote up last week's truncation question this seemed like a good one to present as well.
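For anyone who hasn't used it, here's a minimal sketch (database and table names invented for illustration):

```sql
-- Enable snapshot isolation at the database level. Row versions are kept
-- in TempDB, which is why TempDB capacity matters here.
ALTER DATABASE StockFeed SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A reader then sees a consistent snapshot of the data as of the start of
-- its transaction, even while the hourly reload is rewriting the table:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Symbol, Price FROM dbo.StockPrices;
COMMIT;
```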
However, to go back to the root of the problem: in most scenarios where I run into this, I look for an alternate solution entirely. Flushing a table once an hour usually means you're reloading from an external feed, say, current stock ticker prices. There are alternate ways to store this data without stepping on yourself. One of my preferred methods is to use a version key on the new data while allowing the old to continue to exist, plus a settings table that tells all views and queries which data is most current.
So you import new data into the same structure, assign it a key, and when the load completes you update the record in the settings table that tells all queries (via a WHERE or ON clause) which set of data to use. In the final step of the import you clear out any data not in the last two versions (or archive it, or whatever), so you don't end up with a monster archive table in your active areas.
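The pattern above, sketched out (all object names are hypothetical):

```sql
-- One settings row per versioned table, naming the currently active version.
CREATE TABLE dbo.ImportSettings
    (TableName sysname PRIMARY KEY, CurrentVersion int NOT NULL);
CREATE TABLE dbo.StockPrices
    (VersionKey int NOT NULL, Symbol varchar(10), Price money);

-- Readers go through a view keyed off the settings row, so old data stays
-- readable right up until the flip:
GO
CREATE VIEW dbo.CurrentStockPrices
AS
SELECT p.Symbol, p.Price
FROM dbo.StockPrices p
JOIN dbo.ImportSettings s
    ON s.TableName = 'StockPrices'
   AND p.VersionKey = s.CurrentVersion;
GO

-- The hourly import:
DECLARE @NewVersion int =
    (SELECT CurrentVersion + 1 FROM dbo.ImportSettings
     WHERE TableName = 'StockPrices');

-- 1) Load the new feed under the next version key.
INSERT dbo.StockPrices (VersionKey, Symbol, Price)
SELECT @NewVersion, Symbol, Price
FROM dbo.StockPrices_Staging;

-- 2) Flip all readers to the new set with one single-row update.
UPDATE dbo.ImportSettings
SET CurrentVersion = @NewVersion
WHERE TableName = 'StockPrices';

-- 3) Trim anything older than the last two versions.
DELETE dbo.StockPrices WHERE VersionKey < @NewVersion - 1;
```

The flip in step 2 is a single-row update, so readers switch atomically without any ALTER VIEW or table rename.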
However, you don't always have time to completely rebuild an existing import process when you're looking for 'quick hits' to help optimize the system. Swapping a DELETE FROM with no WHERE clause for a TRUNCATE TABLE statement is an easy and effective way to knock out a good chunk of log-file load. I find it's good to know the pitfalls of easy fixes. ;-)
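The quick hit itself, for completeness (staging table name invented):

```sql
-- Fully logged: every deleted row hits the transaction log.
DELETE FROM dbo.StockPrices_Staging;

-- Minimally logged: only the page deallocations are logged, and it also
-- resets any identity seed. It requires ALTER permission on the table and
-- won't run if the table is referenced by a foreign key.
TRUNCATE TABLE dbo.StockPrices_Staging;
```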
- Craig Farrell
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake. For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables | Twitter: @AnyWayDBA