February 13, 2012 at 5:04 am
Nice one, thank you
Iulian
February 13, 2012 at 5:56 am
cengland0 (2/13/2012)
I've thought of creating a view that points to the table. Then, you can populate another table with the new information and then update the view to point to the new table. Once all the queries are done running on the old dataset table, the old table can then be deleted. I've never tried this so I'm curious what the experts on this forum have to say about this. Does the view get locked if a query is running it? Any negative sides to my proposed solution?
In this case, the view and the underlying tables will end up with a Sch-S lock. This will negate any chance you have of either altering the view or performing a table rename while any component is using it.
To completely avoid dirty data and locking issues in a case like this, you'll have to do the equivalent of a drainstop, which you can't do in SQL Server directly. This is one of the few cases where I allow for limited Dynamic SQL, because you'll have to actually alter the call to the database, and leave a hanging transaction to 'deactivate' the older version once all the locks have cleared.
Hugo is correct, the best solution in cases like this is Snapshot isolation. It's just not always feasible, particularly if you're limited in TempDB. This is one of those 'events' that bit me in the arse once or twice along the way and when I wrote up last week's truncation question this seemed like a good one as well, and an easy way to present it.
However, to go back to the root: in most scenarios where I come upon this, I look into an alternate solution entirely. Flushing a table once an hour is usually because you're reloading off an external feed. Say, current stock ticker prices. There are alternate ways to store this data without stepping on yourself. One of my preferred methods is to use a version key on the new data while allowing the old to continue to exist, and a settings table to tell all views and queries which data is most current.
So, you'll import new data into the same structure, assign it a key, and when that's complete you'll update the record in the settings table that indicates to all queries (via WHERE or ON clause) which set of data to use. At the final part of the import you'll clear any data not in the last two versions (or store it, or whatever), so you don't end up with a monster archive table in your active areas.
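A rough sketch of that pattern, with hypothetical object names (the real settings table, key, and retention rule depend on your feed):

DECLARE @NewVersion int;

-- 1. Load the new feed alongside the existing data under a fresh version key.
SELECT @NewVersion = ISNULL(MAX(VersionKey), 0) + 1 FROM dbo.TickerPrices;

INSERT INTO dbo.TickerPrices (VersionKey, Symbol, Price, LoadedAt)
SELECT @NewVersion, Symbol, Price, SYSDATETIME()
FROM staging.TickerFeed;

-- 2. Flip the pointer; every view and query filters on this settings row.
UPDATE dbo.ImportSettings
SET CurrentVersionKey = @NewVersion
WHERE SettingName = 'TickerPrices';

-- 3. Keep only the last two versions so the active table stays lean.
DELETE FROM dbo.TickerPrices
WHERE VersionKey < @NewVersion - 1;

A report would then resolve the current version through the settings row, e.g. SELECT p.* FROM dbo.TickerPrices p JOIN dbo.ImportSettings s ON s.SettingName = 'TickerPrices' AND p.VersionKey = s.CurrentVersionKey;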
However, you don't always have time to completely rebuild an existing import process when you're looking for 'quick hits' to help optimize the system. Swapping a DELETE FROM with no WHERE clause for a TRUNCATE TABLE statement seems an easy and effective way to reduce a good chunk of logfile load. I find it's good to know the pitfalls of easy fixes. 😉
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
February 13, 2012 at 6:06 am
I like the question.
I don't like the conflicting documentation.
TRUNCATE TABLE always locks the table and page but not each row.
http://msdn.microsoft.com/en-us/library/ms177570.aspx
The same article also mentions that this is a DDL command, so I guess the Sch-M lock is inferred, as with most DDL commands.
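If you want to see the locks for yourself, a quick sketch (the table name is hypothetical; run it in a throwaway database):

BEGIN TRANSACTION;
TRUNCATE TABLE dbo.DemoTable;
-- Lists the locks this session currently holds; expect a schema modification (Sch-M) lock on the table.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
-- TRUNCATE logs its page deallocations, so the rollback brings the data back.
ROLLBACK TRANSACTION;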
February 13, 2012 at 6:28 am
Hugo Kornelis (2/13/2012)
I didn't check it, but I fully expect an ALTER VIEW statement to require an exclusive schema lock.
This is true, but it's not really a problem - the ALTER VIEW statement will eventually get the lock it needs, complete very quickly, and then release that lock. If you don't mind how long the ALTER VIEW blocks for before succeeding, this can work well (though a deadlock might occur).
For the situation depicted by Craig, the best solution would probably be to use one of the SNAPSHOT isolation levels.
RCSI doesn't help here since it provides versioning at the statement level. An RCSI query that starts after the modification starts will block waiting to acquire Sch-S. Full snapshot isolation doesn't work either: it will also block waiting to acquire Sch-S, and if the DDL change succeeds, the query will then fail with error 3961: "Snapshot isolation transaction failed in database {name} because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation."
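For anyone who wants to experiment with the isolation levels being discussed, a minimal sketch of enabling them (database and table names are hypothetical). As noted above, neither option versions metadata, so the DDL-swap scenario can still fail with error 3961:

ALTER DATABASE SalesDemo SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDemo SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A reporting session can then opt in to full snapshot isolation:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.SomeReportTable;  -- reads row versions from tempdb rather than blocking
COMMIT TRANSACTION;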
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
February 13, 2012 at 6:34 am
cengland0 (2/13/2012)
What other ways can you change the entire contents of a table and still have the reports available without dirty data?
Aside from the VIEW suggestion, similar alternatives include using a SYNONYM, using ALTER TABLE SWITCH (which does not require the table to be explicitly partitioned), a local partitioned view (where only one of the underlying tables is TRUNCATEd) or sp_rename. As Craig mentioned, yet another alternative is to use explicit keys to perform your own 'versioning'.
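As a rough illustration of the SYNONYM idea (object names are hypothetical): reports reference a synonym rather than the table itself, and the import swaps the synonym once the new load is complete.

-- Reports query dbo.CurrentPrices, which is a synonym, not a base table.
CREATE SYNONYM dbo.CurrentPrices FOR dbo.Prices_A;

-- After loading the refreshed data into dbo.Prices_B, swap the synonym in one short transaction.
-- The swap is metadata-only, so any blocking window is very brief.
BEGIN TRANSACTION;
DROP SYNONYM dbo.CurrentPrices;
CREATE SYNONYM dbo.CurrentPrices FOR dbo.Prices_B;
COMMIT TRANSACTION;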
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
February 13, 2012 at 6:38 am
SQL Kiwi (2/13/2012)
Hugo Kornelis (2/13/2012)
I didn't check it, but I fully expect an ALTER VIEW statement to require an exclusive schema lock.
This is true, but it's not really a problem - the ALTER VIEW statement will eventually get the lock it needs, complete very quickly, and then release that lock. If you don't mind how long the ALTER VIEW blocks for before succeeding, this can work well (though a deadlock might occur).
Okay, wait a second. If you have someone running a large query on your view, then you submit the ALTER VIEW statement, it will wait until the large query is over before it runs, right? So, what about the queries that come in after the ALTER VIEW? Wouldn't those be queued behind the ALTER VIEW, which hasn't started yet because it's still waiting for the SELECT statement to complete? So, even though the ALTER VIEW itself is quick, due to the way the queue works, you could still have several queries waiting to execute.
February 13, 2012 at 6:45 am
SQL Kiwi (2/13/2012)
cengland0 (2/13/2012)
What other ways can you change the entire contents of a table and still have the reports available without dirty data?
Aside from the VIEW suggestion, similar alternatives include using a SYNONYM, using ALTER TABLE SWITCH (which does not require the table to be explicitly partitioned), a local partitioned view (where only one of the underlying tables is TRUNCATEd) or sp_rename. As Craig mentioned, yet another alternative is to use explicit keys to perform your own 'versioning'.
I haven't used the SWITCH in an ALTER TABLE before but according to BOL: If reassigning one partition's data to form a single table, the target table must already be created and it must be empty.
It seems you would still have the same problems.
Regarding SYNONYM, that was an excellent suggestion. Much better than creating a view. However, doesn't that still put a lock on the table during the process and also wait for all previously queued SQL commands to finish before executing?
February 13, 2012 at 7:03 am
Really good question! thanks...
February 13, 2012 at 7:05 am
Really interesting question. Learned something today. Thanks.
February 13, 2012 at 7:20 am
cengland0 (2/13/2012)
I haven't used the SWITCH in an ALTER TABLE before but according to BOL: If reassigning one partition's data to form a single table, the target table must already be created and it must be empty. It seems you would still have the same problems.
The idea would be to load the new data into a separate table, then when everything is ready to flip over, truncate the existing table and SWITCH the new data in as one step. Most of the methods require obtaining Sch-M at some stage; the point is to make the changeover almost instantaneous when everything is ready to go. I'll respond to your question about the locks separately.
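To make that concrete, a rough sketch under assumed names (both tables need identical structure and must be on the same filegroup; dbo.Prices is the live table reports read from, dbo.Prices_Staging holds the freshly loaded feed):

BEGIN TRANSACTION;
TRUNCATE TABLE dbo.Prices;                            -- SWITCH requires the target to be empty
ALTER TABLE dbo.Prices_Staging SWITCH TO dbo.Prices;  -- metadata-only move, effectively instantaneous
COMMIT TRANSACTION;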
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
February 13, 2012 at 7:26 am
cengland0 (2/13/2012)
Okay, wait a second. If you have someone running a large query on your view, then you submit the ALTER VIEW statement, it will wait until the large query is over before it runs, right?
Right. The ALTER VIEW (or most of the alternative strategies) have to acquire Sch-M sometime, the point is to make the time the Sch-M is held as short as possible.
So, what about the queries that came after the ALTER VIEW? Wouldn't those be in queue waiting for the ALTER VIEW to complete but the ALTER VIEW hasn't started yet because it's still waiting for the select statement to complete. So, even though it's quick, due to the way the queue works, you could still have several queries waiting to execute.
Locking used to work a bit like this (a strict FIFO model) back in SQL Server 2000. Later versions have a much more relaxed form of FIFO where this sort of blocking does not generally occur. New queries that are compatible with the locks actually *held* (not just those requested) will get to run without being blocked behind the process waiting for Sch-M. The process needing Sch-M will get in eventually, and so long as this lock is not held for long, the users probably wouldn't even notice.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
February 13, 2012 at 7:39 am
Great question and excellent scenario setup. I am quite surprised that, at the time I am posting, there are 417 answers and 16% of respondents still think TRUNCATE is not logged. :w00t:
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
February 13, 2012 at 8:32 am
Nice one
February 13, 2012 at 9:00 am
Nice straightforward question with an excellent explanation. Thanks for submitting.
http://brittcluff.blogspot.com/
February 13, 2012 at 9:06 am
great question and discussion today - tks