July 7, 2015 at 8:56 am
Hi,
We have a high-volume database with thousands of users and thousands of procs. Our application enforces a 20-second timeout on all connections.
We can't adjust the 20 seconds - this is a business rule.
It sometimes happens that a proc does not complete within 20 seconds and then times out halfway through. This causes data inconsistency where 50% of the code was saved to the DB and 50% was not - seeing that a stored proc is not transactional and therefore does not roll back the code.
We can't put the code in a TRANSACTION in order to roll back when a time out occurs, because this causes exclusive locks on the tables.
So I guess my question is:
Is it possible to undo/rollback all the code in a proc when a timeout occurs - without using a TRANSACTION?
And if a TRANSACTION is the only way - how do I avoid the exclusive lock and blocks?
Thanks,
July 7, 2015 at 1:43 pm
Casper101 (7/7/2015)
Hi, We have a high-volume database with thousands of users and thousands of procs. Our application enforces a 20-second timeout on all connections.
We can't adjust the 20 seconds - this is a business rule.
It sometimes happens that a proc does not complete within 20 seconds and then times out halfway through. This causes data inconsistency where 50% of the code was saved to the DB and 50% was not - seeing that a stored proc is not transactional and therefore does not roll back the code.
We can't put the code in a TRANSACTION in order to roll back when a time out occurs, because this causes exclusive locks on the tables.
So I guess my question is:
Is it possible to undo/rollback all the code in a proc when a timeout occurs - without using a TRANSACTION?
And if a TRANSACTION is the only way - how do I avoid the exclusive lock and blocks?
Thanks,
I don't think there's a way to get a rollback to occur without a transaction, but I'm not the expert on that. However, it seems to me that the more important problem is being ignored - that of overall system capacity.

I'd first be looking at the "heavy hitters" among the procs that time out, and seeing what might be causing the run time to be so high. If the queries are as good as they can be, then overall system capacity may be the issue. What kind of hardware is the server running on? Are the disks local or SAN-based? If there's a SAN, what do the I/O statistics look like? What kind of waits are you seeing? Is there blocking going on?

Sometimes a common thread appears after answering these kinds of questions. The solution could be as simple as a decent RAM upgrade, or it might be more complex and require a combination of query tuning, additional RAM, table partitioning, or other performance improvement techniques. Maybe the server is just plain "out of gas", so to speak, and it's time for a bigger box.

As your question only addresses a small part of the problem, without providing any other context, the answer is "it depends". Care to elaborate?
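For the waits and blocking questions, something along these lines against the standard DMVs is a reasonable starting point - just a sketch, and you'll want to adjust the excluded wait types for your environment:

-- Top waits since the last restart (or since wait stats were last cleared)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

-- Anything currently blocked, and which session is blocking it
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;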
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
July 7, 2015 at 2:06 pm
This causes data inconsistency where 50% of the code was saved to the DB and 50% was not - seeing that a stored proc is not transactional and therefore does not roll back the code.
This is somewhat confusing.
For starters, CODE does not get saved to the DB, DATA does.
Secondly, SQL Server does not work that way.
If a single statement is updating, as an example, 100 rows and it takes longer than the 20 seconds to do the update, none of the rows would be updated. The whole statement would be rolled back when the timeout cancels it.
That being said, is this code doing a loop and updating one row at a time? Is it in a cursor? If that's the case, then there is likely a more efficient way to write this code.
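To illustrate the difference (dbo.Orders, OrderID and Status below are made-up names):

-- Row-by-row in a cursor: every UPDATE is its own autocommit transaction,
-- so a timeout partway through leaves some rows changed and some not.
DECLARE @OrderID int;
DECLARE order_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'Pending';
OPEN order_cur;
FETCH NEXT FROM order_cur INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cur INTO @OrderID;
END;
CLOSE order_cur;
DEALLOCATE order_cur;

-- Set-based: one statement, one implicit transaction.
-- It either updates every qualifying row or none of them.
UPDATE dbo.Orders
SET Status = 'Processed'
WHERE Status = 'Pending';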
Can you post the actual code?
What steps have you taken to tune the server?
What metrics have you captured about this server/database?
Where did the 20-second rule come from? This seems like an arbitrary number that was pulled out of a hat. 20 seconds is not a very long time to have to wait for a large, complex result set. Have you run one of the offending procs in SSMS to get the actual time it takes to run? The 20-second limit would look very silly if the proc completes successfully in 30 seconds.
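If not, wrapping the call like this in SSMS will show the real elapsed time and I/O (the proc name and parameter below are placeholders):

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

EXEC dbo.YourOffendingProc @SomeParameter = 123;  -- placeholder proc and parameter

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;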
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
July 7, 2015 at 3:22 pm
All SQL code is transactional. If you aren't explicitly declaring transactions, you are most likely using autocommit, where every statement is its own transaction. A rollback only rolls back the current transaction. (I'm ignoring nested transactions.) Under autocommit, the current transaction is the statement that is executing, so a timeout only rolls back that one statement; everything that completed before it has already committed. If you want to roll back multiple statements as a unit, you CANNOT use autocommit - you need an explicit transaction.
https://msdn.microsoft.com/en-us/library/ms174377.aspx
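For what it's worth, the usual pattern for making a proc all-or-nothing looks something like this - just a sketch, and the proc, table and column names are invented for the example. SET XACT_ABORT ON matters here: when the client timeout fires and the attention arrives, it makes sure the open transaction is rolled back rather than left hanging.

CREATE PROCEDURE dbo.SaveOrder   -- illustrative name
    @OrderID int
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- roll back the open transaction if the client times out

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Both statements succeed or both are rolled back (tables are placeholders)
        UPDATE dbo.OrderHeader
        SET Status = 'Saved'
        WHERE OrderID = @OrderID;

        INSERT INTO dbo.OrderAudit (OrderID, AuditDate)
        VALUES (@OrderID, GETDATE());

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;  -- re-raise so the caller still sees the error (SQL Server 2012+)
    END CATCH;
END;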
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
July 8, 2015 at 1:44 am
Casper101 (7/7/2015)
Is it possible to undo/rollback all the code in a proc when a timeout occurs - without using a TRANSACTION?
No. That's what transactions are there for.
And if a TRANSACTION is the only way - how do I avoid the exclusive lock and blocks?
You don't. Locks aren't bugs to be avoided. If you have efficient code, locks should be held for short times. If you have blocking problems, start by tuning your queries. If you still have blocking, consider read_committed_snapshot isolation level.
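For reference, read_committed_snapshot is a database-level setting (the database name below is a placeholder) - test it first, since it changes read behaviour and uses the version store in tempdb:

ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- rolls back other open transactions so the ALTER can complete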
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability