February 12, 2009 at 3:17 pm
I'm working with a vendor-supplied application, and I'm anticipating some foot-dragging when I bring this up.
February 13, 2009 at 2:45 am
It will result in more usage of TempDB. Make sure that it's capable of handling the additional load. Test on a non-production system first.
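For reference, a minimal sketch of enabling the option and then watching version store usage in TempDB (YourDb is a placeholder name; enabling it requires no other active connections to the database):

-- Flip the database to read committed snapshot
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;

-- Then monitor how much of TempDB the version store is consuming
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;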
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 13, 2009 at 7:06 am
It does affect read performance pretty heavily. As Gail said, test this first before pushing it into Production. And when you do, put as full a load on your test servers as you possibly can to make sure you catch all the issues.
February 13, 2009 at 7:48 am
Our database attaches to a NetApp device, so we should have ample spindles. I will deploy to test first, most definitely.
Has anyone experienced or vetted any data quality issues after enabling this?
February 13, 2009 at 8:07 am
Your biggest worry will be when two people try to change the same record & same field to two different things.
Say I have an auto insurance policy set to "pending acceptance" status. Mary Sue wants to change it to "rejected" status and Billy Bob wants to change it to "active" status. Because the updates don't process until after ALL the changes are in, whoever did the last update will "win" and the record will end up with whichever value was written last. Unfortunately, no one will see this until after the record has finally been changed.
Of course, with Read Committed Snapshot off, you see the changes immediately because the record gets locked during the first update. The second update doesn't happen until the lock is released, and the person updating second can see that the status was already changed and inquire why.
It's mostly a matter of training more than it is data quality.
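If you do want the application to defend against that scenario, one simple pattern (sketched here against a hypothetical Policy table matching the example above) is to make the update conditional on the status the user originally saw, so a stale update touches zero rows:

DECLARE @PolicyID int = 42;  -- hypothetical key value for illustration

UPDATE dbo.Policy
SET Status = 'active'
WHERE PolicyID = @PolicyID
  AND Status = 'pending acceptance';  -- only succeeds if nobody changed it first

IF @@ROWCOUNT = 0
    RAISERROR('Policy status was changed by someone else; please refresh and retry.', 16, 1);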
February 13, 2009 at 8:49 am
Brandie Tarvin (2/13/2009)
Say I have an auto insurance policy set to "pending acceptance" status. Mary Sue wants to change it to "rejected" status and Billy Bob wants to change it to "active" status. Because the updates don't process until after ALL the changes are in, whoever did the last update will "win" and the record will end up with whichever value was written last. Unfortunately, no one will see this until after the record has finally been changed.
In cases like that, the second update will fail because it's referencing a row version that's too old. The error is an update conflict error. It's not something that can ever be thrown in the pessimistic isolation levels.
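For anyone who wants to reproduce it, a rough two-session sketch (the conflict, Msg 3960, is raised under the full SNAPSHOT isolation level, so the database needs ALLOW_SNAPSHOT_ISOLATION ON; the Policy table is the hypothetical one from the post above):

-- Session 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT Status FROM dbo.Policy WHERE PolicyID = 42;  -- sees 'pending acceptance'

-- Session 2, meanwhile, commits its own change to the same row
UPDATE dbo.Policy SET Status = 'rejected' WHERE PolicyID = 42;

-- Back in session 1: this now fails with Msg 3960,
-- "Snapshot isolation transaction aborted due to update conflict..."
UPDATE dbo.Policy SET Status = 'active' WHERE PolicyID = 42;
ROLLBACK;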
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 13, 2009 at 11:08 am
Which means what I originally thought is true.
Data Quality is not (or should not be) an issue.
February 13, 2009 at 11:15 am
Jeff - I am wondering why you are considering using Read Committed Snapshot. Any chance you could share your thoughts behind this?
Thanks in advance.
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
February 13, 2009 at 12:46 pm
I've been an Oracle DBA for a number of years, and there row versioning is the default. I've been working with SQL Server databases in addition to Oracle off and on for a few years now, and in my experience concurrency issues are pretty common. When I read about the different locking models, it seemed like a no-brainer. While I understand there are often design issues at the root of SQL Server apps that don't scale, Read Committed Snapshot Isolation seems like a slam dunk.
So I'm looking for arguments for why I wouldn't want to push this in case I need to think more deeply about it.
I'm currently supporting a DB with a good deal of blocking, particularly at month-end. I've run Profiler with the blocked process report, with the threshold set to 5 seconds, and during month-end (a week) things are in the tank.
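(For anyone following along, that 5-second threshold is the 'blocked process threshold' server option, which is what drives the Blocked process report event in Profiler:)

-- 'blocked process threshold' is an advanced server option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Emit a Blocked process report event for anything blocked longer than 5 seconds
EXEC sp_configure 'blocked process threshold', 5;
RECONFIGURE;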
February 13, 2009 at 1:41 pm
I know that Read Committed Snapshot can alleviate some blocking, and I have used it in certain circumstances in our environment, mainly because our database serves a mixed OLTP / data warehouse workload. So clearly I am not saying not to use it, but I do recommend that you consider the cost and test, as you can see both performance enhancement and degradation. If you don't have solid baselines to look at, I would start by gathering some basic information for review before and after.
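If nothing fancier is in place, one cheap baseline is the server's accumulated lock waits (totals since the last restart), captured once before and once after the change:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'   -- lock waits only
ORDER BY wait_time_ms DESC;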
Additionally, there are many times when we can see application improvement by looking further at the code and, of course, at indexing strategies, to ensure that the work being performed executes quickly, thereby alleviating contention.
Remember that MS doesn't have this as the default setting like Oracle does, and that is probably for a reason. 🙂 Don't know if you could get them to commit to that, but....
A couple of links for consideration as well -
Linchi Shea's blog on some performance tests he performed
SQL Server 2005 Row Versioning-Based Transaction Isolation - MS Published article
Just my thoughts...
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
February 14, 2009 at 4:00 am
jeff_wiegand (2/13/2009)
While I understand there are design issues often at the root of SQL apps when they don't scale, Read Committed Snapshot Isolation seems like a slam dunk.
It's not.
Many, many huge SQL apps work perfectly well under the pessimistic isolation levels. The time to use snapshot is when you have massive blocking problems that you cannot resolve with index tweaks and/or query tuning.
Before you consider changing the default isolation level, make sure that all your queries are as optimal as they can be and that your indexes support the workload. If there's still unacceptable blocking, then switch the isolation level.
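And before flipping anything, it's worth confirming what each database is currently set to:

-- Current row-versioning settings per database
SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM sys.databases;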
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability