This is a wonderful article that I have wished to see for a long time. I'd like to see the coming series cover the following issues:
error handling at severity levels 17, 18, and 19, especially handling constraint violations such as duplicate records;
transaction handling with xp_cmdshell calls such as bcp...
There are too many issues to consider in SQL Server 2000; it seems SQL Server 2005 has a good exception handler with TRY...CATCH.
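For what it's worth, here is a minimal sketch of the SQL Server 2005 TRY...CATCH syntax mentioned above, applied to the duplicate-record case; the Customers table and its columns are hypothetical, not from the article.

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    -- May raise error 2627 (unique/primary key constraint violation)
    INSERT INTO Customers (CustomerID, CustomerName)
    VALUES (1, 'Acme');
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back whatever the batch had open, then report the error
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```

In SQL Server 2000 the same situation has to be handled by checking @@ERROR after each statement, which is exactly why TRY...CATCH is such an improvement.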
Locking is another weakness of SQL Server; no question Oracle does much better in isolation level control, locking, and error handling.
Thank you. The next article (scheduled for publication on Nov 24th) will deal pretty extensively with error handling. The third (submitted, but with no publish date yet) deals with locks and isolation levels.
I have to disagree with you about Oracle's superiority, though. Having worked extensively with Oracle, I would say that there is very little advantage to its locking and transaction isolation support. It's true that in Oracle readers don't block writers and vice versa, but that locking strategy comes at a cost. Both SQL Server and DB2 use very similar locking strategies to great success. The key is to know the DBMS you use, understand its strengths and weaknesses, and know how to get the most out of it.
Oracle's error handling is more consistent and easier to use, but from what I've seen, SQL Server 2005 will change that.
When I mention Oracle's advantage, what I actually mean is that a SELECT statement in T-SQL puts a lock on the row/page/table and can potentially cause more deadlocks than in Oracle, because a PL/SQL SELECT does not take locks unless you use "SELECT ... FOR UPDATE".
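To illustrate the contrast being described (the accounts table here is hypothetical, just for the sketch):

```sql
-- Oracle: an ordinary SELECT takes no row locks; readers never block writers.
SELECT balance FROM accounts WHERE account_id = 42;

-- Oracle: row locks are taken only when explicitly requested.
SELECT balance FROM accounts WHERE account_id = 42 FOR UPDATE;

-- SQL Server (default READ COMMITTED): a plain SELECT acquires shared
-- locks while reading, which can block concurrent writers. The closest
-- equivalent of FOR UPDATE is an explicit locking hint:
SELECT balance FROM accounts WITH (UPDLOCK) WHERE account_id = 42;
```

So in Oracle the locking is opt-in for readers, while in SQL Server it is the default behavior of every read.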
Anyway, I look forward to seeing your next article. It's really inspiring, and I've never before seen one that covers the whole subject.
Thank you very much.
By the way, when you mentioned that the "locking strategy comes at a cost", did you mean Oracle's rollback segments? I'd like to see your comments on the tradeoff between Oracle's reduced locking and its saving of before-images in the rollback segment. I think Oracle still leads some of the top TPC-C records.
That is exactly what I meant, but I did not mean to imply that the cost was entirely paid in performance or that it couldn't be mitigated by other factors. The cost of using rollback segments includes the additional disk space and memory used, which impact performance. The overall performance hit tends to be minimal because of the reduced blocking, so that part tends to be a wash. Yes, Oracle does well on the TPC benchmarks (as do DB2 and SQL Server), but the biggest "cost" I see with the Oracle approach is management. Our Oracle DBAs spent considerable time managing and monitoring the rollback segments. Our SQL Server DBAs spent much less time dealing with blocking problems.
Nice article. Hope to see next in series soon.
I know this is probably a bit off topic, but I noticed that you didn't mention COM+ transactions. I think there are quite a lot of developers out there using COM+, and it would really be helpful if you would include a part on how transactions work with COM+, especially the tricky parts such as:
1. Behaviour when a COM+ transactional object calls a non-COM+ object, and vice versa.
2. How to execute statements from a COM+ transactional object without enlisting them in the COM+ transaction (for example, error/audit logging; it just wouldn't do to have that rolled back, right?).
I'm not a programmer and don't pretend to be one so I won't be the one to effectively address your COM+ transaction questions. I could address the issue at a very general level, but I am not familiar with all the details of how COM+ or .Net handle transactions.
Generally speaking, though, I think it is best to control transactions at the DBMS rather than in the application; that's one of the major functions of the DBMS, and ultimately that is where the rubber meets the road, so to speak, when it comes to transactions. In my experience, controlling transactions at the application layer results in more network traffic and less reliable results.
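As a concrete sketch of what "controlling transactions at the DBMS" looks like in the SQL Server 2000 era being discussed (no TRY...CATCH yet, so @@ERROR is checked after each statement; the accounts table and amounts are hypothetical):

```sql
-- Transfer money between two accounts entirely inside one DBMS transaction,
-- rather than coordinating the steps from the application layer.
BEGIN TRANSACTION;

UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
IF @@ERROR <> 0 GOTO ErrHandler;

UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
IF @@ERROR <> 0 GOTO ErrHandler;

COMMIT TRANSACTION;
RETURN;

ErrHandler:
    ROLLBACK TRANSACTION;
    RAISERROR ('Transfer failed; transaction rolled back.', 16, 1);
```

Everything either commits or rolls back in one round trip to the server, which is exactly the network-traffic and reliability point above.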