SQLServerCentral Editorial

Guest Editorial: Short Transactions


Transactions should be kept as short as possible. I read this everywhere and it is almost always presented as an incontrovertible fact, with no need of further justification. But what is it really saying? Of course, transactions should include only those statements that need to succeed or fail together, as a unit, but they certainly shouldn't contain any fewer statements than that. If it's a business requirement that the address field is updated when the name field changes, then those two actions should succeed or fail together. They shouldn't be separated out into two shorter transactions, unless the business cares nothing about the consequences of one updating without the other.
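
As a minimal sketch of that kind of atomic unit (the dbo.Customers table, column names and values here are purely hypothetical, for illustration only):

BEGIN TRY
    BEGIN TRANSACTION;

    -- Both changes must succeed or fail as a unit
    UPDATE dbo.Customers
    SET    CustomerName = N'New name'
    WHERE  CustomerID = 1;

    UPDATE dbo.Customers
    SET    Address = N'New address'
    WHERE  CustomerID = 1;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- If either update fails, undo both and report the error
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    RAISERROR(N'Customer update failed; both changes were rolled back.', 16, 1);
END CATCH;

Splitting those two updates into separate auto-committed statements would be "shorter", but it would also mean a failure halfway through leaves the name changed and the address not.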

The idea of short transactions is pretty much "built in" to SQL Server. By default, every single statement is its own transaction and is "auto-committed" unless you explicitly wrap statements in begin transaction and commit commands. It is ingrained in many developers to avoid "long" transactions because they know these hold costly locks for longer and can decrease concurrency. They may even go so far as to allow "dirty reads" in order to further reduce the need to acquire locks.
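
For example (again using a hypothetical dbo.Customers table), the NOLOCK table hint and the READ UNCOMMITTED isolation level are the usual ways such dirty reads are allowed:

-- Without an explicit transaction, each statement is auto-committed on its own
UPDATE dbo.Customers SET CustomerName = N'New name' WHERE CustomerID = 1;
UPDATE dbo.Customers SET Address = N'New address' WHERE CustomerID = 1;

-- A dirty read via a table hint: rows are returned even if they are
-- exclusively locked by an in-flight, uncommitted transaction
SELECT CustomerName, Address
FROM   dbo.Customers WITH (NOLOCK)
WHERE  CustomerID = 1;

-- The same effect set at session level
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT CustomerName, Address
FROM   dbo.Customers
WHERE  CustomerID = 1;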

I understand why this "keep 'em short" mindset exists. After all, we have all seen the unwieldy stored procedures that start with "begin transaction", regardless of whether the statements within really do form an atomic unit of work. However, I worry that it points to a more deep-seated lack of confidence in the scalability of SQL Server's locking and concurrency models. It seems to suggest that unless we give it every bit of help we can - even to the point of endangering the integrity of our own data - it won't scale.

In Oracle, berated and bemoaned as it is, this obsession does not exist. A transaction implicitly starts when you issue the first SQL statement and doesn't end until you explicitly issue a commit, whether that comes two statements later or two hundred. It does not rely on a 'shared locking' model to enforce basic "read committed" isolation, so transactions really can be as long as they need to be. Instead, Oracle implements a multi-versioning model that guarantees a consistent view of your data as of the point your transaction started, using rollback (or undo) segments to reconstruct the data as it appeared at that moment. It is not possible to do a dirty read in Oracle, even if you wanted to.

SQL Server has ambitions to match this ability with its new snapshot isolation level. When enabled, SQL Server will maintain in tempdb a time-stamped "version store" of all data changes. Instead of blocking when it encounters an exclusive lock, a reader transaction will simply "read around" the lock, retrieving the version of the row consistent with the time the transaction started.
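
In outline, enabling and using it looks something like this (the database and table names are placeholders):

-- Enable snapshot isolation at the database level
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A reader opts in per session and sees a consistent snapshot of the data,
-- reading row versions from tempdb instead of blocking on writers' locks
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT CustomerName, Address FROM dbo.Customers WHERE CustomerID = 1;
COMMIT TRANSACTION;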

Of course, there are issues to deal with here, both in terms of the extra burden snapshot isolation places on tempdb and the fact that switching to this mode may cause problems for existing applications that weren't designed with it in mind. However, as these issues are planned for and addressed, I envisage snapshot isolation becoming the norm.

When this happens, hopefully this "keep 'em short" obsession will fade into history. In the meantime, however, old habits die hard, and it is still a dangerous message in the wrong hands. I'd much rather it be expressed as "make transactions no longer than they need to be."

Cheers,

Tony Davis.
