
Locking, Blocking and Deadlocking


I wish that I had a dollar for every time someone came to me, all excited and worried, telling me “We’ve got locking going on in this SQL Server instance”. My usual response is “Yep, we sure do. Isn’t it great?!” In this article, we’ll discuss locking in SQL Server, why it’s good, and what happens when it gets out of control.

Locking is a mechanism built into SQL Server to ensure transactional integrity and database consistency between concurrent transactions. It prevents transactions from reading data that other transactions have not yet committed, and it prevents multiple transactions from modifying the same data at the same time. A transaction places locks on the resources it depends upon, so that other transactions cannot change those resources in a way that would be incompatible with what the transaction is doing. The transaction is responsible for releasing each lock when it no longer needs it.

Since locking is closely coupled with transactions, let’s take a quick look at what transactions do. Think of a transaction as a logical unit of work. This can be as simple as performing an operation on a single row in a single table – or it could be performing an operation on multiple rows, possibly in multiple tables. Perhaps the transaction is inserting a new employee into the employee table. Or inserting a complex sales order, with a summary and line item details, and there are triggers that fire off manufacturing orders for various parts.
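
To make this concrete, here is a minimal sketch of an explicit transaction grouping two statements into one unit of work (the table and column names are hypothetical): both inserts succeed together or fail together.

-- A hypothetical two-statement unit of work: both inserts are
-- committed together as a single transaction.
BEGIN TRANSACTION;
INSERT INTO dbo.Employee (EmployeeName) VALUES ('Jane Doe');
INSERT INTO dbo.EmployeeAudit (AuditText) VALUES ('Employee added');
COMMIT TRANSACTION;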

Each logical unit of work must exhibit four properties, collectively referred to as the ACID properties. They are:

  • Atomicity – the transaction must be atomic: either all of its actions are performed, or none of them. It cannot perform some of its actions but not others.
  • Consistency – when the transaction is completed, all of the data must be in a consistent state. All rules must be applied, and all internal structures must be correct at the end of the transaction.
  • Isolation – modifications made by concurrent transactions must be isolated from all other concurrent transactions. The data seen by a transaction will be either the data seen before other transactions have made modifications, or after the other transactions have completed, but not any intermediate state. (If a second transaction is modifying two rows, the first transaction cannot see a change to the first row and the original state of the second row.)
  • Durability – when completed, the changes made by a transaction are permanently stored in the system, and they will persist even in the event of a system failure. At a minimum, this requires that the changes made by the transaction have been written out to disk in the transaction log file.

Let’s take a quick look at Atomicity in action. The following example inserts three rows into a table:

DECLARE @TransactionTest TABLE (
        ID INT PRIMARY KEY,
        SomeCol VARCHAR(20)
);
INSERT INTO @TransactionTest (ID, SomeCol) VALUES (0,'Row1');
INSERT INTO @TransactionTest (ID, SomeCol) VALUES (1,'Row2');
INSERT INTO @TransactionTest (ID, SomeCol) VALUES (1,'Row3');
SELECT * FROM @TransactionTest;

In this example, each row is inserted by a separate statement, so each is a separate transaction. Since no transaction was explicitly started, each statement runs as its own autocommit transaction (the SSMS default, where implicit transactions are turned off). The third insert produces a primary key violation (a duplicate key would leave the data in an inconsistent state), so that statement fails. When the table is queried, it can be seen that the first two rows have been inserted.

However, if this data were to be inserted as one statement:

DECLARE @TransactionTest TABLE (
        ID INT PRIMARY KEY,
        SomeCol VARCHAR(20)
);
INSERT INTO @TransactionTest (ID, SomeCol)
VALUES
        (0,'Row1'),
        (1,'Row2'),
        (1,'Row3');
SELECT * FROM @TransactionTest;

In this example, all three rows are inserted by a single statement. The primary key violation still occurs, so the statement fails. When the table is queried, it can be seen that none of the rows have been inserted – the transaction failed, and atomicity requires that none of its actions be performed, so the first two rows have been undone. In SQL Server, this is called a rollback – the actions that the transaction has performed are rolled back to their original state.
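
The same all-or-nothing behavior can be had with separate statements by wrapping them in an explicit transaction. Here is a sketch using SET XACT_ABORT ON, which tells SQL Server to roll back the entire transaction when a statement raises a run-time error (a temporary table is used instead of a table variable, because changes to table variables are not rolled back):

CREATE TABLE #TransactionTest (
        ID INT PRIMARY KEY,
        SomeCol VARCHAR(20)
);

-- With XACT_ABORT ON, the primary key violation on the third insert
-- rolls back the entire transaction and aborts the batch.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
INSERT INTO #TransactionTest (ID, SomeCol) VALUES (0,'Row1');
INSERT INTO #TransactionTest (ID, SomeCol) VALUES (1,'Row2');
INSERT INTO #TransactionTest (ID, SomeCol) VALUES (1,'Row3');
COMMIT TRANSACTION;
GO
SELECT * FROM #TransactionTest;  -- returns no rows
DROP TABLE #TransactionTest;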

As a transaction runs, it will place various locks on various resources in order to accomplish its task. Unfortunately, it's not as simple as placing a lock on a single row, making the change to that row, and removing the lock. If multiple locks are being placed on a page, a lock at the page level may be issued instead. Locks may be placed on multiple rows, either on the same page or across multiple pages. Each lock is necessary to protect a resource; however, each lock requires memory. If too many locks are placed, a strain can be put on the memory in the system.

To alleviate this, the system has a Lock Escalation mechanism, where locks can be placed on a higher level resource to protect everything underneath it. When lock escalation kicks in, the rows or pages that are being locked will be escalated to a table lock (if this is a partitioned table, and if configured to do so, this can go to the partition level before the table level). Note that rows or pages are escalated to table level locks – row locks are never escalated to page level locks.

When the higher level lock is placed, the lower level locks are then released to ease the memory pressure.
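
If you want to watch this happen, one approach (a sketch using the sys.dm_tran_locks DMV) is to count the locks each session is holding; after escalation, the row and page locks disappear and a single object-level lock remains. The LOCK_ESCALATION table option controls whether escalation on a partitioned table can stop at the partition level:

-- Count the locks held by each session, by resource type.
SELECT  request_session_id,
        resource_type,
        COUNT(*) AS lock_count
FROM    sys.dm_tran_locks
GROUP BY request_session_id, resource_type
ORDER BY request_session_id, resource_type;

-- AUTO allows escalation to the partition level on a partitioned
-- table (dbo.SomeTable is a hypothetical table name).
ALTER TABLE dbo.SomeTable SET (LOCK_ESCALATION = AUTO);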

As you can see, locking in SQL Server is a normal event, and it is essential to ensure that the data in the database is consistent.

There are various lock modes that a transaction can use to protect itself. Some of these lock modes are compatible with each other, and others are incompatible. When one transaction holds locks on resources, and another transaction tries to place incompatible locks on the same resources, the second transaction has to wait until the first transaction completes before it can proceed – the first transaction is blocking the second.

Considering all that locks do, it is easy to understand that blocking itself is a natural event within SQL Server. If the transactions are short, the blocking may never even be noticed. However, if there is a long-running transaction, the blocking can become very noticeable. If the operations are on a busy table, the application itself may appear unresponsive while it waits for queries to return – queries that are being blocked by the long-running transaction.
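
To see which sessions are blocked and which sessions are blocking them, one common approach is to query the sys.dm_exec_requests DMV (a sketch):

-- Requests that are currently blocked, and who is blocking them.
SELECT  session_id,
        blocking_session_id,
        wait_type,
        wait_time,
        wait_resource
FROM    sys.dm_exec_requests
WHERE   blocking_session_id <> 0;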

There may also be blocking issues if there is a hot spot in the database. One well-known hot spot is the set of allocation pages within the tempdb database. If queries use a lot of temporary tables or table variables, or spill out of memory into tempdb, then the pages that track the allocation of these temporary objects become a hot spot of activity: as the objects are created, short-term latches (a lightweight form of lock) are placed on the allocation pages. With enough of this going on, other transactions start being blocked from creating their temporary objects, slowing transactions down.
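
One way to check for this contention is to look for latch waits on pages in tempdb (database ID 2). A sketch using the sys.dm_os_waiting_tasks DMV:

-- Tasks waiting on page latches in tempdb (database ID 2); frequent
-- waits on the allocation pages suggest a tempdb hot spot.
SELECT  session_id,
        wait_type,
        wait_duration_ms,
        resource_description
FROM    sys.dm_os_waiting_tasks
WHERE   wait_type LIKE 'PAGELATCH%'
  AND   resource_description LIKE '2:%';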

Faster hardware doesn’t solve the blocking problem – it just shortens the time that the locks are being held. Frequently, the queries involved will need to be optimized (recoded to support optimal retrieval, possibly adding / modifying indexes) for optimal performance. If blocking is caused by a hot spot, a database redesign may even be in order.

The worst kind of blocking is a deadlock. A deadlock occurs when one transaction has a lock on one resource, another transaction has a lock on a second resource, and each transaction then attempts to acquire a lock on the resource that the other transaction has locked. (It is also possible for a chain of transactions to be involved, where transaction 1 needs the resource locked by transaction 2, which needs the resource locked by transaction 3, which needs the resource locked by transaction 1.)

When this happens, both transactions will wait forever for the other to finish – they are both dead, waiting for locks that will never clear, since they are waiting on each other. SQL Server has a mechanism for detecting deadlocks, and it will select one of the transactions to be terminated and rolled back. The transaction chosen as the deadlock victim depends upon the deadlock priority of the transactions and the amount of work that must be rolled back. If the transactions are running at the same deadlock priority, the transaction with the least amount of work to roll back is chosen as the deadlock victim; if the deadlock priorities differ, the transaction with the lowest deadlock priority is chosen.
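
The deadlock priority is set per session. For example, a low-priority background job can volunteer to be the victim ahead of normal sessions:

-- This session will be chosen as the deadlock victim before any
-- session running at the default (NORMAL) deadlock priority.
SET DEADLOCK_PRIORITY LOW;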

Let’s engineer a deadlock and watch it in action. In SSMS, open two query windows. In the first window, run the following code to create a global temporary table and put two rows into it. The code then starts an explicit transaction and updates one of the rows:

-- Create a global temp table and put two rows into it
CREATE TABLE ##DeadlockTest (Col1 INT CONSTRAINT PK PRIMARY KEY);
INSERT INTO ##DeadlockTest ( Col1 )
VALUES  (5),
        (15);
GO
BEGIN TRANSACTION;
UPDATE  ##DeadlockTest
SET     Col1 = Col1 + 10000
WHERE   Col1 = 5;

Now go to the second query window, and enter and run the following code:

BEGIN TRANSACTION;
UPDATE  ##DeadlockTest
SET     Col1 = Col1 + 10000
WHERE   Col1 = 15;
UPDATE  ##DeadlockTest
SET     Col1 = Col1 + 10000
WHERE   Col1 = 5;

Notice that the query does not complete – the second update statement is being blocked by the update statement that was run in the first query window. Now return to the first window, and enter and run just the following code:

UPDATE  ##DeadlockTest
SET     Col1 = Col1 + 10000
WHERE   Col1 = 15;

At this point, you have created a deadlock. The first query window has a lock on the row where Col1 = 5, and the second query window has a lock on the row where Col1 = 15. The second query window is attempting to place a lock on the row where Col1 = 5, and the first query window is attempting to place a lock on the row where Col1 = 15. If nothing were to intervene, both of these transactions would sit here forever, each waiting for the resource it needs to be released. Fortunately, SQL Server detects this deadlock condition and selects one of the transactions to be the deadlock victim, terminating that statement and rolling back the changes that it has made.
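
The session chosen as the victim receives error 1205. Well-behaved applications catch this error and retry the transaction; a minimal sketch of that pattern (THROW requires SQL Server 2012 or later):

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE  ##DeadlockTest
    SET     Col1 = Col1 + 10000
    WHERE   Col1 = 5;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    IF ERROR_NUMBER() = 1205
        PRINT 'Chosen as the deadlock victim; retry the transaction here.';
    ELSE
        THROW;  -- re-raise anything that is not a deadlock
END CATCH;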

We still need to clean up, so in the query window that was not selected as the victim, run:

ROLLBACK TRANSACTION;

And in the first query window, run the following statement to drop the table:

DROP TABLE ##DeadlockTest;

Summary

In summary, we have a natural progression of events going on during a transaction. The transaction places locks on the various resources it needs in order to protect transactional integrity and database consistency. These locks will block other transactions from acquiring locks on the same resources until the transaction holding the lock releases it (by either completing or being rolled back). These actions (locking and blocking) are completely normal events within SQL Server, and are to be expected.

However, if you have a long running transaction, this can create long-term blocking in the database, preventing other work from going on. And if two different transactions enter into a deadlock situation, SQL Server will terminate one of the transactions so that the system is not locked up. Neither of these situations is desirable, and changes will be necessary in order to prevent these from happening again.

And now you can see why it’s so great that there is locking going on in SQL Server.

