
Myths and Misconceptions about Transaction Isolation Levels


(Be sure to check out the FREE SQLpassion Performance Tuning Training Plan, where you get all the essential knowledge you need to know about performance tuning in SQL Server delivered week by week via email.)

In today’s blog posting I want to talk about myths and misconceptions about the various Transaction Isolation Levels that exist in SQL Server. I mainly want to cover the following topics:

  • Transaction Isolation Levels – What?
  • NOLOCK never blocks!?
  • Read Committed doesn’t hold locks!?
  • Key Range Locks are specific to Serializable!?

    So let’s get started by first laying out the foundation of Transaction Isolation Levels in SQL Server.

    Transaction Isolation Levels – What?

    Every time I’m on-site with a customer, troubleshooting crazy SQL Server problems (or performing SQL Server Health Checks), the root cause sometimes lies in the locking/blocking behavior of the database. When you have one bad query (maybe you are missing a very important index), the performance of your whole database can go down.

    Have you ever tried starting a new transaction (without ever committing it) that acquires an exclusive lock on the main table of your production database? Trust me: the performance and the throughput of your database will just die – right in front of your eyes! So my pro tip for you: don’t try it ;-)

    When we talk about locking and blocking in any relational database, we also have to talk about the various Transaction Isolation Levels that the DBMS supports – in our case SQL Server. SQL Server supports 2 different concurrency models: the “old” pessimistic one, and the “new” optimistic one that was introduced with SQL Server 2005 – almost 9 years ago…

    Today I want to concentrate on the “old” one. In the old pessimistic concurrency model, SQL Server supports 4 different Transaction Isolation Levels:

  • Read Uncommitted
  • Read Committed
  • Repeatable Read
  • Serializable

    I don’t want to go into the deep internals of every Transaction Isolation Level, but I want to give you a brief overview of what every Isolation Level does internally, because we need that information for the subsequent sections of this blog posting.

    When we look at SQL Server from a high-level view, you have transactions that read data (SELECT queries), and transactions that change data (INSERT, UPDATE, DELETE, MERGE). Every time you read data, SQL Server has to acquire Shared Locks (S). Every time you change data, SQL Server has to acquire Exclusive Locks (X). And both locks are incompatible with each other, which means readers block writers, and writers block readers.

    With the Transaction Isolation Level you can now control how long a reader holds its S locks. Writers will always acquire X locks, and you can’t influence them. By default SQL Server uses the Isolation Level Read Committed, which means SQL Server acquires a S lock when you read a record, and releases it again as soon as the record has been read. When you are reading row by row, you are acquiring and releasing the needed S locks row by row.
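
    If you want to try that out yourself, a minimal sketch could look like the following (the table name SalesOrders is just a placeholder – I’m assuming such a table exists in your database):

    -- Read Committed is the default, so the SET statement is optional
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    GO

    BEGIN TRANSACTION
        -- SQL Server acquires a S lock on each record while it is read,
        -- and releases it again as soon as the record has been read
        SELECT * FROM SalesOrders
    COMMIT TRANSACTION
    GO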

    When you don’t want readers to acquire S locks (which isn’t really recommended), you can use the Isolation Level Read Uncommitted. Read Uncommitted just means you are able to read dirty data – data that isn’t yet committed. It’s blazingly fast (no one else can block you), but on the other hand it’s very dangerous, because it’s uncommitted data. Just think about what happens if the uncommitted transaction is aborted after you have read the data: you are holding data that never logically existed in your database. Your hands are very dirty now. I see a lot of cases during my consulting engagements where people use Read Uncommitted or the NOLOCK query hint to get rid of blocking situations in SQL Server. But that’s not the preferred way to handle blocking. And as you will see later, even NOLOCK can block…
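
    You can get this behavior in two ways: either switch the whole session to Read Uncommitted, or apply the NOLOCK table hint to a single table reference. A quick sketch (again with SalesOrders as a placeholder table name):

    -- Option 1: switch the whole session to Read Uncommitted
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    GO

    SELECT * FROM SalesOrders
    GO

    -- Option 2: the NOLOCK table hint has the same effect,
    -- but only for this one table reference
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    GO

    SELECT * FROM SalesOrders WITH (NOLOCK)
    GO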

    In Read Committed you can get so-called Non-Repeatable Reads, because someone else can change the data while you read it twice within your transaction. If you want to avoid Non-Repeatable Reads, you can use the Isolation Level Repeatable Read. In Repeatable Read SQL Server holds the S locks until you end your transaction with a COMMIT or ROLLBACK. That means nobody else can change the data you have read, and you get Repeatable Reads within your transaction.
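
    The pattern looks like this (again, SalesOrders is just a placeholder table name):

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    GO

    BEGIN TRANSACTION
        -- The S locks on the records that were read are now held
        -- until the end of the transaction...
        SELECT * FROM SalesOrders

        -- ...so this second read returns the same records unchanged
        -- (new phantom records could still show up, though)
        SELECT * FROM SalesOrders
    COMMIT TRANSACTION
    GO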

    In every Isolation Level discussed so far you can also get so-called Phantom Records – records that appear and disappear in your result set. If you want to get rid of these phantom records, you have to use the Isolation Level Serializable, which is the most restrictive one. In Serializable SQL Server uses so-called Key Range Locking to eliminate phantom records: complete ranges of data are locked, so that no other concurrent transaction can insert new records into those ranges.
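
    And the same sketch for Serializable (the table SalesOrders and the column OrderDate are again just placeholders):

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    GO

    BEGIN TRANSACTION
        -- SQL Server acquires Key Range Locks on the scanned range,
        -- so no concurrent transaction can insert new records into it
        SELECT * FROM SalesOrders WHERE OrderDate >= '20140101'

        -- The same query returns exactly the same result set,
        -- no phantom records can show up in between
        SELECT * FROM SalesOrders WHERE OrderDate >= '20140101'
    COMMIT TRANSACTION
    GO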

    As you can see from this description, the more restrictive your Isolation Level is, the more it hurts the concurrency of your database. So you have to choose the right Isolation Level very wisely. Always running in Read Committed doesn’t make sense, always running in Serializable doesn’t make sense. It depends, as usual ;-)

    By now I have laid out the foundation of Transaction Isolation Levels in SQL Server, and now I will show you 3 different cases where my descriptions from above will not hold. In some specific cases SQL Server has to guarantee the correctness of your transactions by changing the Transaction Isolation Level for specific SQL statements under the hood – without any intervention from you. Let’s start…

    NOLOCK never blocks!?

    As I have described in the introduction, the NOLOCK query hint prevents the acquisition of S locks while reading records – for that specific SQL statement. This makes your SQL statements very fast, because the SELECT query can’t be blocked by any other transaction. I also refer to NOLOCK as the Turbo-Booster of SQL Server.

    But unfortunately NOLOCK can also block when you are dealing with concurrent DDL statements (Data Definition Language) like ALTER TABLE. Before we can understand this behavior, we need to have a more detailed look at DDL statements, and at what happens internally in SQL Server when we run a simple SELECT query.

    When you change a table with the ALTER TABLE DDL statement, SQL Server acquires a so-called Schema Modification Lock (Sch-M) on that table. When you run a simple SELECT query against the same table at the same time, SQL Server first has to compile a physical execution plan. And during the compilation phase SQL Server needs a so-called Schema Stability Lock (Sch-S).

    And both locks (Sch-M and Sch-S) are just incompatible with each other! This means that even a NOLOCK statement can block, because in the first step an execution plan has to be compiled. So your NOLOCK statement blocks even before SQL Server knows the physical execution plan. Have you ever thought about that behavior when upgrading your database schema in production? Just think about it… Let’s demonstrate this behavior with a simple example. In the first step I’m creating a new table and inserting some records into it:

    -- Create a new test table
    CREATE TABLE TestTable
    (
        Column1 INT,
        Column2 INT,
        Column3 INT
    )
    GO

    -- Insert some test data
    DECLARE @i INT = 0
    WHILE (@i < 10000)
    BEGIN
        INSERT INTO TestTable VALUES (@i, @i + 1, @i + 2)
        SET @i += 1
    END
    GO
    

    And afterwards we just start a new transaction and add a new column to our table by executing an ALTER TABLE DDL statement:

    -- Begin a new transaction and do some work
    BEGIN TRANSACTION
    -- Add a new column
    -- DDL statements require a Sch-M lock on the objects that are modified.
    -- In this case, the table "TestTable" gets a Sch-M lock (Schema modification lock)!
    ALTER TABLE TestTable ADD Column4 INT
    

    As you can see from the previous code, the transaction is still ongoing, and isn’t yet committed. So let’s open a different session in SQL Server Management Studio and execute the following SQL statement:

    -- The statement is now blocking, even with the NOLOCK query hint!
    -- SQL Server has to compile the query, and requests a Sch-S lock (Schema Stability lock).
    -- This lock is incompatible with the Sch-M lock!
    SELECT * FROM TestTable WITH (NOLOCK)
    GO
    

    As you will see immediately, the SQL statement will not return a result, because it’s blocked by the other active transaction – even with the NOLOCK query hint! You can further troubleshoot this blocking situation by querying the DMV sys.dm_tran_locks for both sessions. As you can see, the Sch-M lock blocks the Sch-S lock.
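
    A query like the following shows you the granted Sch-M lock and the waiting Sch-S lock (just a sketch – the session IDs 53 and 54 are placeholders, replace them with the SPIDs of your own two sessions):

    -- Show the locks of both sessions (53 and 54 are placeholders for your SPIDs)
    SELECT
        request_session_id,
        resource_type,
        request_mode,
        request_status
    FROM sys.dm_tran_locks
    WHERE request_session_id IN (53, 54)
    GO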

    Using the NOLOCK query hint or the Transaction Isolation Level Read Uncommitted is no guarantee that your SQL statement will complete immediately. In the past I have worked with a lot of customers who have struggled with that specific problem.

    Read Committed doesn’t hold locks!?

    As you have learned earlier in the introduction about the Read Committed Isolation Level, a S lock is only held while the record is being read. This means the lock is released as soon as the record has been read. So when you read data from a table, you hold at any point in time only one S lock – for the record that is currently being processed. This description is only true as long as your execution plan doesn’t contain blocking operators – like a Sort operator. When your execution plan contains such an operator, it means that SQL Server has to create a copy of your data.

    After the copy of the data has been made, the original table/index data doesn’t need to be retained anymore. But creating a copy of the data only scales and performs well when you are dealing with a small amount of data. Imagine you have a table definition with a column of the VARCHAR(MAX) data type. In that case every row can store up to 2 GB in that column. Creating a copy of that data would not really scale, and would blow your memory and TempDb away.

    To avoid this scalability problem, SQL Server just holds the S locks until the end of your statement. Therefore it’s not possible for someone to change the data in the meantime (the S lock blocks X locks), and SQL Server can just reference the original, stable, unchanged data. As a result, your statement behaves as if it were running in the Isolation Level Repeatable Read, which hurts the scalability of your database. Let’s create the following database schema to demonstrate this behavior:

    -- Clean up: if the TestTable from the previous demo still exists, drop it first
    -- (make sure the open transaction in the other session has been rolled back)
    IF OBJECT_ID('TestTable') IS NOT NULL
        DROP TABLE TestTable
    GO

    -- Create a new table
    CREATE TABLE TestTable
    (
        ID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
        Col2 INT,
        Col3 VARCHAR(MAX)
    )
    GO

    -- Insert some records into the table
    INSERT INTO TestTable VALUES (1, 'abc'), (2, 'def'), (3, 'ghi')
    GO

    -- Begin a new transaction, so that we are blocking some records in the table
    BEGIN TRANSACTION
    UPDATE TestTable SET Col2 = 1 WHERE ID = 3
    

    As you can see from the script, I’m just creating a simple clustered table with 3 records and the Clustered Key values 1, 2, and 3. In the next step I start a new transaction and lock the 3rd record in the Clustered Index. This is just the setup of this demo. In a separate session we now try to read from the table – of course the SELECT statement will block:

    -- This statement only acquires a key lock on the current record
    SELECT Col3 FROM TestTable
    GO
    

    When we look into the DMV sys.dm_tran_locks, you can see very nicely that the SELECT statement waits for the S lock on the Clustered Key value of 3. This is just the traditional behavior of the Read Committed Isolation Level, because we have no blocking operator in our execution plan. But as soon as we introduce a blocking operator (and read a LOB data type), things will change:

    -- Because of the Sort operator (ORDER BY), this statement now acquires
    -- and holds key locks on the records it has already read
    SELECT Col3 FROM TestTable
    ORDER BY Col2
    GO
    

    As you can see, we are now using the ORDER BY clause, which gives us a Sort operator in the execution plan. Of course the SELECT statement blocks again. But when we look into the DMV sys.dm_tran_locks, you will see a completely different behavior of the Read Committed Isolation Level: SQL Server has now acquired the S locks on the first 2 rows (Clustered Key values 1 and 2), but hasn’t released them again! SQL Server holds these locks until our SELECT statement completes to prevent concurrent changes to the underlying data.
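
    If you want to verify this yourself, a small variation of the sys.dm_tran_locks query shows the two granted S key locks and the waiting one (again just a sketch – 53 is a placeholder, replace it with the SPID of the blocked SELECT session):

    -- Show the key locks of the blocked SELECT session (53 is a placeholder SPID)
    SELECT resource_type, request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE request_session_id = 53
        AND resource_type = 'KEY'
    GO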

    Effectively our SELECT statement is running in the Isolation Level REPEATABLE READ. Just think about that side-effect when you are designing your next table schema, and you want to include LOB data types in one of your main transactional tables…

    Key Range Locks are specific to Serializable!?

    A few weeks ago I worked with a customer (during a SQL Server Health Check) who encountered Key Range Locks in the default Isolation Level Read Committed. As you know from the beginning of this blog posting, SQL Server only uses Key Range Locks in the Isolation Level Serializable. So the question was where these Key Range Locks were coming from.

    When we analyzed the database schema further, we found tables with foreign key constraints where Cascading Deletes were enabled. As soon as you have Cascading Deletes enabled on a foreign key, SQL Server uses Key Range Locks on the child table when you delete records from the parent table. This makes sense, because the Key Range Locks prevent the insertion of new records during the Cascading Delete operation, and therefore the referential integrity of the tables is retained. Let’s have a look at a more concrete example. In the first step we create 2 tables and define a foreign key between them with Cascading Deletes enabled.

    -- Create a new parent table
    CREATE TABLE Parent
    (
        Parent1 INT PRIMARY KEY NOT NULL,
        Parent2 INT NOT NULL
    )
    GO

    -- Create a new child table
    CREATE TABLE Child
    (
        Child1 INT PRIMARY KEY NOT NULL,
        Child2 INT NOT NULL,
        -- The following column will contain a Foreign Key constraint
        Parent1 INT NOT NULL
    )
    GO

    -- Create a foreign key constraint between both tables,
    -- and enable Cascading Deletes on it
    ALTER TABLE Child
    ADD CONSTRAINT FK_Child_Parent
    FOREIGN KEY (Parent1) REFERENCES Parent(Parent1)
    ON DELETE CASCADE
    GO

    -- Insert some test data
    INSERT INTO Parent VALUES (1, 1), (2, 2), (3, 3)
    INSERT INTO Child VALUES (1, 1, 1), (2, 2, 1), (3, 3, 1)
    GO
    

    When you now begin a new transaction and delete the record with the value of 1 from the parent table, SQL Server also has to delete the corresponding rows from the child table (3 rows in our case), because of the Cascading Delete:

    -- Start a new transaction and analyze the acquired locks
    BEGIN TRANSACTION
    -- This statement deletes the record from the parent table,
    -- and the 3 records from the child table
    DELETE FROM Parent
    WHERE Parent1 = 1
    -- SQL Server uses 3 RangeX-X locks, even with the default
    -- Isolation Level of Read Committed
    SELECT * FROM sys.dm_tran_locks
    WHERE request_session_id = @@SPID
    COMMIT
    GO
    

    When you look into the DMV sys.dm_tran_locks during that transaction, you can see that your session has acquired 3 RangeX-X locks – Key Range Locks! These locks are needed by SQL Server to prevent the insertion of new records during the delete operation. As you can see, SQL Server has transparently promoted your Isolation Level to Serializable to guarantee the correctness of your transaction.

    Summary

    As you have seen in this blog posting, don’t take anything for granted in SQL Server. I always ask people if they know the locking behavior of the various Isolation Levels in SQL Server in detail. If they answer yes, I just confront them with the various phenomena we have seen in this blog posting.

    The most important takeaway is that SQL Server is able to promote the Isolation Level for a given SQL statement. So don’t assume that SQL Server is always running your queries in the Isolation Level that you have set.

    Thanks for reading

    -Klaus
