Have you ever been told that you should use NOLOCK? I have, and it's usually been with OLTP systems where the person making the recommendation wants to “improve” the performance of the data platform by reducing, or removing, the locks that are held for SELECT queries.
Those that know better will often point out that NOLOCK allows dirty reads. Data that has not been committed can, and will, be returned. In plain terms, an insert, update, or delete that is in process will be considered for the result set, regardless of the state of the transaction. This can mean returning rows from an insert that may later roll back, or returning rows that are in the middle of being deleted.
What about times when a query returns rows that you know for certain are not being modified? Suppose that you are updating rows for one client and need to return rows for a second client. In this case, will the use of NOLOCK be safe? The data isn't being modified for the second client, so you might assume that returning that data cannot produce dirty results. Unfortunately, this assumption is incorrect.
NOLOCK Side Effect Demo
Before explaining why the assumption is incorrect, let's first demonstrate the problem. We'll start with a table that stores the points a client has in the system. In the code below, we add 10 rows for the client with personid equal to -1. Then the script loops, inserting rows for random clients and adding more points. Since the table uses a uniqueidentifier for the clustering key, the inserts will cause frequent page splits.
IF OBJECT_ID('dbo.Points') IS NOT NULL
    DROP TABLE dbo.Points;

CREATE TABLE dbo.Points
    (
    id UNIQUEIDENTIFIER PRIMARY KEY DEFAULT ( NEWID() )
    ,personid INT
    ,dollars MONEY
    ,filler_data CHAR(1000) DEFAULT ( 'x' )
    );

INSERT INTO dbo.Points ( personid, dollars )
VALUES (-1,10),(-1,10),(-1,10),(-1,10),(-1,10),
       (-1,10),(-1,10),(-1,10),(-1,10),(-1,10);

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Points ( personid, dollars )
    VALUES ( CAST(RAND() * 10000 AS INT), 10 );
END
In a second session, execute the query below, which returns a summary of the data for the client where personid equals -1. In this script, the query uses NOLOCK, since we are working under the premise that it is safe to use the hint on rows that we know are not affected by the ongoing inserts.
DECLARE @points MONEY;

WHILE 1 = 1
BEGIN
    SET @points = ( SELECT SUM(dollars)
                    FROM dbo.Points (NOLOCK)
                    WHERE personid = -1 );

    IF @points <> 100
    BEGIN
        SELECT @points AS current_total;
        BREAK;
    END
END
On the first execution of the second script, the loop exited almost immediately. As the image below shows, the first run completed in less than a second. Instead of returning a total of $100 in points, the result was $90.
Running the script a second time returns different results. Instead of the $90 returned last time, the total is now $110 in points. This time it took 7 seconds to arrive at an incorrect total.
While these two executions returned rather quickly, a few subsequent executions ran for 7, 17, and 30 minutes before incorrect results were encountered. Even though the length of time increased, the risk of incorrect results stayed the same, and there is no guarantee that incorrect results won't be returned at any time.
The cause of the incorrect results in the example is the page splits triggered by the data modifications. While the second client's data wasn't being changed, the pages that contained this data were. When a page splits, all of the rows on the page are affected, even those unrelated to the rows actually being changed. Depending on the state of the page split and where the data currently resides, this can result in either more or fewer rows being returned.
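If you want to confirm that page splits are occurring while the demo runs, one way is to watch the leaf-level page allocation counters for the table's clustered index. This is a sketch, assuming the dbo.Points table from the demo above exists in the current database; leaf_allocation_count in sys.dm_db_index_operational_stats counts leaf-page allocations, which includes allocations caused by page splits.

```sql
-- Observe leaf-level page allocations (which include page splits)
-- for the clustered index on the demo table.
SELECT OBJECT_NAME(ios.object_id) AS table_name
    ,ios.index_id
    ,ios.leaf_allocation_count     -- includes allocations from page splits
    ,ios.leaf_page_merge_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Points'), NULL, NULL) ios;
```

Run this query before and during the insert loop; a rapidly climbing leaf_allocation_count is consistent with the frequent page splits described above.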
The typical protection against this type of problem is the READ COMMITTED isolation level, which prevents dirty reads from occurring. Of course, this uses locking, and it should, to guarantee that dirty reads do not occur. But it provides the guarantee that the data returned is the data that was committed, regardless of whether a page split is moving your data around.
One of the main concerns that people have with locking is the blocking that occurs when two users try to access the same locked resource. As an alternative to using NOLOCK, try the SNAPSHOT isolation level instead. With it, readers won't block writers, which will reduce the amount of lock blocking on your data platform.
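As a sketch of what that looks like, the steps below enable snapshot isolation at the database level and then opt a session into it; the database name Sandbox is an assumption for illustration, so substitute your own.

```sql
-- Enable snapshot isolation for the database (Sandbox is a placeholder name).
ALTER DATABASE Sandbox SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reading session, opt in to snapshot isolation,
-- then query without NOLOCK; the read sees only committed data
-- as of the start of the transaction, without blocking writers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

SELECT SUM(dollars)
FROM dbo.Points
WHERE personid = -1;
```

Note that snapshot isolation maintains row versions in tempdb, so there is a trade-off in tempdb usage rather than locking.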
A Little Homework
Now it's time to put on your thinking cap. Consider the calls that you've received from time to time where users complain about incorrect results on a report, which then runs fine the next time. Or consider the order that is placed that exceeds the available cap for the user. Or all of the other day-to-day issues where there seems to be an unreproducible bug in your application related to incorrect data being returned.
Are you using NOLOCK? Did a page split change the location of the data being queried, changing the results in-flight? Given the issues, which I am sure you all have, is it really ok to use NOLOCK?
Feel free to comment below with your thoughts.