The Effect of NOLOCK on Performance

  • Comments posted here are about the content posted at

    When in doubt - test, test, test!


  • Hi Wayne,

    Nice article!

    I would be interested to know if there is a linear relation between the performance gain from using NOLOCK and the number of joins performed in the query. I suspect that you might see the effect increase with the number of joins. Also, the effect is probably greatest in a busy database where there might be a problem with lock waits - but this would be difficult to test in a controlled environment.



  • Hi,

    Have you tried running the queries in a different order (first the NOLOCK variant and then the normal one)?  And running them with MAXDOP 1?

    Anyway, there is one little catch with NOLOCK.  SQL Server mostly uses unordered (allocation-order) clustered index scans when NOLOCK is specified, and this can cause duplicate records to be returned when page splits occur while your query is scanning the index.
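    As a rough sketch of the kind of controlled comparison meant here (the table name is hypothetical), both variants can be timed with parallelism removed as a variable:

    ```sql
    -- Hypothetical comparison: run the two variants in either order.
    -- OPTION (MAXDOP 1) pins each query to a single scheduler so
    -- parallelism does not skew the timings between runs.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*)
    FROM dbo.Orders              -- hypothetical table
    OPTION (MAXDOP 1);

    SELECT COUNT(*)
    FROM dbo.Orders WITH (NOLOCK)
    OPTION (MAXDOP 1);
    ```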

    Kind regards,


    Sorry, but I'm totally unconvinced by the results, and I personally think this is probably a bad article to have published, as it may mislead.

    Isolation levels are critical to data integrity, and as mentioned above, scans with dirty reads are capable of returning duplicated data where duplicates cannot exist - Itzik Ben-Gan demonstrated this to the UK SQL Server User Group last year. If you check out Tony Rogerson's blog, he has code which demonstrates this.

    You have not run these tests on a server; sorry, but a laptop does not count! I did run a few comparisons on a large clustered server, and as far as query plans and IO stats from QA go, the nolock hint made no difference. I couldn't find any queries slow enough to measure time diffs, but I accept that different locking strategies may well impact duration. Possibly running this to compare stats in Profiler would give a better indication. Where there is contention (most OLTP apps), you probably would make some gains on a table scan - but hopefully you wouldn't want to do table scans anyway.

    Your query is table scanning, which will pretty well always invoke parallelism, since the query cost of a table scan on a largish table exceeds the cost threshold for parallelism. I agree with Wesley; you should have done this with MAXDOP specified.

    Please, please - avid posters who have read this article - check out the effects of dirty reads and isolation levels before you start adding nolock hints all through your code as a tuning aid. It is not one, and it should not be used as such unless you really, really don't care whether your data may be incorrect.

    On the matter of testing, it is vital to test on the same O/S and SQL Server edition, and on hardware reasonably comparable to the production environment.




    [font="Comic Sans MS"]The GrumpyOldDBA[/font]

  • The issue of the cache affecting the runtimes, physical reads, etc. also needs to be considered.  True testing would involve using the DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE commands between runs.
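    For reference, a minimal sketch of clearing both caches between test runs (test servers only - never run this on production):

    ```sql
    CHECKPOINT;              -- flush dirty pages so clean buffers can be dropped
    DBCC DROPCLEANBUFFERS;   -- empty the buffer pool: the next run starts with cold data
    DBCC FREEPROCCACHE;      -- discard cached plans: the next run recompiles
    ```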

    I'd also agree w.r.t. seeing results from a system where "test runs" are being subjected to outside influences such as other users.

    It's a bit of a fallacy that you always need to clear the cache(s); the total of logical + physical reads should always be the same regardless of the make-up. In real terms you'd not spend time tuning queries which didn't remain in cache. Clearing the cache just gives worst-case duration/CPU; the IO would be the same.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]

  • I have to agree with Colin here. This is one of those "dangerous" articles that could mislead people. You need to think twice before using (NOLOCK). I would add that a hint is just that - "a hint" - and not 100% of the time will you get what you asked for.

    Also, it may be safer to use READPAST instead of NOLOCK as the first option.
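    To illustrate the difference (the table and columns are hypothetical): NOLOCK reads through other sessions' locks, while READPAST skips locked rows entirely:

    ```sql
    -- Dirty read: returns rows held under exclusive locks, including
    -- uncommitted changes that may later be rolled back.
    SELECT OrderID, Status
    FROM dbo.Orders WITH (NOLOCK);

    -- Skip-locked read: returns committed rows only, but silently
    -- omits any rows another session currently has locked.
    SELECT OrderID, Status
    FROM dbo.Orders WITH (READPAST);
    ```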

    There are scenarios where it fits the bill, but that (in my opinion) should have been the message this article proposed.

    Just My $0.02

    * Noel

  • I haven't been the contrarian lately, so....

    I tend to use NOLOCK frequently, but I don't write financial apps and I generally understand the data movement that might occur. I think a comparison of READPAST vs NOLOCK would be interesting indeed.

    I think the usefulness varies, and I would have guessed at maybe a 5% savings. Maybe it's the same thing said a different way, but I use NOLOCK because I know that I don't need a lock, don't want to risk blocking writers, and don't want to be blocked - which I guess translates to a performance boost because I shouldn't be waiting or causing waits - but I don't use it to get an x% decrease in resource usage (though maybe I should?).

    A couple of thoughts from that. One: can we craft a test that would satisfy us all, and then maybe see if the author or someone else is game to dig in deeper? Two: maybe we can get Steve to lean on MS to see if they can explain a bit about the performance implications.

    READPAST has me thinking - should I be using it more?

  • I agree, somewhat, with both sides of the argument on this one... NOLOCK is useful; I've used it to help tweak our internal system here where people were needlessly suffering from blocking.   The concern about misleading people is valid, as I believe a well-written system should not require the extensive use of NOLOCK; but I'm assuming that to be misled you have to be a novice with SQL Server.  If you're a novice with SQL Server, you probably don't even know what an isolation level is, or that the default isolation level is read committed, which will indeed benefit from the use of NOLOCK in certain, CAREFULLY CONSIDERED, situations.


    There's my 2 cents.... I'm Canadian, so does that make mine less valuable?

    Matt Trevors


  • I second Colin here.  NOLOCK is fine for some things (as Andy points out).  However, it can be dangerous too, and one MUST understand the tradeoffs involved before using it.  This article didn't even hint at the tradeoffs.

    I would suggest that if you really need non-interference between "readers" and "writers" you should use the Read Committed Snapshot Isolation Level available in 2005.  It at least doesn't allow unexpected duplicates and dirty reads.
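    As a sketch (the database name is hypothetical), enabling it is a one-time database setting:

    ```sql
    -- Under Read Committed Snapshot Isolation, readers see the last
    -- committed version of each row instead of blocking on writers -
    -- no dirty reads, no unexpected duplicates, no NOLOCK hints needed.
    ALTER DATABASE MyDatabase          -- hypothetical database name
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;           -- close other connections so the change can apply
    ```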

    If eliminating overhead is the prime motivation, you shouldn't be using a relational DBMS.  File-based storage mechanisms are far more "efficient" for churning through large data sets.  There is a lot of overhead imposed by things like transactional and logical consistency, but for most applications the costs of ignoring these things are far greater.  The problem is that those costs are often not immediately apparent.  You certainly can't write a simple article that purports to show a percentage "improvement" - or loss - of logical correctness...

    Using NOLOCK indiscriminately WILL bite you, your employer, and your customers eventually and it will bite harder than you might think.


    If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek


  • Just because I have them handy, here are a few links to discussions on this topic...

    Ryan Randall

    Solutions are easy. Understanding the problem, now, that's the hard part.

  • I have to chime in as well. If anything, I've found overuse of NOLOCK by people who don't really understand why blocking exists.

    Blocking is good.

    It ensures that there are not conflicting versions of data at any given time. In some instances that's unimportant and that's when NOLOCK can come in handy. But I've found unfortunately that people use locking hints as a way to artificially increase performance on already poorly written queries. NOLOCK can be good, but it should be used with a lot of forethought, IMO.

    This article does a great job of showing how NOLOCK works and how you can realize performance benefits from using it. I think the article would have benefited from a good warning about the reasons for not using NOLOCK, however.

  • I have to take issue with the following statement:

    When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a Dirty Read, which means that another process could be updating the data at the exact time you are reading it.

    A dirty read is not the result of not issuing locks during a Select query. A dirty read is the result of not honoring locks issued by update/insert/delete queries.

    However, a Select query (without using nolock) does both: it honors locks taken by other queries, and it issues locks of its own. The locks it issues are shared locks, meaning other queries requesting a shared lock can still read the data.

    If you need an exclusive lock (i.e., to perform an update), a shared lock will block the exclusive lock. On a system that is heavily read and only occasionally updated, not using nolock can cause extensive blocking and often timeouts. This is particularly true when the lock count is high enough to cause SQL Server to escalate the exclusive lock request to a table lock.
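    A sketch of that blocking scenario, using two sessions (the table and values are hypothetical):

    ```sql
    -- Session 1: open transaction holding an exclusive (X) lock.
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts               -- hypothetical table
    SET Balance = Balance - 100
    WHERE AccountID = 1;
    -- (left uncommitted; the row stays exclusively locked)

    -- Session 2: a plain SELECT blocks here, waiting on the X lock...
    SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;

    -- ...whereas NOLOCK returns immediately with the uncommitted value -
    -- a dirty read that is wrong if Session 1 rolls back.
    SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountID = 1;
    ```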

    So when to use nolock is definitely a judgement call.

    My blog: SQL Soldier
    SQL Server Best Practices
    Twitter: @SQLSoldier
    My book: Pro SQL Server 2008 Mirroring
    Microsoft Certified Master: SQL Server, Data Platform MVP
    Database Engineer at BlueMountain Capital Management

  • I love Aaron Ingold's quote, to be honest: "Blocking is good."

    Many people tend to forget this 🙁

    I too see many situations where people use NOLOCK to compensate for poor design, and that is always a bad thing.

  • Lol, sorry if I don't get out much, but that Canadian penny joke is incredibly funny.  This is the first time I've heard it; I'll have to use it sometime.
