READ UNCOMMITTED runs slower than READ COMMITTED

  • Hello

    I have the following query which runs considerably slower when we change the isolation level to READ UNCOMMITTED, and I just wanted to understand why. I have attached the two query plans and scripts.

  • matthew.peters (11/22/2012)


    Hello

    I have the following query which runs considerably slower when we change the isolation level to READ UNCOMMITTED, and I just wanted to understand why. I have attached the two query plans and scripts.

    Not sure what the problem is, but I can't open your query plan with SQL Server 2008 client tools (SSMS).

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • OK, I just uploaded them again. What's the error when you try to open them?

  • I don't know about Jeff, but it seems to be a 30 KB XML file that, when I open it in Management Studio, just brings up a blank page. I do have one question, however: have you tested several times for consistency? Just to make absolutely certain that the behavior is consistent (read committed always results in a faster execution time)?

    I can't understand how that could be the case. Read uncommitted should have (if anything) less overhead because of the locks. I can't understand why there would be a difference in the execution plans at all.

  • Why don't you save them as .sqlplan files and upload them again?

    Also, upload the plans for both: the faster plan and the slower one.

  • Not sure what the problem is here; I can open the attachment fine, and if you have a look it is saved as a .sqlplan file. It may be the version of Management Studio I'm on; I'm running 2008 R2. I have put all the files into a rar file and uploaded them. If this doesn't work, the only thing I can think of is to take a screenshot, so let me know how you go.

    To answer your first question I ran it several times and got the same result.

  • First off, why is the statement wrapped in sp_cursorprepexec?

    Second, the parameters supplied in the two SQL scripts are different, so that will affect the query time.

    Third, read uncommitted brings in a whole different behavior, so you could be running into things like dirty reads, which will change the behavior of the execution.
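
    For what it's worth, here's a minimal sketch of how I'd flip between the two isolation levels when retesting by hand; the temp table and columns are just placeholders, not the objects from your attached scripts:

    -- Stand-in table purely for the harness; the real query is of course different
    IF OBJECT_ID('tempdb..#t') IS NOT NULL DROP TABLE #t;
    CREATE TABLE #t (Id int PRIMARY KEY, Payload varchar(50));
    INSERT INTO #t (Id, Payload) VALUES (1, 'a'), (2, 'b');

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Run 1: the default isolation level
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT Id, Payload FROM #t ORDER BY Id;

    -- Run 2: the same statement, but dirty reads are now allowed
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT Id, Payload FROM #t ORDER BY Id;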

  • This is often because the non-read-uncommitted SELECT is blocked by INSERT/UPDATE/DELETE operations that are currently running. You can use sp_whoisactive (an awesome free tool on sqlblog.com) to observe this behavior if that is the issue; a bare-bones DMV check is sketched at the end of this post as well.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
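
    (The sketch mentioned above, as a fallback to sp_whoisactive: it uses only the built-in DMVs and nothing in it is specific to your database. Run it while the slow SELECT is executing.)

    -- Who is blocked, by whom, and what they are waiting on
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS blocked_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;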

  • I've seen Read Committed run faster than Read Uncommitted in a couple of circumstances.

    One is if the table is heavy on data modification and Read Committed Snapshot Isolation has been enabled. In that case, Read Committed will read row versions from tempdb, which can be faster, especially if tempdb is on different drive media than the main DB (since tempdb is where the row versions are kept for RCSI). It can be faster or slower depending on a lot of variables, but it definitely can be faster in some situations. (A quick way to check whether RCSI is on is sketched at the end of this post.)

    The other is heavy updates causing lots of page splits. The Read Committed version will lock the pages it needs, preventing concurrent splits. This can make a big difference in I/O time for the query, especially if it covers a range of data that parallels the leading edge of one of the indexes. The Read Uncommitted query has to deal with pages being split while it's reading, potentially reading the same data two or more times, which can leave the total query both slower and returning incorrect results at the same time.

    Either one is kind of rare, but they can happen.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • First off, why is the statement wrapped in sp_cursorprepexec?

    Second, the parameters supplied in the two SQL scripts are different, so that will affect the query time.

    Third, read uncommitted brings in a whole different behavior, so you could be running into things like dirty reads, which will change the behavior of the execution.

    The statement is wrapped in sp_cursorprepexec because it is a prepared statement that I captured through Profiler, coming from the application.

    If you check the statements you will find that the parameters in both are the same.

    I just did some more testing and found that the read committed run returns no result set, yet the read uncommitted run returns a result set??? (I don't know why I didn't pick this up in the first place.) Why would this be? They are exactly the same statement, just with a different isolation level, and the database I'm testing on has no other users connected.

  • This is getting really interesting. Can you please post the exact Profiler trace of both (the read uncommitted and the read committed)? That would obviously explain the faster run time with read committed - there was no data!

  • Never mind about the SQL - I just downloaded the rar file and was able to extract the .sql files.

  • Could there be something as silly as an open transaction that hasn't committed in this database? A simple SELECT @@TRANCOUNT or a DBCC OPENTRAN should tell you (a copy-and-paste version is at the end of this post).

    The plans are even more baffling - the committed plan seems to have a clustered index insert at the end into a permanent table in tempdb that I cannot even see in the query. Does the object tempdb.CWT_PrimaryKey ring any bells for you? Is there an INSERT ... SELECT hidden somewhere that's showing up in the plan?
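
    (Here are those checks ready to paste into the problem database; nothing in them is specific to your schema.)

    -- Open transactions in the current session
    SELECT @@TRANCOUNT AS open_tran_count;

    -- Oldest active transaction in the current database, if any
    DBCC OPENTRAN;

    -- Sessions on the instance that currently have a transaction open
    SELECT session_id, transaction_id
    FROM sys.dm_tran_session_transactions;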

  • I've checked for open transactions and there are none.

    I saw the clustered insert in the plan but have no idea why it's there. I don't know what tempdb.CWT_PrimaryKey is, and there are no INSERT ... SELECTs in the query. However, I should mention that "PC0".pzPVStream AS "PC0pzPVStream" is a BLOB column; when I took out this column along with the ORDER BY clause, the clustered index insert was removed along with the sequence project, segment and sort from the query plan, and the two queries had the same run time.

  • While I don't know what specifically is going on in your case, a few more questions and suggestions come to mind (rough T-SQL for points 1 and 2 is sketched below), namely:

    1. If you're using the old text/image data types, have you tried the "text in row" table option, set to the maximum possible size of your BLOB value (if that happens to be below 7 KB, that is)?

    2. Convert the column to nvarchar(max) or varbinary(max) and see if the execution plan changes.

    3. You said that removing the BLOB column produces the same execution plans. Do they also produce the same results? You had mentioned before that you saw zero rows for read committed.

    None of the above explains how read uncommitted could be slower than read committed, I'm afraid.
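
    (Rough T-SQL for points 1 and 2; dbo.MyTable is a placeholder because I don't know the real table name, and this assumes pzPVStream is currently an image column.)

    -- 1. Keep small LOB values on the data page (legacy text/image columns only)
    EXEC sp_tableoption 'dbo.MyTable', 'text in row', '7000';

    -- 2. Move the column off the legacy type onto one of the (max) types
    ALTER TABLE dbo.MyTable ALTER COLUMN pzPVStream varbinary(max);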

