Minimizing Locking on UPDATE Statements

  • Hi All,

    I have a main table called Stock for example.

    One process (a service) inserts data into this table, and also updates certain fields of the table periodically.

    Another process (a SQL job) updates the table with certain defaults and rules that are unknown to the service - handling some calculations and replacing NULL values where we can estimate them, etc.

    These 2 processes have started to deadlock each other horribly.

    The SQL job calls one stored procedure that has around 10 statements in it. This stored proc runs every minute. Most of the statements are of the form below - the idea being that once the data has been corrected, the update will not affect those rows again. I guess there are read locks on the selecting part of this query - but usually it updates 0 rows, so I am wondering if locks are still taken?

    UPDATE s
    SET equivQty = Qty * ISNULL(p.Factor, 4.5) / 4.5
    FROM Stock s
    LEFT OUTER JOIN Pack p
        ON s.Product = p.ProductId
        AND s.Pack = p.PackId
    WHERE ISNULL(equivQty, 0) <> Qty * ISNULL(p.Factor, 4.5) / 4.5

    The deadlocks are always between these statements from the stored procedure - and the service updating rows. I can't really see how the deadlocks occur but they do.

    Some suggestions from developers have been to use NOLOCK or to use snapshot isolation, etc. My gut is that these are the wrong options and we need to find the real cause.

    Another suggestion has been to try an EXISTS check before the update, as below:

    IF EXISTS ( SELECT based on above criteria )
        ( UPDATE as before )

    Does this reduce the locking at all? I don't know how to test the theory - I added this code to some of the statements, and it didn't seem to make much difference.

    Is there a way to make a process (in my case the stored procedure) give up if it can't acquire the locks, rather than being deadlocked - which leads to job failures and emails etc.?

    We are currently trying to filter the data that is updated down to only the last few months - to reduce the number of rows even analyzed - as the deadlocking does seem to be affected by the number of rows in the tables.

    Any more advice would be great.

    I would love to post some code snippets, but it's not something that I can easily mock up, really...

    Thanks all,

    Steve

  • Deadlocks are always about performance. The faster an update occurs, the less likelihood of deadlocks. So, first things first, make sure that your updates are performing well. Are they doing a good clean index seek to retrieve the row being updated, or are they doing scans? If they're doing scans, once a minute, I'm not shocked that you're seeing deadlocks. Next, deadlocks are also about access order. Meaning, I read from TableA and update TableB while the other process is reading from TableB and updating TableA. Make sure your processes, as much as possible, are accessing the tables all in the same order.
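
    To illustrate the access-order point (TableA/TableB here are placeholders, not tables from this thread), the classic pattern looks like this:

    -- Session 1:
    BEGIN TRAN;
    UPDATE TableA SET Col1 = Col1 WHERE Id = 1;  -- X lock on a TableA row
    -- Session 2, meanwhile:
    BEGIN TRAN;
    UPDATE TableB SET Col1 = Col1 WHERE Id = 1;  -- X lock on a TableB row
    -- Session 1:
    UPDATE TableB SET Col1 = Col1 WHERE Id = 1;  -- blocked behind session 2
    -- Session 2:
    UPDATE TableA SET Col1 = Col1 WHERE Id = 1;  -- blocked behind session 1:
                                                 -- deadlock, one victim chosen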

    Snapshot isolation level is a great way to reduce contention on the system. It does add load to tempdb, so be sure you're ready for that. However, I'm a strong advocate of using snapshot isolation on most systems (unless there are reasons not to due to existing load, stuff like that). It doesn't hurt normal operations of the system, so this is a good choice. NOLOCK is evil and should be avoided. It leads to dirty reads, which means missing or duplicate rows, which could be even more problematic than deadlocks.

    Another thing to do with deadlocks is to use TRY/CATCH and a retry process to rerun a query that has been deadlocked. It doesn't solve the root problem, but it does prevent errors going back to the client and completes the transaction.
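
    A rough sketch of that pattern (the retry count and delay are arbitrary choices; 1205 is the deadlock error number, and THROW needs SQL Server 2012+ - use RAISERROR on older versions):

    DECLARE @retries int = 3;

    WHILE @retries > 0
    BEGIN
        BEGIN TRY
            BEGIN TRAN;
                -- the deadlock-prone UPDATE statements go here
            COMMIT;
            BREAK;  -- success, stop retrying
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK;
            SET @retries -= 1;
            IF ERROR_NUMBER() <> 1205 OR @retries = 0
                THROW;  -- not a deadlock, or out of retries: surface the error
            WAITFOR DELAY '00:00:00.100';  -- brief pause before retrying
        END CATCH;
    END;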

    Another option if you have lots of reads prior to your updates would be to put an UPDLOCK query hint on those reads. I'd reserve this for a last-ditch answer to the problem because it increases blocking and resource contention, although it can eliminate deadlocks because the strictest locks are taken immediately, preventing that shared lock/exclusive lock aspect of deadlocks.
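
    For example, sketching against the Stock table from the original post (@Product is a placeholder parameter):

    BEGIN TRAN;
        SELECT s.equivQty
        FROM Stock s WITH (UPDLOCK)  -- take update locks on the read, not shared locks
        WHERE s.Product = @Product;
        -- ... the UPDATE of the same rows follows inside the same transaction,
        -- so no other session can sneak in between the read and the write ...
    COMMIT;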

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey (5/5/2015)


    Snapshot isolation level is a great way to reduce contention on the system. It does add load to tempdb, so be sure you're ready for that. However, I'm a strong advocate of using snapshot isolation on most systems (unless there are reasons not to due to existing load, stuff like that).

    Yup. It's at the point where I recommend it in every performance review I do.

    It's not a guaranteed fix for deadlocks, as writers still block writers in read committed snapshot/snapshot, but it will remove most. Remaining deadlocks will likely be fixable with performance tuning, indexing or minor code changes.

    Steve, any chance you can post the deadlock graph? You can get it from the extended events system health session.
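
    If it helps, something along these lines pulls recent deadlock graphs out of the system_health ring buffer (a sketch; assumes the SQL 2012 extended event format):

    SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
    FROM (
        SELECT XEvent.query('.') AS XEvent
        FROM (
            SELECT CAST(st.target_data AS xml) AS TargetData
            FROM sys.dm_xe_session_targets st
            JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
            WHERE s.name = 'system_health'
              AND st.target_name = 'ring_buffer'
        ) AS Data
        CROSS APPLY TargetData.nodes
            ('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData (XEvent)
    ) AS src;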

    This may help overall: https://www.simple-talk.com/sql/performance/sql-server-deadlocks-by-example/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Grant and Gail.

    XDL for deadlock is here

    https://www.dropbox.com/s/aonjva56erfe9ft/TraceDeadLockXML.xdl?dl=0

    Thanks for all the advice... I am going to check the scans / improve the update times - hopefully this will make a difference - and will definitely check the order of the updates.

    I also want to enable snapshot isolation, but wanted to know: what is the correct way to enable it?

    I see the MSDN page says:

    ALTER DATABASE MyDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON

    ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON

    Do I then need to alter any of the queries?

    I also see notes from some people saying:

    ALTER DATABASE >>DATABASE NAME<<
    SET READ_COMMITTED_SNAPSHOT ON
    GO

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    GO

    Thanks for all your help so far.

    Steve

    -- The WHERE clause must be evaluated for each row in the Stock table
    -- in order to determine which rows qualify for the UPDATE.
    -- The likelihood of a table lock might be high.
    -- So, determine which rows qualify without locking the Stock table for update.
    -- Capture the PK of the Stock table, and the new value.
    SELECT s.PrimaryKey, x.NewequivQty
    INTO #Stock
    FROM Stock s
    LEFT OUTER JOIN Pack p
        ON s.Product = p.ProductId
        AND s.Pack = p.PackId
    CROSS APPLY (
        SELECT
            OldEquivQuantity = ISNULL(s.equivQty, 0),
            NewequivQty = s.Qty * ISNULL(p.Factor, 4.5) / 4.5
    ) x
    WHERE x.OldEquivQuantity <> x.NewequivQty

    -- Then apply the update, touching only the rows which will be updated.
    -- The downside? You end up reading the Stock table twice.
    UPDATE s SET equivQty = st.NewequivQty
    FROM Stock s
    INNER JOIN #Stock st
        ON st.PrimaryKey = s.PrimaryKey

    -- Clean up the work table when done.
    DROP TABLE #Stock

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Thanks Gavin / Gail and Grant.

    Here is a link to the deadlock graph. Pretty much always shows the Job being killed and always the same table and key.

    https://www.dropbox.com/s/aonjva56erfe9ft/TraceDeadLockXML.xdl?dl=0

    I am going to analyze each update and ensure it runs as fast as possible, and will also try out the idea of reading the rows to be updated before the update. I don't think I have an ordering issue - as my updates are always in the same order - but it's definitely something I will keep in mind and double-check.

    I am enabling snapshot isolation like this:

    ALTER DATABASE MyDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON

    ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON

    Anything else I need to do - or do the queries that used to use READ COMMITTED now use the snapshot?

    Last question I hope... how long does this ALTER DATABASE usually take? I have people working most days from 6am to 8pm - so I should have done it after hours, I guess.

    Tried it this morning, and after 10 minutes it still hadn't finished, so I cancelled it. Will try again later.

    Thanks again for the time to reply, will advise the outcome a bit later.

    Steve


  • You don't need this:

    ALTER DATABASE MyDatabase
    SET ALLOW_SNAPSHOT_ISOLATION ON

    That's a different kind of snapshot isolation that is controlled by each connection and transaction. READ_COMMITTED_SNAPSHOT should be sufficient. You're probably best setting that in off hours, though. I haven't changed it on a live system in a while. I'm not sure about the blocking & locking implications, but I'm sure there are a few.
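
    In other words (a sketch, using the MyDatabase/Stock names from earlier in the thread):

    -- ALLOW_SNAPSHOT_ISOLATION ON: each session must opt in explicitly.
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRAN;
        SELECT * FROM Stock;  -- reads row versions; takes no shared locks
    COMMIT;

    -- READ_COMMITTED_SNAPSHOT ON: no query changes needed. Plain READ COMMITTED
    -- reads automatically use row versions instead of blocking behind writers.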

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Yeah, running that ALTER DATABASE command requires an X lock on the DB in question, so you effectively have to be the only session using the DB.

    You could run it using WITH ROLLBACK IMMEDIATE to get around that, but you'd obviously want to make sure nothing "important" was running in the DB. I'm paranoid, so I'd probably make it a maintenance window, to be safest.
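
    For reference, that would look something like this (MyDatabase being the placeholder name from earlier in the thread):

    ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;  -- rolls back open transactions and disconnects their sessions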

    Cheers!

  • Jacob Wilkins (5/6/2015)


    Yeah, running that ALTER DATABASE command requires an X lock on the DB in question, so you effectively have to be the only session using the DB.

    You could run it using WITH ROLLBACK IMMEDIATE to get around that, but you'd obviously want to make sure nothing "important" was running in the DB. I'm paranoid, so I'd probably make it a maintenance window, to be safest.

    Cheers!

    Thanks! I really couldn't remember.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Stephen Knott (5/4/2015)


    ...Another suggestion has been to try an EXISTS check before the update, as below:

    IF EXISTS ( SELECT based on above criteria )
        ( UPDATE as before )

    Does this reduce the locking at all? I don't know how to test the theory - I added this code to some of the statements, and it didn't seem to make much difference.

    This will only help where there are no qualifying rows at all. If you've got even one qualifying row, you're still going to lock up the Stock table for the duration of a table scan.
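
    One way to see that for yourself is to run the statement in an open transaction and then look at what the session is holding (sys.dm_tran_locks is the DMV for this):

    BEGIN TRAN;

        UPDATE s
        SET equivQty = Qty * ISNULL(p.Factor, 4.5) / 4.5
        FROM Stock s
        LEFT OUTER JOIN Pack p
            ON s.Product = p.ProductId
            AND s.Pack = p.PackId
        WHERE ISNULL(equivQty, 0) <> Qty * ISNULL(p.Factor, 4.5) / 4.5;

        -- Which locks is this session still holding?
        SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
        FROM sys.dm_tran_locks
        WHERE request_session_id = @@SPID
        GROUP BY resource_type, request_mode, request_status;

    ROLLBACK;  -- undo the test update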

    Did you try the code I posted?

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Hi Chris,

    I didn't yet - but I will.

    I have added COMMITTED SNAPSHOT isolation as it was the least invasive - i.e. didn't need to change any code.

    I do like your idea, and will definitely add that type of checking.

    Will let you know the outcome.

    BTW, for others that posted re the COMMITTED SNAPSHOT - the deadlocks have dropped considerably today.

    Will keep updating as I progress.

  • Stephen Knott (5/7/2015)


    Hi Chris,

    I didn't yet - but I will.

    I have added COMMITTED SNAPSHOT isolation as it was the least invasive - i.e. didn't need to change any code.

    I do like your idea, and will definitely add that type of checking.

    Will let you know the outcome.

    BTW, for others that posted re the COMMITTED SNAPSHOT - the deadlocks have dropped considerably today.

    Will keep updating as I progress.

    Is the code more complex than the sample you posted?

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

    I wouldn't say much more complicated... there are a few more joins, yes... but there are around 10 of those statements in the proc... so I didn't think that adding your code to just one of the queries would suffice - and I didn't have the time yesterday.

    I will add your idea to all the queries some time today or tomorrow, and will see what difference it makes.

    How would you benchmark? I am currently using SET STATISTICS IO ON and OFF and comparing the old proc vs the new one on repeated runs - and looking at total reads?
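
    Roughly like this (the proc names are just placeholders for the old and new versions):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    EXEC dbo.StockCorrections_Old;  -- placeholder name for the current proc
    EXEC dbo.StockCorrections_New;  -- placeholder name for the rewritten proc

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    -- then compare logical reads per statement in the Messages output across repeated runs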

