locking - update statement

  • Considering the case of a 1 mil. rows table that is constantly accessed (online store scenario), is it possible to write an UPDATE statement that will affect, let's say, 700K rows while still letting users run SELECT statements on the table?

  • The users can do a dirty read on the table while it is being updated!! Of course, their results may no longer be valid after the update completes.

  • SELECT Identifier
    INTO #ToUpdate
    FROM Table
    WHERE <Update Criteria>

    WHILE EXISTS (SELECT 1 FROM #ToUpdate)
    BEGIN
        -- Take the next batch of 1000 rows from the work list.
        SELECT TOP (1000) Identifier
        INTO #CurrentUpdate
        FROM #ToUpdate

        UPDATE Table
        SET ...
        FROM #CurrentUpdate
        WHERE Table.Identifier = #CurrentUpdate.Identifier

        -- Remove the finished batch from the work list.
        DELETE FROM #ToUpdate
        WHERE EXISTS (SELECT 1 FROM #CurrentUpdate
                      WHERE #ToUpdate.Identifier = #CurrentUpdate.Identifier)

        -- SELECT ... INTO creates the table, so drop it before the next iteration.
        DROP TABLE #CurrentUpdate

        -- Give concurrent queries a chance to run between batches.
        WAITFOR DELAY '00:00:01'
    END


  • Hi,

    Sergiy's solution updates 1000 rows at a time at unspecified physical locations throughout the table, which can lock disparate areas of the table until each transaction completes.

    If you have a clustered index and use that to order your update result set then you may at least be updating blocks of contiguous records, assuming your filter criteria also return a contiguous result set.

    It might also be worthwhile adding a COMMIT after each update to release the locks created, or setting implicit commits on.

    You could use table hints to suggest a locking mechanism to SQL Server, e.g. WITH (ROWLOCK) or WITH (PAGLOCK) for row- or page-level locking, but SQL Server will override these hints and escalate the locking if it is more efficient to do so.


    If it ain't broke, don't fix it...
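    For illustration, the hints mentioned above would be written something like this (the Orders table and its columns are hypothetical, not from the thread):

```sql
-- Ask for row-level locks on this statement; SQL Server may still
-- escalate to a page or table lock if it decides that is cheaper.
UPDATE Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderDate < '20070101'

-- Ask for page-level locks instead.
UPDATE Orders WITH (PAGLOCK)
SET Status = 'Shipped'
WHERE OrderDate < '20070101'
```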

  • David,

    you're clearly wrong.

    2 days ago I updated 1.2 mil. rows on a production system using this approach.

    No one was locked. No one even noticed. Yes, it took a whole day, but I wasn't in a hurry.

    I don't begin any transaction there, so I don't need to commit anything. The transaction started by the UPDATE statement is committed when the UPDATE completes.

  • Thanks guys. I thought there was an easy way to do the update, something like an update hint that would direct the UPDATE statement to lock only one row at a time and not escalate to page or table locks.

    I tried


    UPDATE  test1  WITH (ROWLOCK)  SET a = '7' WHERE a IN ('1', '2')

    but when I run

    SELECT * FROM test1  WHERE a = '9'

    on a different connection, the SELECT is still blocked.


  • Who gave you an idea about BEGIN TRAN?

    Kill him.

    UPDATE is a transaction itself.

    If you would bother to read posts above you'd figure out why you SHOULD NOT use transactions.


  • The UPDATE statement runs in its own transaction whether or not you start one explicitly, but when you update a large number of rows in a table, with or without an explicit transaction, you will still lock that table.

    The process that you described earlier is good, I think, because it gives the users time to access the table between batches, but overall I think the table would be locked for about the same total amount of time as with one big update.

  • As mentioned in the first response in this thread, you can enable 'dirty reads' as a possible solution.

    This can be done at the statement level, using the NOLOCK hint in your select statements or at the connection level by changing the isolation level. 

    Using the batch method suggested by Sergiy will reduce the number of blocks that occur and significantly reduce the time that another request is blocked. However, this method can greatly increase the time it takes to complete the update. As you increase the size of your batches, the total time needed to do the update will decrease, but the amount and length of user blocking will increase.

    Only you know your environment and what the priorities for your system are, so you will need to evaluate which is the best solution for you.

    Good Luck!
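    As a sketch, the two ways of enabling dirty reads mentioned above look like this (the Orders table is hypothetical):

```sql
-- Statement level: add the NOLOCK hint to each table reference.
SELECT * FROM Orders WITH (NOLOCK) WHERE Status = 'Pending'

-- Connection level: change the isolation level for the whole session.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Orders WHERE Status = 'Pending'
```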


  • I'm afraid, to enable dirty reads you need to rewrite all SP's (and ad-hoc queries from application?) accessing this table.

    That's not what I'd like to do.

  • Why on earth are you updating 70% of a million-row table that is being accessed by users on a regular basis? That's borderline insanity on the part of whoever designed the system...

    So far as having to rewrite all of the procs to use the WITH (NOLOCK) hint... not quite true. If you rename the table and create a surrogate view (named the same as the table originally was) with the appropriate transaction isolation level, using only SELECT * from the one table, you will come real close to creating a "synonym" like they do in Oracle. There may be one or two places where you will get a locking clash, so I'd recommend a test in a parallel environment first, but it may work just fine.

    If you have triggers, I wouldn't do it...

    Note that the surrogate view is NOT a fix... it's a patch, a workaround, and should be very temporary until you can figure out a better way to do things than updating such a large part of a large table with users in constant access...

    Sergiy's method is one of the better ones, but I still question the need for the massive update to begin with... somebody missed something big in the design...
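    The surrogate view described above might look roughly like this sketch (the names are hypothetical, and as noted, test it in a parallel environment first):

```sql
-- Rename the real table out of the way...
EXEC sp_rename 'Orders', 'Orders_Base'
GO

-- ...then put a view with the old name in front of it, reading with
-- dirty reads so existing queries against 'Orders' are not blocked.
CREATE VIEW Orders
AS
SELECT * FROM Orders_Base WITH (NOLOCK)
GO
```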

    --Jeff Moden

    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".
    "Dear Lord... I'm a DBA so please give me patience because, if you give me strength, I'm going to need bail money too!"

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
