deadlock vs. locking

  • Hello,

We have a database transaction. During this transaction, the database is locked so that other processes can't access the
database. 90% of the time, the process that is blocked simply waits for the transaction to end. When it ends, it can
access the database again. However, the other 10% of the time, we get a deadlock error:

    "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim."

    Under what conditions does a database transaction block another transaction, and under what conditions does it become
    deadlocked with another transaction (resulting in one of the transactions being killed)?

  • Blocking is completely normal behaviour for a database, but the length of time a process spends blocked can usually be reduced by writing performant code. Blocking happens because a transaction needs to happen in isolation (have a look at the ACID properties of a transaction), so no other transaction can access the same data while locks are held on it.

    Deadlocks occur when two transactions each hold a lock on data that the other needs, and are therefore waiting for each other to complete and release their locks. They would stay in that state indefinitely unless one was killed to allow the other to complete. The process that is killed is the one with the lowest rollback cost.
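
    For illustration, here's a minimal sketch of plain blocking in T-SQL, assuming a hypothetical dbo.Accounts table and the default READ COMMITTED isolation level (without row versioning):

        -- Session 1: takes an exclusive lock on the row and holds it
        BEGIN TRAN;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

        -- Session 2: blocks here until session 1 commits or rolls back
        SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

        -- Session 1: releases the lock; session 2's SELECT then completes
        COMMIT TRAN;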

    I hope I've explained this clearly, but have a look at Books Online for more detail.

    Thanks

  • A deadlock occurs when
    transaction 1 has a lock on object A and cannot continue without also obtaining a lock on object B
    and
    transaction 2 has a lock on object B and cannot continue without also obtaining a lock on object A.
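
    A minimal repro of that pattern in T-SQL, run from two separate sessions (tables dbo.A and dbo.B are hypothetical; the interleaving shown is what produces the deadlock):

        -- Session 1:
        BEGIN TRAN;
        UPDATE dbo.A SET Val = 1 WHERE Id = 1;  -- session 1 locks a row in A

        -- Session 2:
        BEGIN TRAN;
        UPDATE dbo.B SET Val = 1 WHERE Id = 1;  -- session 2 locks a row in B

        -- Session 1:
        UPDATE dbo.B SET Val = 2 WHERE Id = 1;  -- now waits for session 2's lock on B

        -- Session 2:
        UPDATE dbo.A SET Val = 2 WHERE Id = 1;  -- waits for session 1's lock on A:
                                                -- a cycle, so SQL Server kills one
                                                -- session with error 1205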

    You state "During this transaction, the database is locked so that other processes can't access the
    database". This shouldn't happen. Can you provide more details?

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • junk.mail291276 - Thursday, March 22, 2018 4:31 PM

    Hello,

    We have a database transaction. During this transaction, the database is locked so that other processes can't access the
    database. 90% of the time, the process that is blocked simply waits for the transaction to end. When it ends, it can
    access the database again. However, the other 10% of the time, we get a deadlock error:

    "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim."

    Under what conditions does a database transaction block another transaction, and under what conditions does it become
    deadlocked with another transaction (resulting in one of the transactions being killed)?

    First, transactions and blocking are not scoped at the entire database level. Read up more on how transaction scope, lock granularity, and isolation levels work.

    To avoid deadlocks, keep the transactions as short as possible. In other words, don't BEGIN TRAN, update data, select a bunch of data that takes a long time, and then COMMIT TRAN several minutes or hours later. With long-running transactions, the window of opportunity for multiple transactions to overlap and become entangled, resulting in a deadlock, is much larger. Instead, first select what you need, perhaps into a temp table, and then perform only the update operations within the transaction block.
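
    A sketch of that pattern, with illustrative table and column names:

        -- Long-running read happens first, outside any transaction
        SELECT OrderId
        INTO #OrdersToUpdate
        FROM dbo.Orders
        WHERE Status = 'Pending';  -- imagine this filter is expensive to evaluate

        -- Only the write is inside the transaction, so locks are held briefly
        BEGIN TRAN;
        UPDATE o
        SET o.Status = 'Processed'
        FROM dbo.Orders AS o
        JOIN #OrdersToUpdate AS t ON t.OrderId = o.OrderId;
        COMMIT TRAN;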

    Also, when bulk updating millions of rows, do so in batches of 100,000 rows at a time, with a commit in between, so each transaction is shorter and holds its locks on a smaller subset of the table. Once a single statement acquires enough locks (roughly 5,000), SQL Server may even escalate from row or page locks to a table-level lock.
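
    One common way to batch it, again with illustrative names (each UPDATE outside an explicit transaction auto-commits on its own, so locks are released between batches):

        DECLARE @BatchSize int = 100000;

        WHILE 1 = 1
        BEGIN
            -- Each iteration is its own auto-committed transaction
            UPDATE TOP (@BatchSize) dbo.Orders
            SET Archived = 1
            WHERE Archived = 0;

            IF @@ROWCOUNT < @BatchSize BREAK;  -- last partial batch done
        END;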

    As for keeping users out of the database during off-hours maintenance or deployments, you can alter the database to restricted user mode (recommended over single user mode), meaning that only members of the db_owner, dbcreator, or sysadmin roles can connect, and then set the status back to multi-user mode when done.
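
    For example (MyDatabase is a placeholder name; WITH ROLLBACK IMMEDIATE disconnects other users and rolls back their open transactions):

        -- Before maintenance: restrict access to privileged users only
        ALTER DATABASE MyDatabase SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

        -- ... perform the maintenance or deployment ...

        -- After maintenance: reopen the database to everyone
        ALTER DATABASE MyDatabase SET MULTI_USER;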

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Northern Soul wrote:
    "I hope I've explained this clearly but have a look at books online for more detail."

    Yes, perfectly clear, but see my follow-up question below...

    ChrisM@Work wrote:
    "You state "During this transaction, the database is locked so that other process can't access the
    database". This shouldn't happen. Can you provide more details? "

    Sorry, that was badly worded. I know that the first process is locking at least at the page level, but not the whole database. Hope that makes more sense. More on this below...

    Eric M Russell wrote:
    "To avoid deadlocks, keep the transactions as short as possible. In other words, don't BEGIN TRAN, update data, select a bunch of data that takes a long time, and then COMMIT TRAN several minutes or hours later. With long running transactions, the window of opportunity for multiple transactions to overlap and become entangled, thus resulting in a deadlock, is much more likely. Instead, first select what you need, perhaps into a temp table, and then only perform the update operations within the transaction block. "

    There's a good chance this is what's causing the deadlock. Please tell me if this scenario sounds plausible:

    Process 1 begins a transaction in which a large amount of data is being written to the database. At first, it locks at the row level. Then, almost immediately after process 1 begins to write data to the database, process 2 begins to read data from the database. The data it reads is completely separate from the data being written by process 1, so there should be no overlap between the two processes in terms of rows. However, once a critical number of rows is locked by process 1 (because there is a large amount of data to be written), the locking is escalated from the row level to the page level. The page it locks happens to overlap with some of the rows being read by process 2. Because process 2 was granted access to these rows before the locking was escalated to the page level, it ends up in a deadlock with process 1 rather than just being blocked by process 1.

    ^ Does this make sense? I know that I am much more likely to encounter a deadlock (with process 2 being killed) if I start process 2 immediately after process 1 has begun to write to the database (within a second or so), but if I leave it for several seconds, process 2 is only blocked, not deadlocked.

  • junk.mail291276 - Monday, March 26, 2018 12:46 PM

    Northern Soul wrote:
    "I hope I've explained this clearly but have a look at books online for more detail."

    Yes, perfectly clear, but see my follow-up question below...

    ChrisM@Work wrote:
    "You state "During this transaction, the database is locked so that other process can't access the
    database". This shouldn't happen. Can you provide more details? "

    Sorry, that was badly worded. I know that the first process is locking at least at the page level, but not the whole database. Hope that makes more sense. More on this below...

    Eric M Russell wrote:
    "To avoid deadlocks, keep the transactions as short as possible. In other words, don't BEGIN TRAN, update data, select a bunch of data that takes a long time, and then COMMIT TRAN several minutes or hours later. With long running transactions, the window of opportunity for multiple transactions to overlap and become entangled, thus resulting in a deadlock, is much more likely. Instead, first select what you need, perhaps into a temp table, and then only perform the update operations within the transaction block. "

    There's a good chance this is what's causing the deadlock. Please tell me if this scenario sounds plausible:

    Process 1 begins a transaction in which a large amount of data is being written to the database. At first, it locks at the row level. Then, almost immediately after process 1 begins to write data to the database, process 2 begins to read data from the database. The data it reads is completely separate from the data being written by process 1, so there should be no overlap between the two processes in terms of rows. However, once a critical number of rows is locked by process 1 (because there is a large amount of data to be written), the locking is escalated from the row level to the page level. The page it locks happens to overlap with some of the rows being read by process 2. Because process 2 was granted access to these rows before the locking was escalated to the page level, it ends up in a deadlock with process 1 rather than just being blocked by process 1.

    ^ Does this make sense? I know that I am much more likely to encounter a deadlock (with process 2 being killed) if I start process 2 immediately after process 1 has begun to write to the database (within a second or so), but if I leave it for several seconds, process 2 is only blocked, not deadlocked.

    Lock escalation goes from row or page level to table level, not from row to page. If a reader is somehow blocking a writer, then the reader should be the deadlock victim, because the process with the least work to roll back is chosen as the victim by default. In this case you can implement retry logic for your reader process, as sketched below. I would also recommend lowering the batch commit size of the process that inserts rows.
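
    A rough sketch of that retry logic, with an illustrative query and table name (error 1205 is the deadlock victim error):

        DECLARE @Retries int = 0, @Total money;

        WHILE @Retries < 3
        BEGIN
            BEGIN TRY
                -- The reader's work goes here
                SELECT @Total = SUM(Amount) FROM dbo.Orders;
                BREAK;  -- success: stop retrying
            END TRY
            BEGIN CATCH
                IF ERROR_NUMBER() <> 1205  -- not a deadlock: re-raise it
                    THROW;

                -- Error 1205: this session was chosen as the deadlock victim
                SET @Retries += 1;
                WAITFOR DELAY '00:00:01';  -- brief pause before retrying
            END CATCH;
        END;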

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Thanks Eric,
    I will think about adding retry logic to my deadlock victim process.
    And thanks everyone for your feedback. I think I understand this a bit better now.
