Can I get SQL Server to release a lock without using the kill command?

  • I have been searching for threads that address this question and have not found anything. If there is a thread about this, please point me to it.

    Anyway, I want to know if there is a way to get SQL Server to release a lock without using the kill command. My reason for asking is that this morning I was investigating some deadlocks and thought it would be nice if SQL Server, upon identifying a deadlock, would simply force the victim SPID to release the lock causing the deadlock instead of killing it, so that the other SPID could continue and the victim would simply have to wait longer.

    That led me to wonder if there was a T-SQL command or DBCC command to force SQL Server to release a lock that I specify.

    So, does anyone know: is there a way for me to get SQL Server to release a lock without using the kill command?
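
    For illustration: while there is no supported command to release a lock, a specific lock can at least be identified from the sys.dm_tran_locks DMV. A minimal sketch (the session id 52 is just a placeholder):

    -- List the locks currently held or requested by one session.
    -- Nothing here (short of KILL) will make the session give a lock up.
    SELECT  tl.request_session_id,
            tl.resource_type,          -- OBJECT, PAGE, KEY, ...
            tl.resource_description,
            tl.request_mode,           -- S, X, IX, U, ...
            tl.request_status          -- GRANT, WAIT, or CONVERT
    FROM    sys.dm_tran_locks AS tl
    WHERE   tl.request_session_id = 52   -- placeholder SPID
    ORDER BY tl.resource_type, tl.request_mode;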

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • No.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks for the quick reply. I suspected that might be the answer but wanted to be sure it wasn't just me missing something, like some undocumented DBCC that didn't come up in any of my searches.

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • If you think carefully about the reason(s) why locks are taken in the first place, you will realize that what you are asking for would break those very requirements.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • What you're asking for would complicate the lock manager massively, and it would also break the isolation (and probably consistency) guarantees which SQL has to provide.

    If a deadlock victim, instead of being killed, had its locks released, it couldn't just wait a bit and then carry on. It would have to discard everything back to the beginning of the transaction and start again, because with the locks broken it has no protection against another session (like the other half of the deadlock) changing or reading transient data.
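
    Since the victim's entire transaction is rolled back, the usual mitigation is to trap the deadlock error (1205) and rerun the work from the top. A minimal sketch of that retry pattern, assuming SQL Server 2012+ for THROW; the table and column names are placeholders:

    -- Retry loop for a transaction that may be chosen as a deadlock victim.
    -- The whole transaction is re-run, because a victim loses all of its work.
    DECLARE @retries int = 3;

    WHILE @retries > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;

            UPDATE dbo.SomeTable              -- placeholder table
            SET    SomeColumn = SomeColumn + 1
            WHERE  SomeKey = 42;

            COMMIT TRANSACTION;
            BREAK;                            -- success, leave the loop
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0
                ROLLBACK TRANSACTION;

            IF ERROR_NUMBER() = 1205 AND @retries > 1
                SET @retries -= 1;            -- deadlock victim: try again
            ELSE
                THROW;                        -- other error, or out of retries
        END CATCH;
    END;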

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I guess it would have helped if I had explained better.

    The deadlocks I see most often involve two SPIDs that each hold a page lock on the same index that the other SPID wants. My thinking was that as each update goes about its business, taking one index page lock after another, it accumulates a number of those locks before it can commit. The deadlock victim holds only one page lock that the other SPID needs, while it may hold 10, 20, or more page locks that the other SPID doesn't need. That is why I was thinking that if SQL Server could simply force the deadlock victim to release that one lock, and let it keep the 10 or more it already had, it would be able to wait without losing all its progress and then continue once the other SPID is out of the way. Is this the wrong way to think about how SQL Server takes locks on an index it needs to update?

    Based on that simple way of thinking, I figured the reason the locks are taken would not be a problem and isolation would be preserved, because the deadlock victim would have to wait until it could re-acquire the lock it was forced to release before it could continue, much as if it had never obtained that one index page lock in the first place. I guess that would mean SQL Server would have to roll back the update changes for the one index page whose lock was released. When it kills everything the deadlock victim was doing, and the victim held more than 10 page locks on an index, it has to roll back all of those changes, whereas it would take less time to roll back one page of index updates and then wait longer before being able to finish.
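
    For illustration, the kind of page-lock accumulation described above can be watched while the updates are in flight with a query like this sketch against sys.dm_tran_locks (run it in the database that owns the locks; it only observes and cannot release anything):

    -- Count the PAGE locks each session currently holds, per index.
    SELECT  tl.request_session_id,
            OBJECT_NAME(p.object_id)   AS object_name,
            p.index_id,
            COUNT(*)                   AS page_locks_held
    FROM    sys.dm_tran_locks AS tl
    JOIN    sys.partitions    AS p
            ON p.hobt_id = tl.resource_associated_entity_id
    WHERE   tl.resource_type        = 'PAGE'
      AND   tl.request_status       = 'GRANT'
      AND   tl.resource_database_id = DB_ID()
    GROUP BY tl.request_session_id, p.object_id, p.index_id
    ORDER BY page_locks_held DESC;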

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • Robert W Marda (8/26/2015)


    I guess it would have helped if I had explained better.

    The deadlocks I see most often involve two SPIDs that each hold a page lock on the same index that the other SPID wants. My thinking was that as each update goes about its business, taking one index page lock after another, it accumulates a number of those locks before it can commit. The deadlock victim holds only one page lock that the other SPID needs, while it may hold 10, 20, or more page locks that the other SPID doesn't need. That is why I was thinking that if SQL Server could simply force the deadlock victim to release that one lock, and let it keep the 10 or more it already had, it would be able to wait without losing all its progress and then continue once the other SPID is out of the way. Is this the wrong way to think about how SQL Server takes locks on an index it needs to update?

    Based on that simple way of thinking, I figured the reason the locks are taken would not be a problem and isolation would be preserved, because the deadlock victim would have to wait until it could re-acquire the lock it was forced to release before it could continue, much as if it had never obtained that one index page lock in the first place. I guess that would mean SQL Server would have to roll back the update changes for the one index page whose lock was released. When it kills everything the deadlock victim was doing, and the victim held more than 10 page locks on an index, it has to roll back all of those changes, whereas it would take less time to roll back one page of index updates and then wait longer before being able to finish.

    I'm pretty tired at the moment so maybe I missed it, but didn't you completely miss the OTHER HALF of the deadlock scenario?? Remember, there is more than one object being referenced in (virtually all) deadlock scenarios, all wrapped in transactions. Even if I did miss it, how is the engine to know whether ANY of the other work that has been done - or will be done - by the spid would be hosed by this lock release you think should happen? There is an infinite set of DML scenarios out there, right? It sounds like a tall order to say the engine can still GUARANTEE all of its transactional guarantees after it picks a lock to "defer".

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Robert W Marda (8/26/2015)


    That is why I was thinking that if SQL Server could simply force the deadlock victim to release that one lock, and let it keep the 10 or more it already had, it would be able to wait without losing all its progress

    No, because by relinquishing the lock on the one page it has now lost all transactional guarantees (someone else could have modified the page after it was read, before it was updated), hence the entire transaction has to be rolled back and restarted. It can't just wait to re-acquire the lock: it took the lock before starting the modification to ensure that no one else could modify the page, and re-acquiring it won't change the fact that someone else may have modified the page during the update.

    SQL guarantees atomicity (all operations in a transaction are treated as if they were a single operation), isolation (no other session may see or modify data which is being modified), consistency (the results after a transaction are the same as if that transaction was the only thing running on the server), and durability (once committed, changes are part of the permanent database state).

    Your thinking would have SQL violating the first three of those.

    isolation would be preserved as the deadlock victim would have to wait until it could get the lock it was forced to release before it could continue.

    No, isolation would not be preserved, because by releasing a lock it has now exposed the page it's about to change to changes from someone else: a page containing rows that it has already read and decided are to be updated. If someone else modifies them, it will have to re-do the read portion of the update entirely to ensure that the rows it's about to change still qualify for the update.

    SQL won't start an update until it has all the locks it needs for the entire operation.

    So I guess that would mean SQL Server would have to rollback the update changes for that one page of the index that was forced to be released.

    That violates atomicity, the guarantee that either all changes or no changes result from an operation. SQL cannot ever do a partial rollback.
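
    A throwaway script that shows that guarantee in action: with XACT_ABORT ON, an error in the second statement undoes the first one as well, so nothing partial survives (the temp table and values are made up):

    SET XACT_ABORT ON;

    CREATE TABLE #t (id int PRIMARY KEY, val int NOT NULL);
    INSERT INTO #t (id, val) VALUES (1, 10), (2, 20);

    BEGIN TRY
        BEGIN TRANSACTION;
            UPDATE #t SET val = val + 1   WHERE id = 1;  -- succeeds
            UPDATE #t SET val = val / 0   WHERE id = 2;  -- runtime error (divide by zero)
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;                        -- everything rolls back
    END CATCH;

    SELECT * FROM #t;  -- still (1, 10) and (2, 20): no partial change persisted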

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you both for your patience and for explaining things I did not understand or consider as I thought about this. I am beginning to see how this is more complex than I thought.

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems
