Dead connection detection

  • In Oracle, one can set

    SQLNET.EXPIRE_TIME= <# of minutes>

    to have the server detect connections where the client has died (e.g. when the client machine was abruptly powered down or the network connection was lost without any signal to this effect reaching the server).

    This allows the database server to automatically close out such connections in a timely manner, releasing any locks or other resources they hold.

    Do any version(s) of SQL Server provide such functionality? If so, how?

    So far everything I find suggests that there is no such functionality in SQL Server -- which, as far as I can see, would be a major gap.
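
    For reference, on the Oracle side this is a single entry in the server's sqlnet.ora; a minimal sketch (the value is in minutes, and 10 is just an example):

        # sqlnet.ora on the database server:
        # probe clients every 10 minutes and clean up sessions
        # whose clients no longer respond
        SQLNET.EXPIRE_TIME = 10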

  • There are timeouts for clients, and when a client has a query running, there is some communication back to the client. I'm not sure what the timeouts are, but if a client dies, the server will detect it.

  • What if a client grabs a lock, goes "out to lunch" holding the lock but not making further queries for a while -- and during this time the client abruptly dies or the network connection is severed? [In the case in question this scenario is somewhat common and critically important, not something that can just be shrugged off.]

    Some admittedly brief googling suggests that SQL Server won't notice this. Microsoft's closest article is one on finding and fixing this issue manually as an administrator, and there is an old article explicitly stating this is a gap in SQL Server as compared to Oracle.
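
    To make "manually as an administrator" concrete, the check boils down to something like the sketch below -- not the article's script, just the general idea using the standard DMVs:

        -- Sessions that are idle but still have an open user transaction
        -- (and may therefore still be holding locks)
        select s.session_id, s.login_name, s.host_name,
               s.status, s.last_request_end_time, t.transaction_id
        from sys.dm_exec_sessions as s
        join sys.dm_tran_session_transactions as t
            on t.session_id = s.session_id
        where s.is_user_process = 1
          and s.status = 'sleeping';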

  • How does the client grab a lock? If your application is built to hold locks on tables, even for seconds, you are killing concurrency.

  • The entire purpose of the locks in question is to prevent concurrency of some critical operations in clients -- and to *quickly* notice when a lock is released. These are not "normal" data rows.

    Given this usage it is paramount that the database be able to quickly notice when a client is no longer present -- including when the client has gone away quite silently, e.g. when a network cable is yanked.

    Oracle can be configured to handle this. I am not seeing any evidence that SQL Server can be, which may force an entirely different approach since I need to support both databases.
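
    To give a feel for the pattern, the usage is roughly the sketch below; the table and key are made up, and the point is simply that the row lock acts as the mutex:

        -- Hypothetical control table with one row per critical operation.
        -- Holding the row lock = holding the mutex; it is released on commit,
        -- on rollback, or when the owning connection goes away.
        begin transaction;

        select LockName
        from dbo.OperationLocks with (updlock, rowlock, holdlock)
        where LockName = 'NightlyRebuild';

        -- ... perform the critical operation ...

        commit transaction;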

  • If the connection drops (and if the client goes away, it will drop), any open transactions get rolled back. As a result of that rollback, any locks are released.

    Test it, it's easy enough to do.

    If, however, the client starts a transaction, the transaction waits for user input, and the user goes away for the week, that will not be dropped -- the transaction just sits open, holding its locks. Hence why "No user input in transactions" is a golden rule.

    The entire purpose of the locks in question is to prevent concurrency of some critical operations in clients -- and to *quickly* notice when a lock is released. These are not "normal" data rows.

    What kind of 'locks' are you talking about here? In SQL Server, locks protect data against concurrent changes, at row, page or table level.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I'm talking about row locks.

    As for row locks protecting against changes to data, yes, of course they do that.

    They can also be a basis for distributed negotiation, of course. Unfortunately, SQL Server seems ill suited to such usage because there's no mechanism for it to recognize dead connections in any timely manner.

    Of course that's a broader problem. In your client, lock a row, do some computation [not sending/receiving data from the database] -- then suffer a power outage or a cut network cable during this computation. Now SQL Server has no idea that it should drop the connection or release the lock, and another client that comes along wanting to lock the row can't.

    That's a rather nasty situation even when using locks purely to prevent concurrent access to a row of data. There's a Microsoft article on loads of manual ways for an administrator to track down and resolve such an issue, but that's bunk -- the situation should be recognized automatically by periodically checking the connections.
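
    For what it's worth, the second client in that scenario simply sits there blocked. Spotting the blocker by hand looks roughly like this (again, only a sketch):

        -- Requests that are currently blocked, and the session blocking them
        select r.session_id as blocked_session,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time
        from sys.dm_exec_requests as r
        where r.blocking_session_id <> 0;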

  • jessh (8/25/2012)


    They can also be a basis for distributed negotiation, of course. Unfortunately, SQL Server seems ill suited to such usage because there's no mechanism for it to recognize dead connections in any timely manner.

    As I already explained, if the connection drops, any transaction that connection held will be rolled back and any locks it held will be released.

    It's easy to test, but if you'd prefer that I do it, no problem. SQL 2008.

    I'm going to use SQLCMD as a test.

    One session open from SQLCMD with an open transaction

    Now I can't pull the network connection (both clients and the DB server are on the same machine), so I'll just close the window (X in the top-right).

    The session is gone when I check from the SSMS query.

    If I go and look in the Testing DB, that transaction's gone and so is the exclusive (X) row lock that it held.

    If you want to try the same test with network connections, please do. I just don't feel like hauling my laptop out or booting VMs today.

    One major difference between Oracle and SQL Server: many of the things that have settings to change in Oracle don't have comparable settings in SQL Server, because they're taken care of automatically. That doesn't mean SQL Server doesn't have the feature, just that the settings aren't tweakable.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
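
    A rough sketch of the test described above, for anyone who wants to repeat it (the table and row values are made up; the database name follows the post):

        -- Session 1 (sqlcmd): open a transaction, take a row lock, don't commit
        use Testing;
        begin transaction;
        update dbo.TestTable set SomeCol = 1 where id = 1;
        -- now close the sqlcmd window / kill the client process

        -- Session 2 (SSMS): is the session, its transaction
        -- and its row lock still there?
        select session_id, status, last_request_end_time
        from sys.dm_exec_sessions
        where is_user_process = 1;

        select request_session_id, resource_type, request_mode
        from sys.dm_tran_locks
        where resource_database_id = db_id('Testing');
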
  • Thanks for the testing efforts.

    Similar testing efforts worked great with Oracle as well -- up until we actually abruptly powered down a machine or dropped it off the network.

    You make good points, though -- I need to verify that SQL Server doesn't handle this automatically before jumping to conclusions. There is at least one article out there citing this as a gap in SQL Server vs. Oracle, but the article is rather old, so I really need to test this myself.

  • There's a problem sometimes when DTC (the Distributed Transaction Coordinator) is involved. The actual client goes away but SQL Server doesn't notice, because DTC (which sits between the client and the server) is involved, or the client incorrectly terminates the connection and DTC doesn't abort. There are articles on how to fix that and manually kill sessions. That's often due to buggy drivers or data access layers, though (Java being a known culprit).

    I have never seen a problem (other than the DTC mentioned above) where a client machine went away for any reason and the connections were left open, transactions open and locks held.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
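
    In that DTC case the manual fix is essentially to identify the orphaned session and kill it; a minimal sketch (session id 53 is just a placeholder):

        -- Terminate the orphaned session; its open transaction is rolled back
        kill 53;

        -- Check the progress of that rollback, if any
        kill 53 with statusonly;
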
  • Clients typically don't start a process on the server and hold locks. When an application SELECTs data and the data is returned, there are no locks held while the user has the information on their screen. When they change something in the application, the application submits the INSERT/UPDATE/DELETE and the locks are typically held for milliseconds.

    If your application is coded to hold locks somehow, perhaps like this:

    begin transaction

    select col1 from MyTable

    and then later issues a

    update MyTable

    set col1 = 4

    where id = 5

    commit transaction

    That's horrible coding. That isn't a network cable being pulled. If the client dies, SQL Server will detect it. If you code like this, you have no concurrency.
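
    For contrast, a sketch of the usual pattern: the read happens outside any transaction, and the write is a single short transaction (whether the value changed in the meantime is for the application to check):

        -- Read without holding anything open
        select col1 from MyTable where id = 5;

        -- ... user edits on screen, no transaction open ...

        -- Write as one short transaction; locks are held for milliseconds
        begin transaction;
        update MyTable set col1 = 4 where id = 5;
        commit transaction;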
