Deadlock handling performance

  • Though I can write quite complex SQL statements, I am a beginner with concepts like concurrency and deadlocks. I therefore wrote a small .NET application that uses threads to perform simple updates on a 10-row table until I was able to steadily produce deadlocks. I then wrote some try..catch statements so the losing thread could retry and successfully commit its transaction.

    The preliminary results I am getting amaze me. If I go single-thread (and therefore I do not get any deadlocks) then I am able to perform about 1000 transactions per second. Using two threads, I am down to about 100 transactions per second. Using 8 threads, I can barely perform one transaction per second.

    Are those numbers expected? Is deadlock detection and resolution such a heavy process on SQL Server? Am I missing something?
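    For reference, the retry pattern described above looks roughly like this (a minimal sketch; the connection string, table, and column names are placeholders, not from the original post — error number 1205 is the deadlock-victim error):

    ```csharp
    // Minimal sketch of a retry-on-deadlock loop, assuming Microsoft.Data.SqlClient.
    using System.Threading;
    using Microsoft.Data.SqlClient;

    static void UpdateWithRetry(string connStr, int id, int maxRetries = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using var conn = new SqlConnection(connStr);
                conn.Open();
                using var tx = conn.BeginTransaction();
                using var cmd = new SqlCommand(
                    "UPDATE dbo.TestTable SET Value = Value + 1 WHERE Id = @id",
                    conn, tx);
                cmd.Parameters.AddWithValue("@id", id);
                cmd.ExecuteNonQuery();
                tx.Commit();
                return;
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt < maxRetries)
            {
                // 1205 = chosen as deadlock victim; SQL Server has already
                // rolled the transaction back. Back off briefly so the
                // surviving transaction can finish before we retry.
                Thread.Sleep(50 * attempt);
            }
        }
    }
    ```

    Note the backoff: retrying immediately tends to re-create the same collision, which is one way a retry loop can amplify the slowdown as thread count grows.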

  • This may bear correcting by someone more knowledgeable, but here's my understanding of your situation :-D.

    First, were you using multiple threads for the update procedure on the application side, or on the SQL Server side? I'm not sure if doing so on the app side would have the same effect, but, from what I understand, multi-threading a relatively simple query in SQL Server tends to perform poorly.

    This results from SQL Server having to distribute your query across the threads, run the query pieces in those threads, and then reassemble the partial results; if the query you ran was simple, the parallelization work will actually take longer than the query itself would, and in some cases it increases the execution time by quite a bit!

    In the meantime, the query will sit and wait, and potentially block other queries that are trying to act on the source data. This can certainly lead to deadlocking, especially when your other queries are behaving similarly.

    It's for this reason that the cost threshold for parallelism is recommended to be set to a higher number than the default of 5, usually to 30 or 50, with further adjustments based on testing. If unnecessary parallelization is done, the server's performance could suffer for it.
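    Raising that threshold is an instance-level setting (a sketch; the value of 50 is the commonly suggested starting point mentioned above, to be tuned by testing, and changing it requires appropriate permissions):

    ```sql
    -- Instance-level change; test before applying in production.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;
    ```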

    It's also probably for this reason that you saw such bad performance as you increased the thread count; the more threads you added, the more overhead began to factor in from having to do extra work on the result set.

    Hopefully I didn't mangle the actual workings too badly! If I did, someone will be along shortly to correct me thoroughly 😀

    - 😀

  • I am running the threads on a client machine and each thread uses its own SqlConnection component. As far as I know, this is equivalent to having several computers running one thread each. I did not alter any of the properties of the SQL Server, which otherwise performs very well.

  • Ah, gotcha! The majority of my post above can thus be discarded 😛

    If you've got a table of only 10 rows, it depends quite heavily on what sort of updates you were doing; if the updates on each thread were the same, and they were all being fired simultaneously, a good bit of blocking would indeed occur, with some deadlocking if different updates snagged bits that the others were going to try to modify.

    Similarly, it also matters quite a bit what the updates you're running are doing; if some or all of the updates are modifying the entire table, those will lock the whole table down until they're done, which will cause the other queries to sit and wait in the meantime.

    Granted, it does seem slightly out of the ordinary for bits of the process to be deadlocking in such a situation; what sort of updates are you doing? It may be that some of the updates are lingering around for a bit too long, though I can't say anything definite on the matter. If it's no trouble, could you provide the scale of the task you're running as a test (how many threads and server specs, primarily), and a sample update or two?

    - 😀

  • Are you getting deadlocks or are you getting blocks? I'm unclear based on the information.

    But, no, the situation you're describing sounds quite abnormal. I'd need to see more about the situation to understand it completely. But, if you're accessing the table with a read and an UPDATE, then you're hitting a situation, for 10 rows, where deadlocks are likely. Thing is, most people aren't running multi-threaded apps on 10 row tables. They're running on 10 million row tables where the distribution of the data offsets the likelihood of deadlocks.

    Deadlocks are fundamentally a performance issue. If all the transactions completed instantaneously, you'd never see a deadlock. So, performance of the system does affect deadlocks. Further, rollbacks from deadlocks require additional processing, which makes the likelihood of performance problems go up. As you get more deadlocks, you get worse performance, leading to more deadlocks.

    Things you can do: First, get a bigger table. 10 rows just isn't a realistic testing size. Second, try out the read committed snapshot isolation level instead of the default read committed. It puts additional load on tempdb, so watch for that. You could also try putting an update lock on any read statements you have as part of the query. That will prevent some deadlocks.
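    Those last two suggestions look like this in T-SQL (a sketch; the database, table, and column names are hypothetical):

    ```sql
    -- Read committed snapshot isolation: readers see row versions from
    -- tempdb instead of blocking on writers' locks.
    ALTER DATABASE MyTestDb SET READ_COMMITTED_SNAPSHOT ON;

    -- UPDLOCK on the initial read: takes an update lock up front, so two
    -- sessions can't both read the row under shared locks and then deadlock
    -- while each tries to convert to an exclusive lock for the UPDATE.
    BEGIN TRANSACTION;
    SELECT Amount FROM dbo.Accounts WITH (UPDLOCK) WHERE AccountId = @Id;
    UPDATE dbo.Accounts SET Amount = Amount + 1 WHERE AccountId = @Id;
    COMMIT TRANSACTION;
    ```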

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I am getting deadlocks (SqlException 1205). I understand that my test case is highly hypothetical but I wanted to have deadlocks for sure.
