Stored procedure getting blocked and deadlocked

  • We have a stored procedure that frequently gets blocked in production and sometimes falls victim to a deadlock. Normally it executes in around 1 second, and the engine does not report any recommended indexes.

    It has one big SELECT statement that, besides tables, also joins one view and one table-valued function. It does not have an explicit transaction.

    The database has these settings in production:

    snapshot_isolation_level = 0

    is_read_committed_snapshot_on = 1

    I am going to add this statement at the top of the SP; it will prevent or minimize deadlocks:

    SET DEADLOCK_PRIORITY HIGH

    But what can I do about blocking? Add a WITH (NOLOCK) hint to each table, or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the SP level? Neither option is perfect, as both allow dirty reads. Any other ideas?
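
    For clarity, the two options I'm weighing would look roughly like this (Table1/Table2 are placeholder names, not our real schema):

    -- Option 1: a NOLOCK hint on each table in the big SELECT
    SELECT t1.Col1, t2.Col2
    FROM dbo.Table1 AS t1 WITH (NOLOCK)
    JOIN dbo.Table2 AS t2 WITH (NOLOCK)
        ON t2.Table1Id = t1.Id;

    -- Option 2: one statement at the top of the proc; every
    -- statement in the proc then allows dirty reads
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;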

    Thanks in advance.

  • Is the query optimized? Do you have the necessary indexes? If you provide DDL for the tables & views involved, the proc logic, & execution plan (obfuscate if needed to protect confidential info), experts here can probably help with optimization.

    READ COMMITTED SNAPSHOT ISOLATION has some overhead, but it is preferable to NOLOCK/READ UNCOMMITTED because it avoids dirty reads. Unless you are pulling a LOT of data, deadlocks may indicate tuning issues.

    What's the nature of the query/data? Is this reporting? Could data be read from a reporting/warehouse copy of the database?

    Are you sure it's OK to deadlock other transactions if you set this procedure to use DEADLOCK_PRIORITY HIGH?

  • Please, no, don't start throwing hints all over it. Instead, focus on getting the execution plan and understanding what it is telling you about how the query is behaving and how the optimizer is interpreting it. The two books linked in my signature (one free, one not) will help a ton. This is classic query tuning, and simply throwing a single hint or switch or setting at it will not fix the problem. It's likely the code, or the structures, or both. Further, deadlocks (traditionally; there are exceptions) are simply caused by poor coding practices: you're accessing ResourceA then ResourceB in this query, but ResourceB then ResourceA in that query over there. Deadlock; one gets picked as a victim.

    Also, it's possible that it's other queries. As you say, this one runs fast most of the time. But if it gets blocked by another query, then it's going to run slow. So you'll also want to look at the blocking chain. Maybe even set up a blocking alert so you can see when long-term blocking is happening. How do you monitor performance on your instances?
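
    For example, here's a minimal sketch of checking the blocking chain with the standard DMVs (run it while the blocking is happening):

    -- Sessions that are currently blocked, who is blocking them,
    -- what they are waiting on, and the text they are running
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;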

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • SQL Guy 1 wrote:

    It has one big SELECT statement that, besides tables, also joins one view and one table-valued function. It does not have an explicit transaction.

    But what can I do about blocking? Add a WITH (NOLOCK) hint to each table, or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the SP level? Neither option is perfect, as both allow dirty reads. Any other ideas?

    Divide'n'Conquer.  "Set-based" <> "All in one query".

    Find the "Minimal Core Data" first and expand from there.  The notion of storing that Minimal Core Data in a Temp Table can work wonders.  It might even give you a couple of index hints instead of throwing up its hands based on your current "monster". 😉
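
    A hedged sketch of the idea, with entirely hypothetical names:

    -- Step 1: land the "Minimal Core Data" in a temp table
    DECLARE @StartDate datetime = '20240101';

    SELECT o.OrderId, o.CustomerId, o.OrderDate
    INTO #CoreOrders
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= @StartDate;

    -- Optional: index the temp table to support the joins that follow
    CREATE CLUSTERED INDEX cx_CoreOrders ON #CoreOrders (CustomerId);

    -- Step 2: expand from the core with smaller, simpler joins
    SELECT c.OrderId, c.OrderDate, cust.CustomerName
    FROM #CoreOrders AS c
    JOIN dbo.Customers AS cust
        ON cust.CustomerId = c.CustomerId;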

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I am going to add this statement at the top of the SP; it will prevent or minimize deadlocks:

    SET DEADLOCK_PRIORITY HIGH

    No, it won't.  Instead, it will force SQL to kill a different task rather than this proc if the proc gets into a deadlock situation (assuming the other tasks have a lower priority, which is the default).

    However, the:

    is_read_committed_snapshot_on = 1

    will indeed help prevent deadlocks unless both (all) deadlocked tasks are doing some type of modification (DELETE, INSERT, UPDATE).

    As others have noted, otherwise we'd need to see the query plan -- and the full DDL for all tables -- to provide more details on what is happening here.
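
    For reference, both settings can be confirmed straight from the catalog (note the column there is named snapshot_isolation_state):

    -- Current isolation-related settings for this database
    SELECT name,
           snapshot_isolation_state,        -- 0 = SNAPSHOT isolation off
           is_read_committed_snapshot_on    -- 1 = RCSI on
    FROM sys.databases
    WHERE name = DB_NAME();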

    SQL DBA, SQL Server MVP (07, 08, 09)
    A socialist is someone who will give you the shirt off *someone else's* back.

  • SQL Guy 1 wrote:

    We have a stored procedure that frequently gets blocked in production and sometimes falls victim to a deadlock. Normally it executes in around 1 second, and the engine does not report any recommended indexes.

    It has one big SELECT statement that, besides tables, also joins one view and one table-valued function. It does not have an explicit transaction.

    The database has these settings in production:

    snapshot_isolation_level = 0

    is_read_committed_snapshot_on = 1

    I am going to add this statement at the top of the SP; it will prevent or minimize deadlocks:

    SET DEADLOCK_PRIORITY HIGH

    But what can I do about blocking? Add a WITH (NOLOCK) hint to each table, or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the SP level? Neither option is perfect, as both allow dirty reads. Any other ideas?

    Thanks in advance.

    Quick thought: it would be helpful if you posted the actual execution plan; then there would be less guesswork. ;)

    😎


  • Thanks all for your replies.

    Some of you are asking for the full DDL and/or execution plan. Unfortunately, I cannot do that. At our place we have very strict rules about the security of data, structure, etc. Obfuscating the DDL and exec plan to abstract Table1, Column1, etc., won't help much.

    This proc executes in 1 sec, and I believe performance per se is not an issue. The exec plan does not recommend any indexes, all arrows are thin, and the plan is not large.

    What I believe is that it somehow clashes with other processes; unfortunately, our production monitoring does not provide info about those processes, and I want to protect this proc from them.

  • SQL Guy 1 wrote:

    Thanks all for your replies.

    Some of you are asking for the full DDL and/or execution plan. Unfortunately, I cannot do that. At our place we have very strict rules about the security of data, structure, etc. Obfuscating the DDL and exec plan to abstract Table1, Column1, etc., won't help much.

    This proc executes in 1 sec, and I believe performance per se is not an issue. The exec plan does not recommend any indexes, all arrows are thin, and the plan is not large.

    What I believe is that it somehow clashes with other processes; unfortunately, our production monitoring does not provide info about those processes, and I want to protect this proc from them.

    Totally understood on the limitations.

    I just went through a similar thing with a high-usage reporting stored procedure.  "In the wild", it was only taking 681 ms to execute, and yet it was getting blocked.  After making sure the proc didn't contain scalar functions or mTVFs, I turned SET STATISTICS TIME, IO ON and found two "worktables" with a comparatively high number of reads that added up to "only" 1.6GB of IO.  Those turned out to be caused by "too much" data coming from a few tables.  Some of it was getting spooled, but it was coming from a couple of non-clustered index scans and one clustered index scan.
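
    For anyone following along, that diagnostic step is just this (the proc name and parameter are hypothetical):

    -- Show elapsed/CPU time and per-table logical reads on the Messages tab;
    -- spools and sorts show up there as "Worktable" entries
    SET STATISTICS TIME, IO ON;

    EXEC dbo.usp_MyReport @ReportDate = '20240101';

    SET STATISTICS TIME, IO OFF;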

    The optimizer gave no index hints because indexes were already being used in a way the optimizer considered "good enough".  They weren't, though.

    I isolated the parts of the query (it was a single-query reporting proc) and teased the code apart to come up with what I thought needed to be the key columns and INCLUDEs for new indexes, then compared them to the existing indexes.  None of the existing indexes could produce anything but a scan, so I added the new indexes with NO code changes.
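
    A sketch of what such a covering index looks like (all names made up):

    -- Key columns support the seek/join predicates; INCLUDE carries the
    -- remaining columns the query reads, so no lookups or scans are needed
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
        ON dbo.Orders (CustomerId, OrderDate)
        INCLUDE (OrderTotal, StatusCode);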

    The code now runs in 281 ms, but that's not the WIN... the WIN was that I no longer had index scans, the number of reads for the worktables plummeted to ZERO, and the reads on the indexes also plummeted.  The really good part is that the methodology used in that one proc is also used in several other procs; their resource usage plummeted as well and, like the proc I worked on, their durations were also cut in half.

    Of course, all of this was also verified in "Before'n'After" Actual Execution Plans.

    Just because SQL Server offers no missing-index hints doesn't mean that evaluating the indexes and the resource usage of the proc won't help.  The proc no longer experiences blocking because it no longer has to wait on full index scans, and neither do many of the other procs.  It was a double-edged sword... it not only fixed the proc that was being blocked, it also made that proc much less likely to be blocked by other things because they, too, were no longer doing scans.

    Your proc is probably a bit more complicated and will take you longer to analyze, but it'll be worth it.  "Performance is in the code and the related indexes... or not". 😀

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
