SQL Deadlock

  • 6pravin9@gmail.com

    SSC Enthusiast

    Points: 191

    I have a simple stored procedure that inserts a single record from the application.

    When I execute it manually, the response time is under 1 second.

    But when we test with multiple users at the same time, it blocks/deadlocks (we tried with only 10 users).

    The table has around 1 billion rows without partitioning, and it also has some indexes.

    It works fine with an applock and TABLOCK, but the response time is then around 30 seconds; my requirement is a response time under 2 seconds.

    Transaction (Process ID 178) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
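
    A common application-side mitigation for the error above is to catch error 1205 (the deadlock-victim error) and retry the transaction. A minimal T-SQL sketch, assuming a hypothetical procedure name dbo.InsertRecord and a hypothetical parameter:

    ```sql
    -- Retry-on-deadlock sketch (dbo.InsertRecord and @SomeValue are hypothetical).
    -- Error 1205 is the "chosen as the deadlock victim" error quoted above.
    DECLARE @retries int = 3;

    WHILE @retries > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            EXEC dbo.InsertRecord @SomeValue = 42;  -- hypothetical call
            COMMIT TRANSACTION;
            BREAK;  -- success: stop retrying
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
            IF ERROR_NUMBER() = 1205 AND @retries > 1
                SET @retries -= 1;  -- deadlock victim: try again
            ELSE
                THROW;  -- any other error, or retries exhausted: re-raise
        END CATCH;
    END;
    ```

    A retry only masks the symptom, though; the deadlock itself still needs diagnosing.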

  • sgmunson

    SSC Guru

    Points: 110551

    Well, with a billion rows, an insert might also have to compete with other processes that hold locks on the table. It's not really a "simple insert" when the target table has a billion rows. That table is rather large. Why do you have a billion rows? What does each row in this table represent? You're going to need to know which process is causing you to deadlock. Having to read from a billion-row table could easily cause resource issues, and you might well be short on RAM, slowing the reads down enough to cause deadlocks. But it could just as easily be an early sign of a hardware problem with the disk, or you could be running out of disk space and finding more is challenging, so there's a lot of "churn" that slows everything down. You'll need to talk to your DBA and find out what's going on.
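
    To find out which processes are deadlocking, one option is to pull recent deadlock graphs from the default system_health Extended Events session. A sketch, assuming the default ring buffer target is still enabled on this instance:

    ```sql
    -- Recent deadlock graphs captured by the built-in system_health session.
    -- Each returned XML fragment names the processes and resources involved.
    SELECT CAST(xet.target_data AS xml)
               .query('//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlock_report
    FROM sys.dm_xe_session_targets AS xet
    JOIN sys.dm_xe_sessions AS xes
      ON xes.address = xet.event_session_address
    WHERE xes.name = 'system_health'
      AND xet.target_name = 'ring_buffer';
    ```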

    Steve (aka sgmunson) 🙂

  • 6pravin9@gmail.com

    SSC Enthusiast

    Points: 191

    We have 256 GB of RAM. We archived 50% of the rows; still the same issue. We have SSD disks, so I don't think it's a hardware issue.

  • Sergiy

    SSC Guru

    Points: 109859

    The table is over-indexed in multiple desperate attempts to fix performance issues caused by lousy design and bad choice of the clustered index.

    When inserting a new row, you also add records to the leaf level of each non-clustered index, causing page splits and index re-scans.

    Page splits cause lock escalation from the initial row level to page level and up to a table lock. While this escalation is happening, it might collide with another escalation in a parallel process - that's where you get a deadlock.

    Applying a table-level lock straight away removes the possibility of such a collision.

    Optimising the index structure should reduce the response time.

    Actually, half of the columns should not be in this table at all.
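
    If over-indexing is the suspect, a starting point is to compare reads against write maintenance cost per index. A diagnostic sketch, assuming a hypothetical table name dbo.BigTable; indexes with high writes and near-zero reads are candidates for removal (note that usage stats reset on instance restart):

    ```sql
    -- Non-clustered indexes on dbo.BigTable (hypothetical name) that are
    -- maintained on every insert but rarely used for reads.
    SELECT  i.name AS index_name,
            COALESCE(us.user_seeks + us.user_scans + us.user_lookups, 0) AS reads,
            COALESCE(us.user_updates, 0) AS writes
    FROM    sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS us
           ON us.object_id   = i.object_id
          AND us.index_id    = i.index_id
          AND us.database_id = DB_ID()
    WHERE   i.object_id = OBJECT_ID('dbo.BigTable')
      AND   i.type_desc = 'NONCLUSTERED'
    ORDER BY writes DESC;
    ```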
