Question about Locking

  • Hi

    I've received deadlock messages from a scheduled job. I've now set up a SQL Profiler trace to capture these deadlocks, but there have been no deadlocks since the trace has been running.

    The job moves records older than a couple of months from tblLogs to a_tblLogs in another DB. This is happening while records are being inserted into tblLogs.

    While this archive is running, I can see waits for page locks on tblLogs from the current inserts.

    I thought this would NOT happen, because the data pages being archived (the old ones) are not touched by the current inserts. sp_lock reports locks held by the archive process on tblLogs. Logically I see this as deleting from the bottom of the list while adding to the top of the list, so the two should be able to happen at the same time.

    Or could this be locking on the non-clustered index? (See the lock query sketch after the archive code below.)

    Checking the data, I saw that 99% of it arrives in LogDate order, so the higher the recID value, the later the LogDate. However, some old records (about 1%) are inserted now, so they have old LogDates but large recID values.

    Am I correct that, since these currently inserted old records land on data pages that new records are also being added to, this could be the cause of the contention?

    CREATE TABLE dbo.[tblLogs](
        [recID] [bigint] IDENTITY(1,1) NOT NULL,
        [customid] [int] NOT NULL,
        [LogDate] [datetime] NOT NULL,
        [LogEntry] [varchar](255) NOT NULL,
        CONSTRAINT [PK_tblLogs] PRIMARY KEY CLUSTERED ([recID] ASC)
    )

    + other index: non-clustered on (customid, LogDate)

    CREATE TABLE ArchiveDB.dbo.[a_tblLogs](
        [recID] [bigint] NOT NULL,
        [customid] [int] NOT NULL,
        [LogDate] [datetime] NOT NULL,
        [LogEntry] [varchar](255) NOT NULL,
        [archiveid] [int] NOT NULL,
        CONSTRAINT [PK_a_tblLogs] PRIMARY KEY CLUSTERED ([recID] ASC)
    )

    Archive process, in batches of 50,000 (pseudocode):

    -- loop while there are still records to archive
    BEGIN TRAN

        INSERT INTO ArchiveDB.dbo.a_tblLogs
        SELECT TOP 50000 *
        FROM tblLogs
        WHERE LogDate < DATEADD(mm, -2, GETDATE())
        ORDER BY recID ASC

        DELETE FROM tblLogs
        FROM ArchiveDB.dbo.a_tblLogs AS archive
        WHERE tblLogs.recID = archive.recID

    COMMIT TRAN
    -- end loop
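
    To check whether those page locks are on the clustered index or the non-clustered index, a query along these lines could be run while the archive batch is executing (just a sketch: it uses sys.dm_tran_locks, which needs SQL Server 2005 or later, instead of sp_lock, and resolves each locked page/key back to an index name via sys.partitions):

    SELECT  l.request_session_id,
            l.resource_type,
            l.request_mode,
            l.request_status,
            OBJECT_NAME(p.object_id) AS table_name,
            i.name                   AS index_name
    FROM    sys.dm_tran_locks AS l
    JOIN    sys.partitions    AS p ON p.hobt_id   = l.resource_associated_entity_id
    JOIN    sys.indexes       AS i ON i.object_id = p.object_id
                                  AND i.index_id  = p.index_id
    WHERE   l.resource_database_id = DB_ID()
            AND l.resource_type IN ('PAGE', 'KEY');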

  • Deadlocks are different from locking/blocking in SQL.

    Deadlocks: two processes each hold a lock and each is waiting for an incompatible lock on a resource the other one holds, so neither can proceed. The process that is cheapest to roll back is normally chosen as the deadlock victim.

    Locks: inserting/updating/deleting in a table requires an exclusive lock, which is incompatible with a shared lock (used by SELECT).

    While you are selecting the data to be copied to the archive DB, no inserting/updating/deleting can occur on the rows/pages you have locked.

    The method I have used to handle deadlock victims is to SET DEADLOCK_PRIORITY to LOW in the sproc that should be chosen as the deadlock victim; when the deadlock fires (the trace flag is, I think off the top of my head, 3205/04 to log the details), the victim should retry after x minutes/seconds, and that delay should not be static. I imagine it is the insert statement that gets the deadlock message. Google SET DEADLOCK_PRIORITY. The retry will then fire and wait its turn, as it may be blocked waiting for the archiving to finish.
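
    For example, something along these lines (an untested sketch: the retry count and delay are arbitrary placeholders, the archive statements are represented by a comment, error 1205 is the deadlock-victim error, and THROW assumes SQL Server 2012 or later; use RAISERROR on older versions):

    SET DEADLOCK_PRIORITY LOW;        -- this batch volunteers to be the victim

    DECLARE @retries int = 0;

    WHILE @retries < 5
    BEGIN
        BEGIN TRY
            BEGIN TRAN;

            -- the archive INSERT ... SELECT / DELETE batch would go here

            COMMIT TRAN;
            BREAK;                    -- success, leave the retry loop
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRAN;

            IF ERROR_NUMBER() = 1205  -- chosen as deadlock victim
            BEGIN
                SET @retries = @retries + 1;
                WAITFOR DELAY '00:00:05';   -- back off, then retry
            END
            ELSE
            BEGIN
                THROW;                -- anything else is a real error
            END
        END CATCH
    END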

  • Sorry, the trace flag is 1204/05.

  • You need to capture the deadlock and provide the output to establish the source of the problem.

    It could be caused by the delete in this case, but there could be any number of different causes, from running SQL jobs to applications and reports.
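
    If Profiler keeps missing them, one low-overhead alternative (a sketch; trace flag 1222 gives the XML-style deadlock graph on SQL Server 2005 and later, while 1204 gives the older node/resource format) is to have SQL Server write the deadlock details straight to the error log:

    -- log deadlock details for all sessions until the instance restarts
    DBCC TRACEON (1222, -1);
    -- or, for the older-style output:
    -- DBCC TRACEON (1204, -1);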

  • Deadlocking that occurs among processes that don't seem to involve the same rows or pages is often caused by attempts at lock escalation. When a process takes an exclusive (X) lock on a resource, it also takes an "intent exclusive" (IX) lock on the "next highest" resource in case escalation becomes necessary. When two processes have IX locks on a resource and they each attempt to escalate to X locks, a deadlock occurs.

    For example, process 1 takes an X lock on some rows in Table A; at the same time, it will take an IX lock on Table A. Process 2 takes an X lock on some other rows in Table A and also takes an IX lock on Table A. The IX locks on Table A are compatible - they can exist at the same time.

    Then, process 1 hits an escalation threshold and attempts to convert its IX lock on Table A to an X lock. The requested X lock is incompatible with process 2's IX lock on Table A, so process 1 holds its IX lock and waits for the X lock. In the meantime, process 2 hits an escalation threshold and attempts to convert its IX lock on Table A to an X lock. Again, the requested X lock is incompatible with process 1's IX lock, so process 2 holds its IX lock and waits for the X lock.

    Now the processes are deadlocked - each is holding a lock that is blocking the other but neither can release its lock until the other does. SQL Server detects this deadlock and selects one process as the "deadlock victim" and rolls it back.

    As you can see, this can happen even if the two processes are not actually updating the same rows. Lock escalation occurs at a fairly low threshold - when a process acquires 5,000 exclusive locks (row or page locks) on a single unpartitioned table, escalation is triggered and the process will attempt to convert its intent exclusive lock on the table to an exclusive lock. If your processes are inserting, updating, or deleting more than 5,000 rows at a time, it's likely that they are trying to convert IX locks on the table to X locks and deadlocking with each other occasionally.
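
    One practical way to test that theory is to keep each batch under the (approximate) 5,000-lock threshold so escalation is never attempted. A sketch, assuming a 4,500-row batch size is acceptable for your job and with the archive INSERT reduced to a comment:

    DECLARE @cutoff datetime, @rows int;
    SET @cutoff = DATEADD(mm, -2, GETDATE());
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        BEGIN TRAN;

        -- the INSERT into ArchiveDB.dbo.a_tblLogs would go here,
        -- limited to the same TOP (4500) rows

        -- small enough that the statement never reaches ~5,000 locks
        DELETE TOP (4500)
        FROM dbo.tblLogs
        WHERE LogDate < @cutoff;

        SET @rows = @@ROWCOUNT;

        COMMIT TRAN;
    END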

    If it's important that one process always take precedence over another when deadlocking occurs, you can use the SET DEADLOCK_PRIORITY statement within the code to give that process a higher priority than the other (or vice versa - give a process that should always be the deadlock victim a low priority).

    Resources from Books Online

    Lock Escalation

    Lock Compatibility

    SET DEADLOCK_PRIORITY

    Jason Wolfkill
