Help fixing an LCK_M_IX issue

  • I have a daily job which deletes several thousand records from a large table between 3 AM and 4 AM. Data is inserted into this table throughout the day at a very high frequency; to be precise, around half a million records get inserted per hour. Between 3 AM and 4 AM the wait stats go very high because of LCK_M_IX locks caused by the delete job. Is there anything that can be done to get away from these locks? Would a change in isolation level help?
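
    (For reference, the cumulative LCK_M_IX wait can be read from the standard wait-stats DMV; this is a generic check, not specific to this setup:)

    -- Cumulative LCK_M_IX waits since the last instance restart
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'LCK_M_IX';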

    thx

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • If you are doing mass inserts and deletes on the same table, there is no magic bullet to speed that up. DML statements require more overhead by default to maintain index structures, check foreign keys, and so on, and locks are needed to ensure the integrity of the database. The only solution to your problem is not to do both at the same time. If you do, you are working against your database and how it was designed to work. I would shut one of the two off and not run them together.

  • Thanks for responding, Jeff. Unfortunately I can't stop the inserts at any time, as it is a live table; inserts happen around the clock. But we also have to archive data daily, so we have to run the delete job once a day. Now I am wondering: isn't this a very common scenario in production environments? How does it get managed then?

    jeff.mason (5/5/2016)


    If you are doing mass inserts and deletes on the same table, there is no magic bullet to speed that up. DML statements require more overhead by default to maintain index structures, check foreign keys, and so on, and locks are needed to ensure the integrity of the database. The only solution to your problem is not to do both at the same time. If you do, you are working against your database and how it was designed to work. I would shut one of the two off and not run them together.

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • Other than high values of something that doesn't mean much on its own, is there a problem here?

    Are the inserts getting adversely affected by the deletes? Is the delete running too slowly?
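
    One way to check whether the inserts are actually being blocked while the delete runs (a generic DMV query, not specific to this system):

    -- Sessions currently waiting behind another session's locks
    SELECT session_id, blocking_session_id, wait_type, wait_time
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;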

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • This is how I do deletes when I don't want to impact production and I don't have a window dedicated to the deletes:

    1) Create a small table in the database with an autoincrement (IDENTITY) primary key and a second INT column.

    2) Insert into that second INT column the primary key values of the rows you want to delete.

    3) Set up a job to run T-SQL.

    4) The T-SQL for this job sets up a loop over the stored primary keys.

    5) Loop through the list and delete one row at a time.

    If time is not absolutely critical (this will make the delete process take a little longer in an absolute sense, though probably not if you have blocking today), this will reduce your impact on the system while achieving your goals.

    It looks like this:

    CREATE TABLE RowsToDelete (
        PK  INT IDENTITY(1,1) PRIMARY KEY,  -- autoincrement key used to walk the list
        RID INT                             -- primary key value of a row to delete
    );

    -- Capture the keys of the rows that qualify for deletion
    INSERT RowsToDelete (RID)
    SELECT yourPK FROM yourtable WHERE yourcriteriafordeletion;

    DECLARE @Counter INT;
    DECLARE @ID INT;
    DECLARE @Max INT;

    SELECT @Counter = 1;
    SELECT @Max = COUNT(*) FROM RowsToDelete;

    -- Walk the work table and delete one row at a time
    WHILE @Counter <= @Max
    BEGIN
        SELECT @ID = RID FROM RowsToDelete WHERE PK = @Counter;

        DELETE FROM yourtable WHERE yourPK = @ID;

        SELECT @Counter = @Counter + 1;
    END;

  • I don't delete one row at a time, but if there's a where clause that defines which rows to delete, it's pretty easy to delete in batches in a WHILE loop, using TOP on the delete.

    In T-SQL, something like:

    DECLARE @Rows INT = 1;

    WHILE @Rows > 0
    BEGIN
        -- Delete one batch; the loop ends when a batch comes back empty
        DELETE TOP (1000) FROM SomeTable WHERE SomeCondition;
        SET @Rows = @@ROWCOUNT;
    END;

    That keeps both the locking and the tran log growth down.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I came up with the one-at-a-time approach when I was dealing with a third-party app that mandated dozens of indexes and had 300 GB of data in it; even batching by 1000 rows was causing contention and tran log issues. Once it was established that the delete process could run silently in the background during the day with no impact on performance, it became standard for us. This is just for archiving where a table is large and cannot be partitioned. But as you say, batching it causes the tran log and locking issues to evaporate.

  • First thing to do is make sure you have an index on the key field(s) being used to delete from the big table. If not, you are totally hosed by the table scan you will get for each delete.

    Second is to use EXPLICIT TRANSACTIONs for the deletes. Be sure to check for errors and then commit or roll back each one.

    Third is to put a WAITFOR DELAY '00:00:00.1' or something in the loop to give the server some breathing room for the INSERTs.

    I have done this countless times at clients over the years. Works like a charm.

    Note that IF you have that index then you can batch things up like Gail said. Even if you can only manage 50 rows at a time, that is a HUGE improvement over the log-buffer-flush-inducing one-at-a-time method. It really will help. On good systems I can often manage batches of 10K-100K rows without undue locking or tlog issues. Putting these pieces together looks something like the sketch below.
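
    A minimal sketch combining the index, explicit-transaction, and delay points; the table name, delete criterion, and batch size are placeholders, not from the thread:

    -- Assumes a supporting index exists on the delete key, e.g.:
    -- CREATE INDEX IX_BigTable_ArchiveDate ON dbo.BigTable (ArchiveDate);

    DECLARE @Rows INT = 1;

    WHILE @Rows > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;

            -- Delete one batch; 1000 is a placeholder batch size
            DELETE TOP (1000) FROM dbo.BigTable
            WHERE ArchiveDate < DATEADD(DAY, -90, GETDATE());

            SET @Rows = @@ROWCOUNT;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            THROW;  -- surface the error so the job reports failure
        END CATCH;

        -- Give the concurrent INSERTs some breathing room
        WAITFOR DELAY '00:00:00.1';
    END;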

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Wise words, Gail! Actually the frequency of inserts is so high that even an extra wait of a few ms starts getting reflected in a third-party tool we are using for health monitoring. The deletes themselves are running absolutely fine, as the PK is used in the WHERE clause and I am running them in batches as you mentioned; they finish in 8 minutes (I am using a WAITFOR DELAY of 1 second as well, thanks Kevin). In those 8 minutes, the total waits for the inserts come to around 220 minutes, and the monitoring software starts raising alarms. But those 220 minutes are spread across some 25,000 inserts in those 8 minutes, which works out to roughly half a second of wait per insert. So it's not really very slow and nothing to be concerned about. I think I can live with the red colors in the tool without much concern.

    thanks all

    GilaMonster (5/5/2016)


    Other than high values of something that doesn't mean much on its own, is there a problem here?

    Are the inserts getting adversely affected by the deletes? Is the delete running too slowly?

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.
