Question on delete job

  • Hello,

    I have a SQL job that deletes 10,000 records from a table in batches of 1,000 rows every 4 minutes.

    There is another job that updates records in the same table every 10 minutes. We have been noticing deadlocks and blocking since the day we implemented this.

    My questions are:

    The table has a clustered index. Does it hurt to have a clustered index on a table we delete from this frequently?

    We scheduled the jobs so that the 4-minute job starts at odd minutes and the 10-minute job starts at even minutes.

    The chances of them intersecting are very low.

    We still see a lot of page locks.

    Should we turn off page locking on the indexes (the ALLOW_PAGE_LOCKS index option)?

    Should we remove the clustered index from the table with the frequent deletes?

    Should we add a delay of 1 or 2 seconds between the delete batches?
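
    For reference, the delete follows roughly this shape; the table name, filter, and delay value below are placeholders, not our exact code:

      DECLARE @rows int = 1;

      -- Delete in batches of 1,000 until no qualifying rows remain.
      WHILE @rows > 0
      BEGIN
          DELETE TOP (1000)
          FROM dbo.TokenStaging                                   -- placeholder table
          WHERE ProcessedDate < DATEADD(MINUTE, -4, GETDATE());   -- placeholder filter

          SET @rows = @@ROWCOUNT;

          WAITFOR DELAY '00:00:01';   -- the 1- or 2-second pause in question
      END;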

    Please shed some light on this subject.

    Thank you

    Babu

  • Blocking is a natural consequence of inserting and deleting data and can't be avoided. In order for the ACID properties of the transaction to be maintained, SQL Server must place locks on the tables while it's doing inserts/updates/deletes. If you're also seeing lots of deadlocks, it sounds like you might be accessing the tables in different orders within the two sets of queries. You'll need to modify that in order to help avoid deadlocks.

    I have yet to see a performance problem involving blocking and deadlocks be resolved by removing the clustered index. You may have the clustered index in the wrong place, or your queries might not be referencing it correctly to take advantage of it, but removing it is unlikely to resolve your issue. You need to look to the queries themselves to understand if they are operating in an optimal fashion. Have you examined the execution plans? Are they running as fast as possible?

    I would focus on getting the access order the same between the two sets of queries and on tuning the queries to ensure they run faster.
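
    If you want to confirm which statements are involved, one common approach is to pull the deadlock graphs out of the built-in system_health Extended Events session. A sketch for SQL Server 2012 (older versions need slightly different XQuery):

      -- Read xml_deadlock_report events from system_health's ring buffer.
      SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
      FROM (
          SELECT CAST(st.target_data AS xml) AS TargetData
          FROM sys.dm_xe_sessions AS s
          JOIN sys.dm_xe_session_targets AS st
              ON s.address = st.event_session_address
          WHERE s.name = N'system_health'
            AND st.target_name = N'ring_buffer'
      ) AS src
      CROSS APPLY TargetData.nodes(
          'RingBufferTarget/event[@name="xml_deadlock_report"]') AS q(XEvent);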

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • baabhu (10/19/2012)


    We scheduled the jobs so that the 4-minute job starts at odd minutes and the 10-minute job starts at even minutes. The chances of them intersecting are very low.

    Not exactly. Think about it: if the first job takes more than a minute, the runs will conflict. You can't reliably control job overlap through scheduling alone.

    Instead, combine the jobs as Step 1 and Step 2 of a single job (see the sketch below), and also look into tuning the SQL scripts involved, as Grant mentioned above.
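
    A single Agent job with two steps can be built with msdb's job procedures; a rough sketch, where the job, step, database, and procedure names are placeholders:

      USE msdb;
      GO

      -- Step 1 runs the batched delete; on success it moves to Step 2,
      -- so the two workloads can never run at the same time.
      EXEC dbo.sp_add_jobstep
          @job_name          = N'TokenMaintenance',        -- placeholder
          @step_name         = N'Step 1 - batched delete',
          @subsystem         = N'TSQL',
          @database_name     = N'MyDatabase',              -- placeholder
          @command           = N'EXEC dbo.DeleteTokenBatches;',
          @on_success_action = 3;                          -- 3 = go to next step

      EXEC dbo.sp_add_jobstep
          @job_name      = N'TokenMaintenance',
          @step_name     = N'Step 2 - update',
          @subsystem     = N'TSQL',
          @database_name = N'MyDatabase',
          @command       = N'EXEC dbo.UpdateTokens;';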

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

  • baabhu (10/19/2012)


    Hello,

    I have a SQL job that deletes 10,000 records from a table in batches of 1,000 rows every 4 minutes.

    There is another job that updates records in the same table every 10 minutes. We have been noticing deadlocks and blocking since the day we implemented this.

    My questions are:

    The table has a clustered index. Does it hurt to have a clustered index on a table we delete from this frequently?

    We scheduled the jobs so that the 4-minute job starts at odd minutes and the 10-minute job starts at even minutes.

    The chances of them intersecting are very low.

    We still see a lot of page locks.

    Should we turn off page locking on the indexes (the ALLOW_PAGE_LOCKS index option)?

    Should we remove the clustered index from the table with the frequent deletes?

    Should we add a delay of 1 or 2 seconds between the delete batches?

    Please shed some light on this subject.

    Thank you

    Babu

    Without knowing table and data specifics, I'm going to make some observations that may or may not hold water; your mileage may vary:

    1) Removing the clustered index will decrease performance. The DELETE query should be using it to find the proper records, and eliminating it will lengthen the time the DELETE takes, exacerbating the problem. Make sure the DELETE query is optimized!

    2) The data files, of which there should be several, should be spread across multiple drives. Bottlenecking I/O is a bad thing. If your database is in one data file, consider moving it to multiple data files on multiple drives. This generally improves performance and in many cases reduces locking time, since you aren't waiting on I/O. Obviously too many files can be bad as well; it's a balance.

    3) Partition the table. This may provide some relief if the DELETE and the INSERT/UPDATE are mutually exclusive with regard to the partitions, as sketched below.
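
    On point 3: if the rows being deleted line up with a partition, the classic purge pattern is to switch that partition out to an empty table and truncate it, which is a metadata-only operation. A sketch with placeholder names (the switch target must have an identical schema and sit on the same filegroup):

      -- dbo.Tokens is assumed partitioned; dbo.Tokens_Purge is an empty
      -- table with identical structure on the same filegroup.
      ALTER TABLE dbo.Tokens
          SWITCH PARTITION 1 TO dbo.Tokens_Purge;   -- partition number is illustrative

      -- The switched-out rows can now be discarded cheaply.
      TRUNCATE TABLE dbo.Tokens_Purge;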

  • I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Grant Fritchey (10/19/2012)


    Blocking is a natural consequence of inserting and deleting data and can't be avoided. In order for the ACID properties of the transaction to be maintained, SQL Server must place locks on the tables while it's doing inserts/updates/deletes. If you're also seeing lots of deadlocks, it sounds like you might be accessing the tables in different orders within the two sets of queries. You'll need to modify that in order to help avoid deadlocks.

    I have yet to see a performance problem involving blocking and deadlocks be resolved by removing the clustered index. You may have the clustered index in the wrong place, or your queries might not be referencing it correctly to take advantage of it, but removing it is unlikely to resolve your issue. You need to look to the queries themselves to understand if they are operating in an optimal fashion. Have you examined the execution plans? Are they running as fast as possible?

    I would focus on getting the access order the same between the two sets of queries and on tuning the queries to ensure they run faster.

    Hi Grant Fritchey

    Thanks for your reply. Later we identified that an insert statement from the application was blocking the delete statement. The blocked process threshold was set to 1 second; we changed it to 3 seconds.

    We also changed the delete to use smaller batches with a delay of 0.05 seconds between them.

    Since then we mostly haven't seen deadlocks or blocking. It's still under observation.
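
    For anyone following along, the setting we changed is the server-wide blocked process threshold, an advanced sp_configure option; roughly:

      -- Advanced options must be visible before this option can be set.
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;

      -- Raise the threshold from 1 to 3 seconds; blocked process reports
      -- now fire only for blocks lasting longer than 3 seconds.
      EXEC sp_configure 'blocked process threshold', 3;
      RECONFIGURE;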

    Thank you.

  • Bhuvnesh (10/19/2012)


    baabhu (10/19/2012)


    We scheduled the jobs so that the 4-minute job starts at odd minutes and the 10-minute job starts at even minutes. The chances of them intersecting are very low.

    Not exactly. Think about it: if the first job takes more than a minute, the runs will conflict. You can't reliably control job overlap through scheduling alone.

    Instead, combine the jobs as Step 1 and Step 2 of a single job, and also look into tuning the SQL scripts involved, as Grant mentioned above.

    Hi Bhuvnesh,

    Thanks for your explanation. We changed our code to check the status of the jobs: if one job is executing, the other will not start. When the jobs would intersect, as you said, the second one exits and waits for its next turn.
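
    One way to do that check is against msdb's job activity tables; a sketch, with the job name as a placeholder:

      -- Exit this step if the other SQL Agent job currently has a running
      -- instance (started but not stopped) in the latest Agent session.
      IF EXISTS (
          SELECT 1
          FROM msdb.dbo.sysjobactivity AS ja
          JOIN msdb.dbo.sysjobs AS j
              ON j.job_id = ja.job_id
          WHERE j.name = N'TokenUpdateJob'   -- placeholder job name
            AND ja.start_execution_date IS NOT NULL
            AND ja.stop_execution_date IS NULL
            AND ja.session_id = (SELECT MAX(session_id)
                                 FROM msdb.dbo.syssessions)
      )
          RETURN;   -- wait for the next scheduled turn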

  • Brandie Tarvin (10/19/2012)


    I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

    This is an OLTP system with tokens that need to be updated every 4 minutes. We use a staging table to do the updates, move the rows to the final table, and then remove them from staging.
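
    Roughly, each cycle looks like this (table and column names are placeholders):

      -- Apply the staged token updates to the final table.
      BEGIN TRANSACTION;

      UPDATE t
      SET    t.TokenValue = s.TokenValue      -- placeholder column
      FROM   dbo.Tokens       AS t            -- placeholder final table
      JOIN   dbo.TokenStaging AS s            -- placeholder staging table
          ON s.TokenId = t.TokenId;

      COMMIT TRANSACTION;

      -- The processed staging rows are then removed separately by the
      -- batched delete job, not in the same transaction.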

  • baabhu (10/20/2012)


    Brandie Tarvin (10/19/2012)


    I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

    This is an OLTP system with tokens that need to be updated every 4 minutes. We use a staging table to do the updates, move the rows to the final table, and then remove them from staging.

    Why not use TRUNCATE?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin (10/22/2012)


    baabhu (10/20/2012)


    Brandie Tarvin (10/19/2012)


    I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

    This is an OLTP system with tokens that need to be updated every 4 minutes. We use a staging table to do the updates, move the rows to the final table, and then remove them from staging.

    Why not use TRUNCATE?

    By the time the job starts deleting, we have a 0.5-second delay inside the job between batches so that new tokens can be inserted into the table. The token insertion happens in a different process from the token delete job. That is the reason we have not used TRUNCATE.

  • If you could combine the jobs into one, TRUNCATE actually uses less restrictive locking than DELETE does. It also logs only page deallocations instead of individual rows, which lets it run much faster.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.
