Question on delete job
Posted Friday, October 19, 2012 2:08 AM
baabhu (Ten Centuries)
Hello,

I have a SQL job that deletes 10,000 records, in batches of 1,000 rows, every 4 minutes.

Another job updates records in the same table every 10 minutes. We have been seeing deadlocks and blocking since the day we implemented this.

My questions are:

The table has a clustered index. Does it hurt to have a clustered index on a table we delete from this frequently?

We managed to schedule the jobs so that the 4-minute job starts on odd minutes and the 10-minute job starts on even minutes, so the chances of them intersecting are very rare. We still see a lot of page locks.

Should we turn off the page-locking option (ALLOW_PAGE_LOCKS) on the indexes?

Should we remove the clustered index from the table with the frequent deletes?

Should we keep a delay of 1 or 2 seconds between the delete batches?

Please shed some light on this subject.

Thank you
Babu
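
For reference, the delete job is shaped roughly like this (a minimal sketch; the table name, marker column, and delay are made-up stand-ins, not the real schema):

    DECLARE @rows int = 1;
    WHILE @rows > 0
    BEGIN
        -- Delete one batch; TOP keeps each transaction small.
        DELETE TOP (1000)
        FROM dbo.TokenStaging          -- hypothetical staging table
        WHERE ProcessedFlag = 1;       -- hypothetical "already moved" marker

        SET @rows = @@ROWCOUNT;

        WAITFOR DELAY '00:00:01';      -- optional pause between batches
    END;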



Post #1374675
Posted Friday, October 19, 2012 3:43 AM
Grant Fritchey (SSChampion)
Blocking is a natural consequence of inserting and deleting data and can't be avoided. In order for the ACID properties of the transaction to be maintained, SQL Server must place locks on the tables while it's doing inserts/updates/deletes. If you're also seeing lots of deadlocks, it sounds like you might be accessing the tables in different orders within the two sets of queries. You'll need to modify that in order to help avoid deadlocks.

I have yet to see a performance problem involving blocking and deadlocks be resolved by removing the clustered index. You may have the clustered index in the wrong place, or your queries might not be referencing it correctly to take advantage of it, but removing it is unlikely to resolve your issue. You need to look to the queries themselves to understand if they are operating in an optimal fashion. Have you examined the execution plans? Are they running as fast as possible?

I would focus on getting the access order the same between the two sets of queries and on tuning the queries to ensure they run faster.
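
If you want to see exactly which statements are deadlocking, the system_health Extended Events session (present by default in SQL Server 2008 and later) already captures deadlock graphs; a query along these lines (a sketch) pulls them from the ring buffer:

    -- Pull recent deadlock graphs from the system_health session.
    SELECT xed.event_data.query('(data/value/deadlock)[1]') AS DeadlockGraph
    FROM (
        SELECT CAST(st.target_data AS xml) AS target_data
        FROM sys.dm_xe_session_targets AS st
        JOIN sys.dm_xe_sessions AS s
            ON s.address = st.event_session_address
        WHERE s.name = 'system_health'
          AND st.target_name = 'ring_buffer'
    ) AS tab
    CROSS APPLY tab.target_data.nodes(
        'RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_data);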


----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1374708
Posted Friday, October 19, 2012 4:14 AM
Bhuvnesh (SSCrazy)
baabhu (10/19/2012)
We managed to schedule the jobs so that the 4-minute job starts on odd minutes and the 10-minute job starts on even minutes, so the chances of them intersecting are very rare.

Not exactly. Think about it: if the first job takes more than 1 minute, the runs will conflict, so you can't reliably prevent overlap through scheduling alone. Instead, combine the jobs as Step 1 and Step 2 of a single job (see the sketch below), and also look into the SQL scripts involved and tune them, as Grant mentioned above.
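
A rough sketch of that combination (job, step, and procedure names are invented for illustration; the job still needs a schedule and a target server):

    EXEC msdb.dbo.sp_add_job
        @job_name = N'Token maintenance';

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Token maintenance',
        @step_name = N'Step 1: batched delete',
        @subsystem = N'TSQL',
        @command = N'EXEC dbo.DeleteTokenBatch;',  -- hypothetical procedure
        @on_success_action = 3;                    -- 3 = go to the next step

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Token maintenance',
        @step_name = N'Step 2: apply updates',
        @subsystem = N'TSQL',
        @command = N'EXEC dbo.UpdateTokens;';      -- hypothetical procedure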


-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done
Post #1374716
Posted Friday, October 19, 2012 10:33 AM
(SSC Veteran)
baabhu (10/19/2012)
Hello,

I have a SQL job that deletes 10,000 records, in batches of 1,000 rows, every 4 minutes.

Another job updates records in the same table every 10 minutes. We have been seeing deadlocks and blocking since the day we implemented this.

My questions are:

The table has a clustered index. Does it hurt to have a clustered index on a table we delete from this frequently?

We managed to schedule the jobs so that the 4-minute job starts on odd minutes and the 10-minute job starts on even minutes, so the chances of them intersecting are very rare. We still see a lot of page locks.

Should we turn off the page-locking option (ALLOW_PAGE_LOCKS) on the indexes?

Should we remove the clustered index from the table with the frequent deletes?

Should we keep a delay of 1 or 2 seconds between the delete batches?

Please shed some light on this subject.

Thank you
Babu



Without knowing table and data specifics, I'm going to make some observations that may or may not hold water; your mileage may vary:
1) Removing the clustered index will decrease performance. The DELETE query should be using it to find the proper records, and eliminating it will lengthen the time the DELETE takes, exacerbating the problem. Make sure the DELETE query is optimized!
2) The data files, of which there should be several, should be spread across multiple drives. Bottlenecking I/O is a bad thing. If your database is in one data file, consider moving it to multiple data files on multiple drives. This generally improves performance and in many cases reduces locking time, since you aren't waiting on I/O. Obviously, too many files can be bad as well; it's a balance.
3) Partition the table. This may provide some relief if the DELETE and the INSERT/UPDATE are mutually exclusive with regard to the partition; see the sketch below.
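
A minimal sketch of that idea (the bucket column, boundary values, and table definition are assumptions for illustration):

    -- Partition by an integer "bucket" column so the DELETE job and the
    -- INSERT/UPDATE job each touch their own partition.
    CREATE PARTITION FUNCTION pfTokenBucket (int)
    AS RANGE LEFT FOR VALUES (0, 1);

    CREATE PARTITION SCHEME psTokenBucket
    AS PARTITION pfTokenBucket ALL TO ([PRIMARY]);

    CREATE TABLE dbo.TokenStaging
    (
        TokenID    int IDENTITY NOT NULL,
        Bucket     int NOT NULL,          -- which job "owns" the row
        TokenValue varchar(100) NOT NULL,
        CONSTRAINT PK_TokenStaging
            PRIMARY KEY CLUSTERED (Bucket, TokenID)
    ) ON psTokenBucket (Bucket);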
Post #1374924
Posted Friday, October 19, 2012 10:41 AM
Brandie Tarvin (SSCertifiable)
I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

Brandie Tarvin, MCITP Database Administrator

Webpage: http://www.BrandieTarvin.net
LiveJournal Blog: http://brandietarvin.livejournal.com/
On LinkedIn!, Google+, and Twitter.

Freelance Writer: Shadowrun
Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.
Post #1374931
Posted Saturday, October 20, 2012 8:34 AM
baabhu (Ten Centuries)
Grant Fritchey (10/19/2012)
Blocking is a natural consequence of inserting and deleting data and can't be avoided. In order for the ACID properties of the transaction to be maintained, SQL Server must place locks on the tables while it's doing inserts/updates/deletes. If you're also seeing lots of deadlocks, it sounds like you might be accessing the tables in different orders within the two sets of queries. You'll need to modify that in order to help avoid deadlocks.

I have yet to see a performance problem involving blocking and deadlocks be resolved by removing the clustered index. You may have the clustered index in the wrong place, or your queries might not be referencing it correctly to take advantage of it, but removing it is unlikely to resolve your issue. You need to look to the queries themselves to understand if they are operating in an optimal fashion. Have you examined the execution plans? Are they running as fast as possible?

I would focus on getting the access order the same between the two sets of queries and on tuning the queries to ensure they run faster.


Hi Grant,

Thanks for your reply. We later identified that an insert statement from the application was blocking the delete statement. The blocked process threshold was set to 1 second; we changed it to 3 seconds.
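
Assuming that threshold is the server-level "blocked process threshold" option (an interpretation; values are in seconds), the change looks like this:

    EXEC sp_configure 'show advanced options', 1;   -- needed to see the option
    RECONFIGURE;
    EXEC sp_configure 'blocked process threshold', 3;
    RECONFIGURE;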

We also changed the delete to use smaller batches, with a delay of 0.05 seconds between batches.

Since then we have not seen deadlocks or blocking. It is still under observation.

Thank you.
Post #1375106
Posted Saturday, October 20, 2012 8:38 AM
baabhu (Ten Centuries)
Bhuvnesh (10/19/2012)
baabhu (10/19/2012)
We managed to schedule the jobs so that the 4-minute job starts on odd minutes and the 10-minute job starts on even minutes, so the chances of them intersecting are very rare.

Not exactly. Think about it: if the first job takes more than 1 minute, the runs will conflict, so you can't reliably prevent overlap through scheduling alone. Instead, combine the jobs as Step 1 and Step 2 of a single job, and also look into the SQL scripts involved and tune them, as Grant mentioned above.


Hi Bhuvnesh,

Thanks for your explanation. We changed our code to check the status of the jobs: if one job is executing, the other will not start. When the jobs intersect, as you said, the second one exits and waits for its next turn.
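
The check looks roughly like this (a sketch; the job name is a made-up placeholder):

    -- Exit if the other job currently has a running execution.
    IF EXISTS (
        SELECT 1
        FROM msdb.dbo.sysjobactivity AS ja
        JOIN msdb.dbo.sysjobs AS j
            ON j.job_id = ja.job_id
        WHERE j.name = N'Token update job'            -- hypothetical name
          AND ja.session_id = (SELECT MAX(session_id)
                               FROM msdb.dbo.syssessions)  -- current Agent session
          AND ja.start_execution_date IS NOT NULL
          AND ja.stop_execution_date IS NULL
    )
    BEGIN
        PRINT 'Other job is still running; waiting for the next turn.';
        RETURN;
    END;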

Post #1375107
Posted Saturday, October 20, 2012 11:07 AM
baabhu (Ten Centuries)
Brandie Tarvin (10/19/2012)
I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

This is an OLTP system whose tokens need to be updated every 4 minutes. We use a staging table to do the updates, move the rows to the final table, and then remove them from staging.
Post #1375114
Posted Monday, October 22, 2012 4:31 AM
Brandie Tarvin (SSCertifiable)
baabhu (10/20/2012)
Brandie Tarvin (10/19/2012)
I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

This is an OLTP system whose tokens need to be updated every 4 minutes. We use a staging table to do the updates, move the rows to the final table, and then remove them from staging.


Why not use TRUNCATE?


Brandie Tarvin, MCITP Database Administrator

Webpage: http://www.BrandieTarvin.net
LiveJournal Blog: http://brandietarvin.livejournal.com/
On LinkedIn!, Google+, and Twitter.

Freelance Writer: Shadowrun
Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.
Post #1375325
Posted Monday, October 22, 2012 7:14 AM
baabhu (Ten Centuries)
Brandie Tarvin (10/22/2012)
baabhu (10/20/2012)
Brandie Tarvin (10/19/2012)
I have an odd question. Why do you need to delete 10,000 records from your database every 4 minutes?

This is an OLTP system whose tokens need to be updated every 4 minutes. We use a staging table to do the updates, move the rows to the final table, and then remove them from staging.

Why not use TRUNCATE?


By the time the job starts deleting, new tokens are already being inserted into the table; we keep a delay of 0.5 seconds between batches inside the job to give those inserts room. The token inserts come from a different process than the token delete job, so the staging table always holds rows that have not yet been processed. That is why we have not used TRUNCATE: it would remove everything, including the newly inserted tokens.
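
So the delete has to target only rows that have already been moved, along these lines (a sketch; the table names and join key are hypothetical):

    -- Remove only staging rows that already exist in the final table,
    -- leaving concurrently inserted, not-yet-processed tokens alone.
    DELETE TOP (1000) s
    FROM dbo.TokenStaging AS s
    WHERE EXISTS (
        SELECT 1
        FROM dbo.TokenFinal AS f
        WHERE f.TokenID = s.TokenID
    );

    WAITFOR DELAY '00:00:00.500';   -- pause so new inserts are not starved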
Post #1375410