I have a SQL job that deletes 10,000 records in batches of 1,000 rows, running every 4 minutes.
There is another job that updates records in the same table every 10 minutes. We have been noticing deadlocks and blocking since the date we implemented this.
The table has a clustered index. Does having a clustered index cause problems when deleting that frequently?
We managed to schedule it so that the 4-minute job starts on odd minutes and the 10-minute job starts on even minutes, so the chance of them overlapping is very low.
We still see a lot of page locks.
Should we disable the page-locking option (ALLOW_PAGE_LOCKS) on the indexes?
Should we remove the clustered index from a table with frequent deletes?
Should we add a delay of 1 or 2 seconds between the delete batches?
Please shed some light on this subject.
Without knowing table and data specifics, I'm going to make some observations that may or may not hold water; your mileage may vary:
1) Removing the clustered index will decrease performance. The DELETE query should be using it to locate the proper records, and eliminating it will lengthen the time the DELETE query takes, exacerbating the problem. Make sure the DELETE query is optimized!
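As a sketch of what an optimized batched delete can look like (the table, column, and retention window here are hypothetical, not from your system), deleting in small batches with a short pause between them keeps each transaction brief and gives the update job a chance to get its locks in between. This also answers the "delay of 1 or 2 seconds" question: yes, a short WAITFOR between batches usually helps.

```sql
-- Hypothetical example: dbo.EventLog with a clustered index on EventDate.
-- TOP (1000) keeps each delete transaction short, so locks are held briefly;
-- the WHERE clause should seek on the clustered index key, not scan.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (1000)
    FROM dbo.EventLog
    WHERE EventDate < DATEADD(DAY, -30, GETDATE());

    SET @rows = @@ROWCOUNT;

    -- Brief pause between batches lets the concurrent update job run.
    WAITFOR DELAY '00:00:02';
END
```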
2) The data files (of which there should be several) should be spread across multiple drives. Bottlenecking I/O is a bad thing. If your database is in one data file, consider moving it to multiple data files on multiple drives. This generally improves performance and in many cases reduces locking time, since you aren't waiting on I/O. Obviously too many files can be bad as well; it's a balance.
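For what it's worth, adding a data file on another drive is a one-statement change (database name, file name, and path below are made up for illustration):

```sql
-- Hypothetical: add a second data file on a separate physical drive.
ALTER DATABASE MyDatabase
ADD FILE (
    NAME = MyDatabase_Data2,
    FILENAME = 'E:\SQLData\MyDatabase_Data2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);
```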
3) Partition the table. This may provide some relief if the DELETE and the INSERT/UPDATE are mutually exclusive with regard to the partitions they touch.
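A minimal sketch of date-based partitioning, assuming the deletes remove old rows while the updates touch recent ones (all names and boundary dates below are hypothetical):

```sql
-- Hypothetical monthly partitioning on the date column.
CREATE PARTITION FUNCTION pfEventDate (datetime)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME psEventDate
    AS PARTITION pfEventDate ALL TO ([PRIMARY]);

-- Rebuild the clustered index on the partition scheme so rows are
-- physically grouped by date; the delete job then only contends for
-- locks in the oldest partitions, away from the update job's rows.
CREATE CLUSTERED INDEX CIX_EventLog_EventDate
    ON dbo.EventLog (EventDate)
    ON psEventDate (EventDate);
```

If the workloads split cleanly like this, TRUNCATE TABLE ... WITH (PARTITIONS (...)) or partition switching can remove old data with far less locking than row-by-row deletes.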