• I assume you're implementing some kind of work queue table. As you reach high concurrency levels, you may experience blocking because the shared locks taken on read can block update locks. The other problem you may experience is redundant reads that don't necessarily cause a block.

    I implemented a work queue table to get the 'next' job on a server with 48 cores. Even though the work queue table rarely had more than 1k rows, I would occasionally experience blocking. The other problem I ran into was redundant updates, in which multiple threads would receive the same record. They would play nice and not block each other, but it added overhead and updated some records multiple times. In one of my concurrency tests, I saw 460 updates to 300 rows (one of the rows was updated six times).

    My strategy for a high concurrency work table:

    1. Keep the work table as focused as possible. No joins, no reporting, only the columns needed to schedule the job. It should be entirely focused on providing the next job. Everything else should be moved to other tables.

    2. Separate the inventory of work from the active work. Work that is not a candidate for the NEXT job should not be in the work table. That would include work that has already been finished. Every additional record will add some overhead.

    3. Use only inserts and deletes for the active work table. Don't bother to update the record; just get rid of it. Use another table to capture the start time.

    An example of my solution:

    tActiveWork

    - BatchID
    - WorkSequence
    - WorkID

    tWorkList

    - BatchID
    - WorkSequence
    - WorkID (Clustered Index Key)
    - StartedAt
    - FinishedAt
    - Result
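
    For concreteness, here's a minimal DDL sketch of those two tables. The column types and NULLability are my assumptions; the design above only names the columns, states that the active work table is a heap, and puts the clustered index key on WorkID in tWorkList.

    CREATE TABLE dbo.tActiveWork
    (
        BatchID      INT NOT NULL,  -- types assumed; adjust to your job metadata
        WorkSequence INT NOT NULL,
        WorkID       INT NOT NULL
        -- deliberately a heap: no clustered index to maintain on insert
    );

    CREATE TABLE dbo.tWorkList
    (
        BatchID      INT NOT NULL,
        WorkSequence INT NOT NULL,
        WorkID       INT NOT NULL,
        StartedAt    DATETIME NULL,
        FinishedAt   DATETIME NULL,
        Result       VARCHAR(100) NULL,  -- assumed type for the job outcome
        CONSTRAINT PK_tWorkList PRIMARY KEY CLUSTERED (WorkID)
    );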

    1. Schedule the batch of work

    INSERT INTO tActiveWork (BatchID, WorkSequence, WorkID)
    SELECT BatchID, WorkSequence, WorkID
    FROM tWorkList
    WHERE BatchID = @BatchID;

    2. Determine the NEXT work

    WITH CTE AS (
        SELECT TOP (1) *
        FROM tActiveWork WITH (READPAST)
        WHERE BatchID = @BatchID
        ORDER BY WorkSequence
    )
    DELETE FROM CTE
    OUTPUT DELETED.WorkID INTO @Something;

    3. Mark the NEXT work as started

    UPDATE tWorkList SET StartedAt = GETDATE() WHERE WorkID = @WorkID;
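
    Putting steps 2 and 3 together, a sketch of one worker's "claim the next job" call could look like this (here @NextWork is a hypothetical table variable standing in for the @Something above):

    DECLARE @NextWork TABLE (WorkID INT);
    DECLARE @WorkID INT;

    -- Claim and remove the next job in one atomic statement
    WITH CTE AS (
        SELECT TOP (1) *
        FROM tActiveWork WITH (READPAST)
        WHERE BatchID = @BatchID
        ORDER BY WorkSequence
    )
    DELETE FROM CTE
    OUTPUT DELETED.WorkID INTO @NextWork;

    SELECT @WorkID = WorkID FROM @NextWork;  -- stays NULL when the queue is empty

    IF @WorkID IS NOT NULL
        UPDATE tWorkList SET StartedAt = GETDATE() WHERE WorkID = @WorkID;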

    ---

    #1 My work table is a heap because it rarely has more than 1k rows. Inserts into a heap won't cause blocking because existing rows are not affected and there is no index to maintain.

    #2 Use READPAST to skip already-locked rows. By using the OUTPUT from the delete, you're guaranteed the record will not get selected twice. Only the thread that deleted it will output the ID.

    #3 The update to the WorkList table can use the PK for a laser strike update of a single row.

    Also, investigate using a memory-optimized (In-Memory OLTP) work table if you have the option. That will likely improve performance even more and eliminate some of these concerns entirely.
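
    As a rough sketch, the active work table could become a memory-optimized table like the one below. This assumes SQL Server 2014+ with a MEMORY_OPTIMIZED_DATA filegroup already configured; note that memory-optimized tables cannot be heaps (they require at least one index) and do not accept the READPAST hint, so the dequeue would rely on In-Memory OLTP's optimistic row versioning instead.

    -- Sketch only; requires a MEMORY_OPTIMIZED_DATA filegroup.
    -- SCHEMA_ONLY durability suits a transient queue: rows vanish on restart.
    CREATE TABLE dbo.tActiveWork_InMem
    (
        BatchID      INT NOT NULL,
        WorkSequence INT NOT NULL,
        WorkID       INT NOT NULL,
        CONSTRAINT PK_tActiveWork_InMem PRIMARY KEY NONCLUSTERED
            HASH (WorkID) WITH (BUCKET_COUNT = 1024)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);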

    I used this general model for my work table and it has been going strong now for over a year without a problem, running thousands of jobs each day.

    Good luck.

    Wes
    (A solid design is always preferable to a creative workaround)