isolation level solution to unique index violation

  • Inherited some code ... and an issue....

    table: SessionData (sessionid varchar(50), somedata text)

    unique index on sessionid

    pseudo code proc: SessionUpdate
    (
        @sessionid varchar(50), @somedata text
    )

    declare @sessionexists bit
    set @sessionexists = 0

    update sessiondata
    set
        somedata = @somedata,
        @sessionexists = 1
    where
        sessionid = @sessionid

    insert into sessiondata (sessionid, somedata)
    select @sessionid, @somedata
    where
        @sessionexists = 0

    So essentially the proc updates an existing session data record OR adds a new one, using the local variable @sessionexists to control the logic - an UPSERT procedure, as it's known.

    All was going well in testing, and then it went live...

    Error: 2601, Severity: 14, State: 1

    Cannot insert duplicate key row in object 'dbo.SessionData' with unique index 'idx_sessiondata_sessionid'.

    The statement has been terminated.

    Using profiler it became clear that the calling application was calling the procedure in very quick succession, i.e.

    RPC Starting : exec dbo.SessionUpdate 1, 'ahugeamountofdata'

    RPC Starting : exec dbo.SessionUpdate 1, 'ahugeamountofdata'

    RPC Completed : exec dbo.SessionUpdate 1, 'ahugeamountofdata'

    Exception : Error: 2601, Severity: 14, State: 1

    User Error Message : Cannot insert duplicate key row in object 'dbo.SessionData' with unique index 'idx_sessiondata_sessionid'.

    User Error Message The statement has been terminated

    RPC Completed : exec dbo.SessionUpdate 1, 'ahugeamountofdata'

    where 'ahugeamountofdata' was actually a large chunk of data, and 'very quick succession' means no difference in the time stamps in Profiler.

    So it seemed obvious, the multi-threaded app was calling the proc to update the session data, but before the initial insert could finish, a second call had started to update the session data. Talking to the devs revealed that the app was merely reading some data out of memory between the two calls, so it's no surprise that that was much faster than the write to the database, over the network, of a huge amount of BLOB data!!

    Under normal conditions and the READ COMMITTED isolation level, the second call has no record to update and hence follows the logic to perform the insert - BANG!! the unique index is violated.

    The solution proposed was to wrap the code in a transaction and use locking hints to pump the isolation level to SERIALIZABLE, i.e.

    pseudo code proc: SessionUpdate
    (
        @sessionid varchar(50), @somedata text
    )

    declare @sessionexists bit
    set @sessionexists = 0

    BEGIN TRAN

    update sessiondata WITH (SERIALIZABLE)
    set
        somedata = @somedata,
        @sessionexists = 1
    where
        sessionid = @sessionid

    insert into sessiondata WITH (SERIALIZABLE) (sessionid, somedata)
    select @sessionid, @somedata
    where
        @sessionexists = 0

    COMMIT

    My only concern now is the effect that this has on the concurrency and ultimately the performance of the app.

    Any suggestions/comments?

    Kev
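
    A common variant of this fix, sketched below under the same table and procedure names, also adds UPDLOCK so the first statement takes an exclusive key-range lock up front; a second concurrent call then blocks on the UPDATE instead of both callers reaching the INSERT. @@ROWCOUNT replaces the @sessionexists flag here. This is a sketch under those assumptions, not code tested against the live workload:

        CREATE PROCEDURE dbo.SessionUpdate
            @sessionid varchar(50),
            @somedata  text
        AS
        BEGIN TRAN

        -- UPDLOCK + SERIALIZABLE: exclusive key-range lock even when no row
        -- matches, so concurrent callers for the same sessionid serialize here
        UPDATE dbo.SessionData WITH (UPDLOCK, SERIALIZABLE)
        SET somedata = @somedata
        WHERE sessionid = @sessionid

        IF @@ROWCOUNT = 0
            INSERT INTO dbo.SessionData (sessionid, somedata)
            VALUES (@sessionid, @somedata)

        COMMIT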

  • Simply reverse the order: first insert, then update:

    begin try
        insert ...
        set @inserted = 1;
    end try
    begin catch
        set @inserted = 0;
    end catch

    if @inserted = 0
        update ...;
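
    Filled out in T-SQL (assuming SQL Server 2005 or later for TRY/CATCH; the check on ERROR_NUMBER() is my addition, so that only duplicate-key errors fall through to the update), it might look like:

        BEGIN TRY
            INSERT INTO dbo.SessionData (sessionid, somedata)
            VALUES (@sessionid, @somedata)
        END TRY
        BEGIN CATCH
            -- 2601 = duplicate key in a unique index, 2627 = unique
            -- constraint violation; anything else should not be swallowed
            IF ERROR_NUMBER() NOT IN (2601, 2627)
            BEGIN
                DECLARE @msg nvarchar(2048)
                SET @msg = ERROR_MESSAGE()
                RAISERROR(@msg, 16, 1)
                RETURN
            END

            -- the row already exists, so update it instead
            UPDATE dbo.SessionData
            SET somedata = @somedata
            WHERE sessionid = @sessionid
        END CATCH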

  • As serializing will block inserts for other sessionIds, I would be inclined to use an application lock.

    (If the majority of your upserts are updates, I suspect relying on a CATCH block could be slow. You may want to test this.)

    I would suggest you try something like:

    DECLARE @Resource nvarchar(255)
    SET @Resource = N'LockSessionId' + @sessionId

    BEGIN TRANSACTION

        EXEC sp_getapplock @Resource, N'Exclusive'

        UPDATE SessionData
        SET someData = @someData
        WHERE sessionId = @sessionId

        IF @@ROWCOUNT = 0
            INSERT INTO SessionData(sessionId, someData)
            VALUES(@sessionId, @someData)

    COMMIT
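
    sp_getapplock returns 0 or 1 on success and a negative value on timeout or error, so a more defensive sketch would check the result (the timeout value below is an assumption, not from the thread):

        DECLARE @Resource nvarchar(255), @rc int
        SET @Resource = N'LockSessionId' + @sessionId

        BEGIN TRANSACTION

            EXEC @rc = sp_getapplock @Resource    = @Resource,
                                     @LockMode    = N'Exclusive',
                                     @LockOwner   = N'Transaction',
                                     @LockTimeout = 5000  -- ms, assumed value

            -- negative return codes mean timeout, cancel, deadlock or error
            IF @rc < 0
            BEGIN
                ROLLBACK
                RAISERROR(N'Could not acquire application lock', 16, 1)
                RETURN
            END

            UPDATE SessionData
            SET someData = @someData
            WHERE sessionId = @sessionId

            IF @@ROWCOUNT = 0
                INSERT INTO SessionData(sessionId, someData)
                VALUES(@sessionId, @someData)

        COMMIT  -- a Transaction-owned lock is released automatically here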

  • Thanks for the responses.

    Robert - not sure how this guarantees to avoid the violation.

    Ken - I hadn't thought about application locks - sounds good! I just need to do a little research - are there any pros/cons with app locks?

    I'll give both of these a try in testing and report back!

    Kev

  • It IMHO does.

    First you try the insert; if you get a violation, you catch the exception and do the update instead.

  • Interesting.....

    Robert - it did seem to work OK, and I wasn't getting the error in SSMS in my test environment. Strangely though, the error was still raised when the code was traced using Profiler! I guess that is expected - the error still happens; it's just that the code handles it.

    Not sure how comfortable I am raising hundreds of 'hidden' errors - apart from that it's a good solution!

    I need to keep testing with Ken's ideas - I'll have to compare the performance.

    Kev

  • Ken's solution should work just fine, probably even better as it's unlikely to get an exception.

    However, sometimes it's not desirable for a stored procedure to handle the transaction itself.

    I'm interested in performance comparison in a heavy multithreaded test.

  • Robert - App Locks can be made to apply to the current session and not just the current transaction. It is then up to you to explicitly release the lock with sp_releaseapplock.

    Kev - I would also be interested in the performance of a heavy multi-threaded test. Have you had a chance to do this yet?

  • Ken McKelvey (2/4/2009):

        Robert - App Locks can be made to apply to the current session and not just the current transaction. It is then up to you to explicitly release the lock with sp_releaseapplock.

        Kev - I would also be interested in the performance of a heavy multi-threaded test. Have you had a chance to do this yet?

    Why do you need to explicitly release the locks?

    No sorry - haven't got round to testing this yet - snowed under with a multitude of other things! As usual.

    Kev

  • You only need to explicitly release the lock if you take it as a Session lock; otherwise it will last for the rest of the duration of the connection. (A Transaction-owned lock is released automatically when the transaction ends.)

    Thanks for the feedback.
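
    For completeness, a Session-owned lock looks like this (the resource name is illustrative); it is not tied to any transaction, so it persists until released explicitly or the connection closes:

        -- Session-owned lock: must be released with sp_releaseapplock,
        -- or it lasts until the connection closes
        EXEC sp_getapplock @Resource  = N'LockSessionId1',
                           @LockMode  = N'Exclusive',
                           @LockOwner = N'Session'

        -- ... work spanning one or more transactions ...

        EXEC sp_releaseapplock @Resource  = N'LockSessionId1',
                               @LockOwner = N'Session'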
