Mutexes in SQL

  • Comments posted to this topic are about the item Mutexes in SQL

  • Nice article, thank you!

    I have been reading up on sp_getapplock and wondering if I need to use that for a solution I'm working on. But I'm wondering why I should bother - what's the worst case scenario if I DON'T manage it myself? SQL Server (this project uses 2005) manages concurrency, right? In my case I will have a table with an ID that I need to grab and increment, so obviously it's critical not to have data inconsistency there - but can't I rely on SQL Server to do that by default? I'll have everything in one database, and there will be at most 1000 IDs requested per day.

    You said:

    Using a mutex in Transaction mode doesn't seem particularly useful, as we already have record locks that release at the end of a transaction...

    -- are you referring to what SQL Server does by default, or did you implement something else in addition to manage that?

    Part of my struggle stems from the fact that I don't know how to test that worst-case scenario (which, for me, would be that two people grab the same ID number somehow, and/or that it doesn't get incremented properly) to know whether I need to handle it differently. I can build unit tests, sure, but how do I know whether I successfully got two processes to call at the same instant, and how do I prove that the case is handled properly?

    Thanks -

    Meg

  • Hiya Meg77,

    You don't need to use sp_getapplock; the built-in transactions work fine for the example you mentioned.

    The following code is just one way to skin this cat...

    declare @i_uid int

    begin transaction

    SELECT @i_uid = i_nextUID
    FROM t_table with (updlock)

    update t_table set i_nextUID = i_nextUID + 1

    commit transaction

    --Use @i_uid here.

    Remember that changes to variables, unlike changes to real tables, are not rolled back: if the transaction fails or is deliberately rolled back, variables keep the values they were assigned inside it.

    A select statement that uses the update lock hint behaves differently inside a transaction than it does in a normal session.

    Inside the transaction, the select "with (updlock)" locks the table for updates for the entire transaction. As this is an update lock, we can then update the table within the same transaction.

    Any other select statement that tries to access the same table with an update lock will simply wait until the lock is released when the transaction ends (or it may time out if your update code takes too long).

    You can try it out with the following example.

    Create the table:

    create table t_table (i_nextUID int)
    insert into t_table values (1)

    Update code:

    declare @i_uid int

    begin transaction

    SELECT @i_uid = i_nextUID
    FROM t_table with (updlock)

    waitfor delay '00:00:30'

    update t_table set i_nextUID = i_nextUID + 1

    commit transaction

    print @i_uid

    Test code:

    SELECT i_nextUID, getdate()
    FROM t_table with (nolock)
    go

    SELECT i_nextUID, getdate()
    FROM t_table
    go

    SELECT i_nextUID, getdate()
    FROM t_table with (updlock)

    Create the table then open two connections, and paste the update code in one and the test code in the other.

    Run the update code and then whilst it is executing run the test code in the other window.

    You should find that the test code returns three rowsets. The first rowset should return immediately; the second may also do so (depending on your connection's lock settings); the third will always wait until the example code releases the lock when it closes the transaction. Note: the test code is not in a transaction.

    The waitfor delay is simply used to slow the query down after the lock has been granted but before the update has happened, effectively forcing the situation that you want to avoid: two people reading the next number from the table.

    The three rowsets demonstrate that you can still read from the table using either nolock or a normal select statement, but that you cannot if you use the update lock whilst an existing update lock is being held.

    For confirmation you can try replacing the test code (Not the update code) with the following:

    declare @i_uid int

    begin transaction

    SELECT @i_uid = i_nextUID
    FROM t_table with (updlock)

    update t_table set i_nextUID = i_nextUID + 1

    commit transaction

    print @i_uid

    You should now have two connections with nearly identical SQL.

    Run the update code as before and then execute the new test code. You should find that even though this new code has no waitfor delay, it still waits for the update code to finish executing, and both return unique numbers.

    All done without using extra stored procedures or other "locking" tables, just using the same code that you probably would have anyway, but now inside a transaction with an update lock. Just make sure that the code inside the transaction is just enough to get the job done; don't put more in than is needed, otherwise you will slow down the generation of uids.

    As a final test (to prove we need to provide the table locking hint to SQL Server), remove the "with (updlock)" from both the update connection and the test connection and run both as before. You will find that the test connection returns immediately, that both return the same uid, and that both increment the table, so a uid is skipped.

    Of course an identity (autonumber) field with SCOPE_IDENTITY also works very well...
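    For example, something along these lines (just a sketch; the t_keys table name is made up, and SCOPE_IDENTITY() returns the identity value generated within the current scope, so two concurrent inserts never see each other's number):

    create table t_keys (i_uid int identity(1,1), dt_issued datetime)

    declare @i_uid int

    insert into t_keys (dt_issued) values (getdate())

    -- The identity value generated by this insert, in this scope only.
    set @i_uid = SCOPE_IDENTITY()

    print @i_uid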

    SgtWilko

    Thank you! This is a very interesting concept. We have been using various "tricks", such as creating an empty file at the end of one process and then checking for the existence of that file (in an SSIS loop) at the beginning of the next process.

    I would like a bit more explanation and maybe a usage example. The way I read it, I need to do something like this (over-simplifying):

    Process 1:

    EXEC spGetAppLock @p_Resource = 'MyResource'
    -- Do some work here
    EXEC spRelAppLock @p_Resource = 'MyResource'

    Process 2:

    EXEC spGetAppLock @p_Resource = 'MyResource'
    -- Do some work here
    EXEC spRelAppLock @p_Resource = 'MyResource'

    This way if Process 1 began first, Process 2 would wait until 'MyResource' is released. Am I understanding it correctly?
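    In other words, something like this minimal sketch, using the built-in sp_getapplock / sp_releaseapplock directly (assuming the article's spGetAppLock is just a thin wrapper around them; the timeout value is illustrative):

    declare @rc int

    -- Session-owned lock: held until released or until the connection closes.
    exec @rc = sp_getapplock @Resource = 'MyResource', @LockMode = 'Exclusive',
        @LockOwner = 'Session', @LockTimeout = 60000

    -- 0 = granted immediately, 1 = granted after waiting; negative values mean it failed.
    if @rc >= 0
    begin
        -- Do some work here

        exec sp_releaseapplock @Resource = 'MyResource', @LockOwner = 'Session'
    end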

    Also, how do you use it when Processes run on multiple servers? How do I access 'MyResource' on the first server from the second server?

    I only use this kind of thing where we are synchronising processes that rely on one thing completing before another can start. I wouldn't use it to enforce transactional integrity and concurrency within a database, as (as you say) SQL does a much better job of this than we can!

  • @mishaluba

    That's exactly the scenario I was looking to solve (along with the problem of cleaning up the artifacts we create) when I went looking for this solution. When something goes wrong and a process doesn't terminate cleanly, we have to resort to manual overrides and failed processes.

    Your understanding seems sound, but I would also say that just because Process 2 starts doesn't mean that Process 1 completed successfully. Only that it isn't running! You would need to check somewhere else to ensure that it had completed successfully.

    This is based on multiple clients accessing a single db server. If you wanted to synchronise between two database servers you would have to elect one of them to be the co-ordinator and access the stored procedure using linked servers.

    The scenario we use this technique for is based on a batch job (Batch A) of stored procedures that run every 30 minutes. While that is running we don't want another batch job (Batch B which runs twice a day) to start. There are 2 versions of Batch B to handle different countries and currencies and we are happy to run them side by side, so they get Shared application locks. But while Batch A is running they may not start, so Batch A takes an Exclusive lock. This also has the added benefit that if Batch B runs late, then Batch A cannot start.

    This technique is not about synchronising data within a transaction. SQL does that fine. It's about co-ordinating business processes.
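    Roughly, in sketch form (illustrative only; built-in procedures, session-owned locks, and a made-up resource name):

    -- Batch A takes an Exclusive lock: it waits for any running Batch B and blocks new ones.
    exec sp_getapplock @Resource = 'BatchWindow', @LockMode = 'Exclusive', @LockOwner = 'Session'
    -- ... run Batch A ...
    exec sp_releaseapplock @Resource = 'BatchWindow', @LockOwner = 'Session'

    -- Each Batch B variant takes a Shared lock: the two variants can overlap each other, but not Batch A.
    exec sp_getapplock @Resource = 'BatchWindow', @LockMode = 'Shared', @LockOwner = 'Session'
    -- ... run Batch B ...
    exec sp_releaseapplock @Resource = 'BatchWindow', @LockOwner = 'Session'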

  • Thank you for the quick reply.

    Expanding on the multiple/linked server scenario a little bit. So if I access a stored procedure on Server 1 via a linked server defined on Server 2, something like this:

    EXEC Server1.myDB.dbo.spGetAppLock @p_Resource = 'MyResource'

    I wonder whether the resource will be checked on Server 1 or Server 2? What if I have 'MyResource' defined on both servers?

    The resource should be checked in the environment where the stored procedure is executed: Server1, as that is explicitly where you have called the stored proc.

    If you have multiple db servers to co-ordinate, it may be worth setting up a co-ordination instance of SQL somewhere. That could be a tiny installation on a VM whose only purpose is to be a co-ordination point.

    You could make a very clean implementation with rules about what to do when you fail to get the co-ordination required.

    The easiest solution is to set up a central SQL Server just to run jobs centrally.

    I know people that set up processes like this where some job runs and then another runs later that depends on the previous job, etc. It boggles my mind why they do this.

  • @sgtwilko -

    Wow, thanks, that was great, I stepped through the samples - makes perfect sense now. updlock was new to me, and very interesting.

    Lots more to learn obviously, but I'm a big jump ahead of where I was yesterday!

    Meg

  • @Alen

    The problem I am trying to resolve isn't so much that Job B must wait for Job A and is dependent on the results of Job A. That would be a simple precedence.

    We have two jobs that should not be run at the same time. Running Job B while Job A is running can lead to strange behaviour and incorrect results, so we want them to be aware of each other. Neither is particularly time critical, so we can afford to delay one and allow the other to finish.

  • alen teplitsky

    I agree with you. We have a centralized SQL Server that runs only SQL jobs; it runs all our Import/Export/ODS ETL jobs. We leverage SSIS packages, which get called in sequence as a single job or as separate jobs. The packages point to different servers through their Connection Managers, but they ultimately get called from a single centralized SQL Server.

    Thanks,

    Amol Naik

    As this mechanism is transaction or session scoped, you cannot use it across two processes.

    Just a hint (it was missing in the article): you can check these locks with

    select APPLOCK_MODE('dbo', 'MyMutex', 'Session')
    select APPLOCK_TEST('dbo', 'MyMutex', 'Exclusive', 'Session')

    (using your parameters of course)

    kr/Werner

  • @Werner

    Not sure what you mean by

    As this mechanism is transaction or session scoped, you cannot use it across two processes.

    You can create a Lock / Semaphore / Mutex with one process that is visible to another. That was the point of the article!

    I hope I've misunderstood your post!

  • @Toby

    You are totally right, it is visible in different processes.

    kr/Werner
