Proper Design Pattern for Multi-User Handling (Concurrency)

  • In our project, the following are the situations where concurrency handling is needed.

    1) Data is retrieved into a grid by multiple users and saved with the help of a stored procedure. Only the first user should be able to update; the others should get the message "Data has been changed by another user. Please retrieve again."

    2) There is a Process button on the screen. Based on some input selection for the stored procedure, multiple users press the same button, and the procedure does some calculation and updates different tables. The requirement is that only the first user should be able to process; the others should get the message "Another user is running the same process, try later." or, once the first user has finished, "Process already done for the selected filter, cannot process."

    3) Also, while performing the above update and process, how can we restrict other users from reading and writing only those columns that are being updated until the update/process is complete (to avoid dirty reads)?

    4) We don't want to change the database-level isolation level; it stays at the default, READ COMMITTED.

    Please help clarify in detail, and if you have relevant reading material, please suggest it.

  • pravin.sawant99 (9/18/2010)


    1) Data is retrieved into a grid by multiple users and saved with the help of a stored procedure. Only the first user should be able to update; the others should get the message "Data has been changed by another user. Please retrieve again."

    This might be most easily handled by using a computed checksum column, or by having the update/insert statements set a checksum calculated against the entire row (minus the checksum column). Basically it's a hash for the row that you can check to see whether there was an edit since the last pickup. You could also use a timestamp (rowversion) column and compare that to make sure nothing has changed; see the sketch below.
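
    A minimal sketch of the rowversion approach, assuming a hypothetical table dbo.GridData and procedure dbo.UpdateGridData (table, column, and procedure names are made up for illustration):

        -- Hypothetical table with a rowversion column for optimistic concurrency.
        CREATE TABLE dbo.GridData
        (
            Id     int            NOT NULL PRIMARY KEY,
            Amount decimal(18, 2) NOT NULL,
            RowVer rowversion     NOT NULL   -- changes automatically on every update
        );
        GO

        -- The update succeeds only if the row still has the rowversion the client read.
        CREATE PROCEDURE dbo.UpdateGridData
            @Id     int,
            @Amount decimal(18, 2),
            @RowVer binary(8)                -- value captured when the grid was populated
        AS
        BEGIN
            SET NOCOUNT ON;

            UPDATE dbo.GridData
            SET    Amount = @Amount
            WHERE  Id = @Id
              AND  RowVer = @RowVer;         -- no match means someone changed the row

            IF @@ROWCOUNT = 0
                RAISERROR('Data has been changed by another user. Please retrieve again.', 16, 1);
        END;

    The CHECKSUM/HASHBYTES variant works the same way: store the hash of the row when you read it, and make the UPDATE conditional on the stored hash still matching.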

    2) There is a Process button on the screen. Based on some input selection for the stored procedure, multiple users press the same button, and the procedure does some calculation and updates different tables. The requirement is that only the first user should be able to process; the others should get the message "Another user is running the same process, try later." or, once the first user has finished, "Process already done for the selected filter, cannot process."

    The best tool for this is sp_getapplock. It takes a lock on an arbitrary named resource (not a real database object) that you manually (well, procedurally) acquire and release to stop concurrent process activity; see the sketch below.
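
    A minimal sketch of the sp_getapplock pattern, assuming a hypothetical procedure dbo.RunProcess; the resource name, lock timeout, and message text are illustrative:

        CREATE PROCEDURE dbo.RunProcess
            @Filter varchar(50)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @rc int;

            BEGIN TRAN;

            -- Try to take an exclusive application lock named after the filter.
            -- @LockTimeout = 0 means "fail immediately if someone else holds it".
            EXEC @rc = sp_getapplock
                 @Resource    = @Filter,
                 @LockMode    = 'Exclusive',
                 @LockOwner   = 'Transaction',
                 @LockTimeout = 0;

            IF @rc < 0
            BEGIN
                ROLLBACK TRAN;
                RAISERROR('Another user is running the same process. Try later.', 16, 1);
                RETURN;
            END;

            -- ... calculation and updates to the various tables go here ...
            -- A separate "already processed" flag table could be checked here to return
            -- the "Process already done for the selected filter" message.

            COMMIT TRAN;   -- the application lock is released when the transaction ends
        END;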

    3) Also, while performing the above update and process, how can we restrict other users from reading and writing only those columns that are being updated until the update/process is complete (to avoid dirty reads)?

    4) We don't want to change the database-level isolation level; it stays at the default, READ COMMITTED.

    Please help clarify in detail, and if you have relevant reading material, please suggest it.

    4 seems to relate to 3, but I'm not entirely sure what you're asking for there.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • pravin.sawant99 (9/18/2010)


    In our project, the following are the situations where concurrency handling is needed.

    May I ask a basic question?

    Could you please share with us what the expected level of concurrency is?

    - How many total users?

    - How many users actually hitting the database at the same time?

    - How many users actually hitting the same row at the same time?

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • Hi,

    The expected level of concurrency

    - How many total users = 25

    - How many users actually hitting the database at the same time = 10

    - How many users actually hitting the same row at the same time = 10

    One thing is that we want to avoid the consequences of concurrency. The above figures may or may not arise, but as a precaution we need to handle them. There are some processes that, if executed concurrently by different users, can read/write the same row values, which is undesirable. So I want to know the various techniques, and also which situations each technique is suitable for.

  • pravin.sawant99 (9/19/2010)


    Hi,

    The expected level of concurrency

    - How many total users = 25

    - How many users actually hitting the database at the same time = 10

    - How many users actually hitting the same row at the same time = 10

    One thing is that we want to avoid the consequences of concurrency. The above figures may or may not arise, but as a precaution we need to handle them. There are some processes that, if executed concurrently by different users, can read/write the same row values, which is undesirable. So I want to know the various techniques, and also which situations each technique is suitable for.

    Please do not take it the wrong way, but if ten out of a total user community of twenty-five are hitting the same row of the same table at the same time, something might be wrong with the overall design.

    Even if the stated concurrency is accurate, for an OLTP system this is a very low level of concurrency, second only to none; the magic of RDBMS locking will handle it for you. There is no need to over-engineer the project: do not add manual locking, make your transactions as small as possible, and let SQL Server handle it (see the sketch below).
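
    A minimal sketch of what "small transactions" looks like in practice, reusing the hypothetical dbo.GridData table sketched earlier; the idea is to apply the change in one short statement rather than holding a transaction open across user think time:

        -- Read without an explicit transaction while the user edits the screen,
        -- then apply the change in a single short transaction.
        BEGIN TRAN;

        UPDATE dbo.GridData
        SET    Amount = Amount + 10
        WHERE  Id = 42;        -- the row lock is held only for this statement

        COMMIT TRAN;           -- default READ COMMITTED handles the rest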

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • It's all right, but I want to know the techniques to handle concurrency, i.e. lost updates.

  • The checksum and timestamp techniques described above can help you prevent concurrency issues.

    If you are going to allow the updates and just want to be able to review them for "lost updates", then you should look into auditing strategies.

    A couple of which would be:

    1. Save every record and always get the latest record for display/processing.

    2. Process normally and write the old record to an audit table.

    There are many strategies for auditing that can be considered depending on your exact needs.
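
    A minimal sketch of strategy 2 (write the old record to an audit table), again using the hypothetical dbo.GridData table sketched earlier; the OUTPUT clause captures the pre-update values as part of the same statement:

        -- Hypothetical audit table for the columns we care about.
        CREATE TABLE dbo.GridDataAudit
        (
            Id        int            NOT NULL,
            OldAmount decimal(18, 2) NOT NULL,
            ChangedBy sysname        NOT NULL DEFAULT SUSER_SNAME(),
            ChangedAt datetime2      NOT NULL DEFAULT SYSUTCDATETIME()
        );
        GO

        -- Update and capture the old values in one statement.
        UPDATE dbo.GridData
        SET    Amount = 125.00
        OUTPUT deleted.Id, deleted.Amount    -- "deleted" holds the pre-update row
        INTO   dbo.GridDataAudit (Id, OldAmount)
        WHERE  Id = 42;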

  • pravin.sawant99 (9/20/2010)


    It's all right, but I want to know the techniques to handle concurrency, i.e. lost updates.

    In regard to concurrency: it is handled by locking/latching, which are well-documented features of any and all RDBMSs.

    In regard to "lost updates": transactions are either committed or rolled back, and in both cases the RDBMS lets you know what happened (see the sketch below).
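
    A minimal sketch of that commit/rollback behaviour using the standard TRY/CATCH pattern (the statements inside are illustrative):

        BEGIN TRY
            BEGIN TRAN;

            UPDATE dbo.GridData SET Amount = 99.00 WHERE Id = 42;
            -- ... other statements in the same unit of work ...

            COMMIT TRAN;                -- every change in the transaction is kept
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRAN;          -- none of the changes are kept

            DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);     -- re-raise so the caller knows what happened
        END CATCH;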

    Hope this helps.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
