
Use of Identity Property to Resolve Concurrency Issues
Posted Thursday, February 24, 2011 2:12 PM
Forum Newbie

Hi,

Can I ask what database you used that had locking hints in 1991?
I thought these were not added until SQL-92 in 1992, but 20 years is a long time.


I have used SQL Server since version 2005, but I have used database systems since the '90s, namely Oracle and InterBase.
InterBase had some unbelievably powerful transaction management, and Inprise ruined that powerful system.
Since 2001 I have also used the open-source Firebird database (a fork of InterBase) and PostgreSQL. I studied the Firebird source files for 8 years and learned a lot about how a database engine works inside.

And I believe that InterBase/Firebird introduced, over 20 years ago, the mechanism that the Yukon engine introduced as read-committed snapshot.

Using different database systems gives me different ideas about how to solve problems.

And there is a huge difference between optimistic and pessimistic database systems:
  • PostgreSQL, Oracle, InterBase and Firebird are optimistic (readers don't block writers and writers don't block readers), but they have their own special anomalies, such as read/write skew.

  • DB2 and SQL Server are pessimistic database servers (a lot of locks).


  • So locks (or lock hints) aren't always the way to get the work done, only in some very special cases. And database systems are getting smarter and smarter: if you feed them correct information (constraints etc.), the need for manual hints is reduced.

    I have given some sessions in Belgium to database developers about the key differences between those systems.

    Danny
    Post #1069304
    Posted Friday, February 25, 2011 10:33 AM
    Right there with Babe

    I think your solution is really good. Once again the DBA has to solve issues caused by developers who do not realise that database calls are expensive and that locking is not so difficult to implement on the application side. It would be better if the developer made one call to the database to get 100 keys at a time, stored them in memory in his application, and used the keys from there. Then, instead of 100 calls to the database (one for each record), he makes only one call for each 100 records.
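Using the tbl_kvp table discussed earlier in this thread, reserving a block of keys in a single round trip might be sketched as follows (illustrative only; @key, the block size, and the variable names are assumptions, not code from the article):

```sql
-- Sketch: reserve a block of 100 keys in one round trip.
-- tbl_kvp / column_key / column_value are the names used elsewhere
-- in this thread; @key and @BlockSize are illustrative.
DECLARE @key       VARCHAR(50) = 'InvoiceNumber',
        @BlockSize BIGINT      = 100,
        @RangeEnd  BIGINT;

UPDATE tbl_kvp
SET    @RangeEnd = column_value = column_value + @BlockSize
WHERE  column_key = @key;

-- The application now owns keys (@RangeEnd - @BlockSize + 1)
-- through @RangeEnd and can hand them out from memory.
```

The single UPDATE both increments the stored value and captures the new high-water mark in one statement, so the reservation is atomic without any explicit locking.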
    Post #1069733
    Posted Friday, February 25, 2011 10:44 AM
    Forum Newbie

    I am afraid this approach is even worse. What if, for each of those keys, the developer has to insert some records into a related table, then use those keys to insert related records into a third table? And what if those keys are actually auto-increment values and can't be obtained beforehand? And what if the values have to be generated taking into account the other users working at the same time (such as sequential invoice numbers)?

    Locking must be implemented on database side, period.


    Microsoft MVP 2006-2010
    Post #1069738
    Posted Saturday, February 26, 2011 7:31 AM


    Old Hand

    Ajit, this is an interesting approach to the problem, but most likely it is just adding complication via over-engineering when a much simpler solution appears to be available.

    1) The most important and easiest thing to do is, as "dvdwouwe 72544" mentioned, create a PK on the "key" column. There is really no reason to have this table without a PK defined. This will also ensure that a duplicate "key" doesn't accidentally get added to the table.

    2) You don't need to worry about the SELECT of the new value getting something updated from another session if you use the OUTPUT clause in the UPDATE as follows:

    UPDATE tbl_kvp
    SET column_value += 1
    OUTPUT inserted.column_value
    WHERE column_key = @key

    This avoids the need for additional locking as "tech.dbmeyer" had suggested.

    3) You can always tell the system to only lock a single row (although given the singular nature of the UPDATE statement I am not sure why it would ever do anything else, but still) by using the WITH (ROWLOCK) hint in the UPDATE as "SanDroid" mentioned.
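Combining points 2 and 3 above, the statement might look like this (a sketch based on the code in point 2; note that ROWLOCK is only a hint, and the engine remains free to escalate):

```sql
-- Points 2 and 3 combined: OUTPUT returns the new value atomically,
-- and the ROWLOCK hint asks the engine to lock only the single row.
UPDATE tbl_kvp WITH (ROWLOCK)
SET    column_value += 1
OUTPUT inserted.column_value
WHERE  column_key = @key;
```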


    Minor note: I find it interesting / amusing that whoever created this table (a developer I think you said) prefixed the column names with "column_".





    SQL# - http://www.SQLsharp.com/
    Post #1070025
    Posted Saturday, February 26, 2011 7:32 PM
    SSC Rookie

    Ironically, a new version of SQL Server will have an Oracle-style SEQUENCE capability, so hopefully there will be no need to "re-invent the wheel".
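For reference, the syntax that shipped in SQL Server 2012 (code-named "Denali", the new version in question at the time of this post) looks like this. Note that sequence values are allocated outside the caller's transaction, so a rollback leaves a gap rather than blocking other sessions:

```sql
-- SEQUENCE syntax introduced in SQL Server 2012; the object
-- name dbo.InvoiceNumber is illustrative.
CREATE SEQUENCE dbo.InvoiceNumber
    AS BIGINT
    START WITH 1
    INCREMENT BY 1
    CACHE 100;

-- Allocation does not hold locks to the end of the caller's
-- transaction, so concurrent sessions do not block each other:
SELECT NEXT VALUE FOR dbo.InvoiceNumber;
```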
    Post #1070097
    Posted Saturday, February 26, 2011 8:42 PM
    SSC Rookie

    Solomon Rutzky (2/26/2011)
    ...You don't need to worry about the SELECT of the new value getting something updated from another session if you use the OUTPUT clause in the UPDATE... This avoids the need for additional locking as "tech.dbmeyer" had suggested.


    Thanks for your feedback Solomon.

    Even with the primary key in place, if two user sessions attempt to increment the same key as part of their own transactions, one session will end up blocking the other.

    Ideally, incrementing the KVP value should be done outside transactions, but application developers seldom adhere to this principle. As I've said in previous posts, the main goals for me were to eliminate this blocking and to make the incrementing process independent of transactions.

    By turning the single horizontal record for a given key into a vertical structure, transactions can increment the KVP and obtain the appropriate values without blocking each other.


    Ajit Ananthram
    Blog - http://ajitananthram.wordpress.com
    Post #1070100
    Posted Saturday, February 26, 2011 8:54 PM


    SSChampion

    Solomon Rutzky (2/26/2011)
    ...You don't need to worry about the SELECT of the new value getting something updated from another session if you use the OUTPUT clause in the UPDATE as follows:

    UPDATE tbl_kvp
    SET column_value += 1
    OUTPUT inserted.column_value
    WHERE column_key = @key

    This avoids the need for additional locking as "tech.dbmeyer" had suggested.

    Yes, but the problem is that the key-allocation procedure might be called within a transaction. The exclusive row lock needed by the statement above will be held to the end of that transaction. An application that starts a transaction, allocates a key from this procedure, and then sits around waiting for something, will block anyone else needing the next key from the same sequence.
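The scenario just described can be sketched like this (using the tbl_kvp example from earlier in the thread; each session is a separate connection):

```sql
-- Session 1: allocates a key inside a transaction, then goes idle.
BEGIN TRANSACTION;

UPDATE tbl_kvp
SET    column_value += 1
OUTPUT inserted.column_value
WHERE  column_key = @key;
-- No COMMIT yet: the exclusive row lock is still held.

-- Session 2 (separate connection): the same UPDATE now waits on
-- Session 1's row lock until Session 1 commits or rolls back.
```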

    I wrote about this issue at length (including robust ways to pre-allocate a range of keys without blocking) in:
    http://sqlblog.com/blogs/paul_white/archive/2010/10/19/sequence-tables.aspx




    Paul White
    SQL Server MVP
    SQLblog.com
    @SQL_Kiwi
    Post #1070103
    Posted Saturday, February 26, 2011 9:40 PM


    SSChampion

    It is possible to avoid the dummy column, the ever-growing number of rows in the hidden tables, and the locking issues by using an approach based on this example code (the example uses just a single range, but it is easy to extend the idea):

USE tempdb;
GO
-- Drop the table if it exists from a previous run
IF OBJECT_ID(N'dbo.SneakyIdentity', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.SneakyIdentity;
END;
GO
-- Create the hidden table used to allocate IDs
CREATE TABLE dbo.SneakyIdentity (row_id BIGINT IDENTITY NOT NULL);
GO

-- The allocation routine
CREATE PROCEDURE dbo.Allocate
(
    @ID BIGINT OUTPUT
)
AS
BEGIN
    -- No DONE_IN_PROC messages
    SET NOCOUNT ON;

    -- Almost all errors will abort the batch
    SET XACT_ABORT ON;

    -- Table variable used with the OUTPUT clause
    -- to safely read the identity value assigned.
    -- Issues (bugs) have arisen in the past with
    -- @@IDENTITY, SCOPE_IDENTITY, and IDENT_CURRENT.
    DECLARE @Output TABLE
        (row_id BIGINT NOT NULL);

    -- Start error handler
    BEGIN TRY
        -- Might not be the top-level transaction!
        BEGIN TRANSACTION;

        -- Save point
        SAVE TRANSACTION AllocateID;

        -- Allocate an ID using the 'hidden' table
        INSERT dbo.SneakyIdentity
        OUTPUT inserted.row_id
        INTO @Output (row_id)
        DEFAULT VALUES;

        -- Rollback to the save point
        ROLLBACK TRANSACTION AllocateID;

        -- Correct the transaction count
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Remember the error message
        DECLARE @ErrMsg NVARCHAR(2048);
        SET @ErrMsg = ERROR_MESSAGE();

        -- Uncommittable transaction => unconditional rollback
        -- (this is the only option)
        IF XACT_STATE() = -1 ROLLBACK TRANSACTION;

        -- If our transaction is still alive, roll back to
        -- the save point, and adjust @@TRANCOUNT
        IF XACT_STATE() = 1
        BEGIN
            -- Rollback to the save point
            ROLLBACK TRANSACTION AllocateID;

            -- Correct the transaction count
            COMMIT TRANSACTION;
        END;

        -- Custom error handling goes here
        RAISERROR('Error in allocation: %s', 16, 1, @ErrMsg);

        -- Error
        RETURN 999;
    END CATCH;

    -- Capture assigned value
    -- (Table variables are not affected by transaction rollbacks)
    SET @ID =
    (
        SELECT TOP (1)
            row_id
        FROM @Output
    );

    -- Success
    RETURN 0;
END;
GO

-- === TEST ===

-- Variables
DECLARE @ID BIGINT,
        @RC INTEGER;

-- Simulate an enclosing user transaction
BEGIN TRANSACTION;

-- Allocate the next id
EXECUTE @RC = dbo.Allocate @ID OUTPUT;

-- Show locks
SELECT *
FROM sys.dm_tran_locks AS TL
WHERE TL.request_session_id = @@SPID;

-- Show procedure return code and allocated ID
SELECT return_code = @RC,
       allocated_id = @ID;

-- Now roll back or commit
ROLLBACK;

-- Uncomment this when finished
-- DROP TABLE dbo.SneakyIdentity;
-- DROP PROCEDURE dbo.Allocate;





    Paul White
    SQL Server MVP
    SQLblog.com
    @SQL_Kiwi
    Post #1070107
    Posted Monday, February 28, 2011 2:48 PM
    Ten Centuries

    That's a fairly nice solution to the problem, however... (there's always a but...)

    The biggest problem I see with your approach is that when someone wants to add a new key-value pair they also need to add a new table. I dislike having to alter the design of the database just because someone wants to add a data value.

    Therefore: wouldn't it be easier to use a single nextnum table instead of one for each key? Your get-sequence-number code would work the same and you wouldn't need to use dynamic SQL.
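A sketch of that suggestion (the table and column names here are illustrative, not from the article): a single table with one row per sequence, where adding a sequence is just an INSERT rather than a schema change:

```sql
-- Hypothetical single nextnum table, one row per sequence:
CREATE TABLE dbo.nextnum
(
    seq_name   VARCHAR(50) NOT NULL PRIMARY KEY,
    next_value BIGINT      NOT NULL
);

-- Adding a new sequence is an INSERT, not a schema change:
INSERT dbo.nextnum (seq_name, next_value)
VALUES ('InvoiceNumber', 0);

-- Allocation uses the same UPDATE ... OUTPUT pattern, no dynamic SQL:
UPDATE dbo.nextnum
SET    next_value += 1
OUTPUT inserted.next_value
WHERE  seq_name = 'InvoiceNumber';
```

This keeps the schema stable, at the cost of concentrating contention on one shared table.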
    --
    JimFive
    Post #1070856
    Posted Monday, February 28, 2011 2:57 PM
    SSCrazy

    James Goodwin (2/28/2011)
    ...Wouldn't it be easier to use a single nextnum table instead of one for each key? Your get sequence number code would work the same and you wouldn't need to use dynamic SQL.


    That is what they started with, but they had locking and blocking issues while updating that one nextnum table to obtain the next key value.

    I suspect that they would only add a new key when they add a new table, so instead of just adding one table you add two. Is the solution perfect? No, but it fixed the problem they were encountering without having to do a major redesign of the database or the application.
    Post #1070862