Constraint to prevent UPDATE after row is committed

  • Hi,

    I'm wanting to prevent updates to a certain table after the row has been committed to the database. Updates that occur within the same transaction as the INSERT should be permitted. I have a solution for this, but I'm wanting to know if anyone knows of a better way to implement the constraint.

    Background:

    I have a table that stores blob data and I'm wanting to ensure that this data is immutable - once the row is committed, it should not be possible to update the blob data. The application was written to stream large blob data values in chunks using the UPDATE statement with the .WRITE clause. I want to allow these updates to succeed when they occur within the transaction that inserted the row. Once the row is committed any further updates should fail.

    Sample DDL (Simplified):

    CREATE TABLE dbo.FileData(
        ID UNIQUEIDENTIFIER PRIMARY KEY,
        DATA VARBINARY(MAX) NOT NULL
    );
    GO
    CREATE PROC dbo.FileData_Upd(
        @ID UNIQUEIDENTIFIER,
        @AppendData VARBINARY(MAX)
    )
    AS
    UPDATE dbo.FileData
    SET DATA.WRITE(@AppendData,NULL,0)
    WHERE ID = @ID;

    Test Code:

    SET XACT_ABORT ON;
    BEGIN TRAN;
    DECLARE @ID UNIQUEIDENTIFIER;
    SET @ID = NEWID();
    -- INSERT "A"
    INSERT INTO dbo.FileData(ID,Data)
    VALUES(@ID,0x41);
    -- Append "B"
    EXEC dbo.FileData_Upd @ID=@ID,@AppendData=0x42;
    -- Append "C"
    EXEC dbo.FileData_Upd @ID=@ID,@AppendData=0x43;
    -- Transaction committed at this point. Additional updates should be rejected.
    COMMIT;
    -- Select should return ABC
    SELECT Data,CAST(Data AS VARCHAR(MAX))
    FROM dbo.FileData
    WHERE ID = @ID;
    -- This update should fail
    EXEC dbo.FileData_Upd @ID=@ID,@AppendData=0x44;
    -- This should not return ABCD
    SELECT Data,CAST(Data AS VARCHAR(MAX))
    FROM dbo.FileData
    WHERE ID = @ID;

    This sample code will return:

    Data        VarData
    ----------  -------
    0x414243    ABC

    Data        VarData
    ----------  -------
    0x41424344  ABCD

    The last update should not succeed though - it should only return the first resultset with an error message.

    This is my solution to the problem:

    ALTER TABLE dbo.FileData
    ADD TransactionID BIGINT NULL;
    GO
    UPDATE dbo.FileData
    SET TransactionID = -1;
    GO
    ALTER TABLE dbo.FileData
    ALTER COLUMN TransactionID BIGINT NOT NULL;
    GO
    CREATE TRIGGER FileData_Insert
    ON dbo.FileData
    INSTEAD OF INSERT
    AS
    /* Trigger to add TransactionID value to FileData inserts */
    DECLARE @TransactionID BIGINT;
    SELECT @TransactionID = transaction_id
    FROM sys.dm_tran_current_transaction; -- **Requires VIEW SERVER STATE permissions**
    INSERT INTO dbo.FileData(ID,Data,TransactionID)
    SELECT ID,Data,@TransactionID
    FROM INSERTED;
    GO
    CREATE TRIGGER FileData_Update
    ON dbo.FileData
    AFTER UPDATE
    AS
    /* Trigger to prevent updates to the FileData table once the inserting transaction is committed */
    DECLARE @TransactionID BIGINT;
    SELECT @TransactionID = transaction_id
    FROM sys.dm_tran_current_transaction; -- **Requires VIEW SERVER STATE permissions**
    -- The update should not succeed if the TransactionID of the updated rows
    -- does not match the current transaction id.
    IF EXISTS(SELECT 1
              FROM DELETED
              WHERE TransactionID <> @TransactionID)
    BEGIN
        ROLLBACK TRAN;
        RAISERROR('Updates are not permitted once the transaction is committed.',11,1);
    END

    Cleanup code:

    DROP TABLE dbo.FileData;
    DROP PROC dbo.FileData_Upd;

    Issues with my solution:

    * The transaction_id value from sys.dm_tran_current_transaction gets recycled after a reboot.
      - This shouldn't be a big problem, as the chance of an update outside the original transaction having the same transaction id is small.
    * The sys.dm_tran_current_transaction DMV requires VIEW SERVER STATE permission.
      - I can either grant this to the application login or use a certificate-signed stored procedure to grant the required access.
    * It requires schema changes to the FileData table plus two triggers.
      - This is OK if there isn't a simpler, more elegant solution available.
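    To avoid granting VIEW SERVER STATE directly to the application login, the certificate-signing option mentioned above could look roughly like this. All names and passwords here are illustrative, and the master-side steps are only sketched in comments:

    ```sql
    -- In the user database: create a certificate and sign the module
    -- that reads sys.dm_tran_current_transaction (proc or trigger).
    CREATE CERTIFICATE TranIdSigningCert
        ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd'
        WITH SUBJECT = 'VIEW SERVER STATE for transaction-id lookup';
    ADD SIGNATURE TO dbo.FileData_Upd
        BY CERTIFICATE TranIdSigningCert WITH PASSWORD = 'Str0ng!Passw0rd';

    -- In master: recreate the same certificate from a backup of its
    -- public key, create a login from it, and grant the permission:
    -- CREATE LOGIN TranIdSigningLogin FROM CERTIFICATE TranIdSigningCert;
    -- GRANT VIEW SERVER STATE TO TranIdSigningLogin;
    ```

    The signed module then carries the server-level permission itself, so the application login needs no extra grants.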

    I'm just wondering if anyone has any different ideas about how to solve this problem?

    Many Thanks,

    David

    DBA Dash - Free, open source monitoring for SQL Server

  • "The last update should not succeed" - why?

    In your example there is no update that would not succeed.

    And there is no command that inserts "ABCD".

    First INSERT sets "A", then UPDATE adds "B", another UPDATE adds "C", and there is no update that adds "D".

    _____________________________________________________
    Microsoft Certified Master: SQL Server 2008
    XDetails Addin - for SQL Developers
    blog.sqlxdetails.com - Transaction log myths
  • Surely it's easier just to DENY UPDATE to all users? Obviously db_owners and sysadmins will still be able to do updates - that may or may not be an acceptable risk to you.

    John

    Edit - you would also need to rewrite any code where an UPDATE is done after an INSERT in the same transaction.
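    For reference, the permission-based approach is a one-liner (assuming an application user named AppUser, which is illustrative):

    ```sql
    -- Block direct updates from the application login.
    -- Members of db_owner and sysadmin bypass this, as noted above.
    DENY UPDATE ON dbo.FileData TO AppUser;
    ```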

  • Vedran Kesegic (3/14/2012)


    "The last update should not succeed" - why?

    In your example there is no update that would not succeed.

    And there is no command that inserts "ABCD".

    First INSERT sets "A", then UPDATE adds "B", another UPDATE adds "C", and there is no update that adds "D".

    Hi,

    This is the update that appends "D":

    -- This update should fail

    EXEC FileData_Upd @ID=@ID,@AppendData=0x44;

    Sorry, I could have made that a bit clearer.

    The B and C appends should work because they are done within the same transaction as the INSERT. The transaction is committed after C is appended, so at this point no further updates should be possible. The append "D" is run after the row has been committed to the database, so this should fail.

    If I deny permission to update the table, this would prevent the append "B" and append "C" commands from running, which I want to allow. The app only has permissions to update the table via the SP, but the SP is required to allow the append "B" and append "C" commands. For the same reason I can't use a simple trigger to rollback updates as it would prevent the append "B" and append "C" from running.

    I know this is a bit of a strange requirement - the example is just to illustrate what I'm trying to do. Here's some additional background info:

    The application allows users to upload files which get stored in a SQL Server database. To allow the application to cope with larger files, they are not inserted into the database using a single INSERT statement. The files are streamed into the database using an initial INSERT followed by a number of UPDATE statements to append the rest of the file data in chunks. This is done as a single transaction. I'm wanting to make sure that a file can never be updated once it's been committed to the database.

    The constraint is mostly to pick up any bugs in the application that would allow an update to an existing file in our DEV/QA environment and to give us a guarantee that the file data is immutable in our live environment.

    Thanks,

    David


  • Let's assume the requirement to store the file in a table is really required (instead of just the insert of a link to the file system...).

    If it needs to be done in chunks I would rather prefer to store each chunk in a separate row with an additional column to identify the order of each section.

    I would then create a view that would concatenate the strings.

    Then I would allow only inserts and deny any updates.
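    A rough sketch of that chunked layout (names are illustrative). Note that reassembling varbinary chunks in a view is nontrivial on SQL 2005 - there is no built-in binary concatenation aggregate, so the view would need a CLR aggregate or the reassembly would happen app-side:

    ```sql
    -- One row per chunk, ordered by ChunkNo within each file.
    CREATE TABLE dbo.FileDataChunk(
        FileID  UNIQUEIDENTIFIER NOT NULL,
        ChunkNo INT              NOT NULL,
        Data    VARBINARY(MAX)   NOT NULL,
        CONSTRAINT PK_FileDataChunk PRIMARY KEY (FileID, ChunkNo)
    );
    GO
    -- Inserts stay allowed; updates are denied outright,
    -- since each chunk is written exactly once.
    DENY UPDATE ON dbo.FileDataChunk TO AppUser;  -- AppUser is illustrative
    ```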



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • LutzM (3/14/2012)


    Let's assume the requirement to store the file in a table is really required (instead of just the insert of a link to the file system...).

    If it needs to be done in chunks I would rather prefer to store each chunk in a separate row with an additional column to identify the order of each section.

    I would then create a view that would concatenate the strings.

    Then I would allow only inserts and deny any updates.

    The files are stored in the DB as the application makes use of full-text indexing. We could look at using something like Lucene to index our files, allowing us to move the files outside the DB. This is a major architectural change though - not something we could consider at this point in time (plus, there are some benefits to storing files in the DB with integrated full-text search).

    The file data needs to be stored together for full-text indexing to work so it's not possible to store the chunks in separate rows.

    Thanks,

    David


  • After your explanation I understand that you want to allow updates that are within the same transaction as INSERT command.

    You could read unique transaction id and in INSERT procedure store it together with the data.

    Then in each UPDATE procedure also get the current transaction id; the update will succeed only if the transaction id matches the initial insert's transaction id already stored there

    (a simple WHERE condition).
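    A minimal sketch of that WHERE-condition approach, assuming the FileData table has already gained a TransactionID column populated at INSERT time (column and error text are illustrative):

    ```sql
    ALTER PROC dbo.FileData_Upd(
        @ID UNIQUEIDENTIFIER,
        @AppendData VARBINARY(MAX)
    )
    AS
    DECLARE @CurrentTranID BIGINT;
    SELECT @CurrentTranID = transaction_id
    FROM sys.dm_tran_current_transaction;  -- needs VIEW SERVER STATE

    -- Only touches the row if we are still inside the inserting transaction.
    UPDATE dbo.FileData
    SET DATA.WRITE(@AppendData, NULL, 0)
    WHERE ID = @ID
      AND TransactionID = @CurrentTranID;

    IF @@ROWCOUNT = 0
        RAISERROR('Row is committed; updates are not permitted.', 16, 1);
    ```

    Unlike the trigger version, this only guards updates that go through the procedure, so direct table access would still need to be revoked.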

    The only thing problematic here is to get current transaction id. You can do it like this:

    select transaction_id from sys.dm_tran_current_transaction;

    Complete example is here:

    http://www.sqlservercentral.com/Forums/Topic610168-338-2.aspx

    Or this: SELECT transaction_id FROM sys.dm_exec_requests

    But that requires special permissions.

    Personally, I did it with a DBCC command; I think this is the one (I can check tomorrow to confirm):

    http://msdn.microsoft.com/en-us/library/ms182792.aspx

    -- Create the temporary table to accept the results.
    CREATE TABLE #OpenTranStatus (
        ActiveTransaction varchar(25),
        Details sql_variant
    );
    -- Execute the command, putting the results in the table.
    INSERT INTO #OpenTranStatus
    EXEC ('DBCC OPENTRAN WITH TABLERESULTS, NO_INFOMSGS');
    -- Display the results.
    SELECT * FROM #OpenTranStatus;
    GO

    Even without a real transaction id, you could return SCOPE_IDENTITY() for the inserted row and pass the same id to each update.

    As Lutz suggested, it is much more efficient to store large binary data in the file system using FILESTREAM.

    It is included in the transaction, it is included in the backup, and it can also be indexed by a full-text index.

    If you decide to implement it like that (when you have more time), here is an example of a full-text index on a FILESTREAM column:

    http://arcanecode.com/2009/05/28/full-text-searching-a-filestream-varbinary-max-column/

    ------

    UPDATE: I see you already used the DMV to get the transaction id. You could try the DBCC version and not use a trigger at all. All checks go through the insert and update procedures, and direct write access to the table should be revoked.

  • wiseman82 (3/13/2012)


    Hi,

    I'm wanting to prevent updates to a certain table after the row has been committed to the database.

    Keep it simple, then. Put an "INSTEAD OF" trigger on the final table to reject updates. Insert into a Temp Table and do all of your updates there. When done, insert from the temp table into the final table. It'll even prevent ad hoc manual updates using SSMS.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    I would go with the instead of trigger if this wasn't a frequent activity. Else I would create a new column called IsModifiable and update it to 0 just before I commit. Any subsequent updates will naturally need to perform a check on this flag in order to decide.

    Jayanth Kurup

  • Jayanth_Kurup (3/15/2012)


    I would go with the instead of trigger if this wasn't a frequent activity. Else I would create a new column called IsModifiable and update it to 0 just before I commit. Any subsequent updates will naturally need to perform a check on this flag in order to decide.

    I would do the same, but instead of making all subsequent updates check the flag (as you don't know who will write such updates, or when), I would have an UPDATE trigger which would check this "flag" column in DELETED and reject the update if it's set. That would definitely prevent all subsequent updates...
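    The trigger version of the flag check could look something like this, assuming an IsModifiable BIT column that the application sets to 0 just before committing (all names are illustrative):

    ```sql
    CREATE TRIGGER dbo.FileData_BlockCommitted
    ON dbo.FileData
    AFTER UPDATE
    AS
    -- DELETED holds the pre-update row images: if any row already had
    -- its flag cleared, the whole update is rejected.
    IF EXISTS (SELECT 1 FROM DELETED WHERE IsModifiable = 0)
    BEGIN
        ROLLBACK TRAN;
        RAISERROR('Row is no longer modifiable.', 16, 1);
    END
    ```

    Because the check lives in the trigger, it also catches ad hoc updates that bypass the stored procedures.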

    _____________________________________________
    "The only true wisdom is in knowing you know nothing"
    "O skol'ko nam otkrytiy chudnyh prevnosit microsofta duh!":-D
    (So many miracle inventions provided by MS to us...)

    How to post your question to get the best and quick help

  • Jayanth_Kurup (3/15/2012)


    I would go with the instead of trigger *if this wasn't a frequent activity*. Else I would create a new column called IsModifiable and update it to 0 just before I commit. Any subsequent updates will naturally need to perform a check on this flag in order to decide.

    Why??? Do you think that triggers make things slow? Well written trigger code will work just as fast (maybe faster in this case) than a "flag checker" and will do something that flag checkers in code won't do... prevent ALL updates even from casual sources.

    --Jeff Moden



  • Jeff Moden (3/15/2012)


    Jayanth_Kurup (3/15/2012)


    I would go with the instead of trigger *if this wasn't a frequent activity*. Else I would create a new column called IsModifiable and update it to 0 just before I commit. Any subsequent updates will naturally need to perform a check on this flag in order to decide.

    Why??? Do you think that triggers make things slow? Well written trigger code will work just as fast (maybe faster in this case) than a "flag checker" and will do something that flag checkers in code won't do... prevent ALL updates even from casual sources.

    I would not think that an INSTEAD OF trigger will make things slow, but it may not be possible to create one for the logic required by the OP (it's hard to judge; with what's shown in the example it can be done). My understanding is that the record is inserted and then the "blob" is appended in the same transaction, and all other appends (updates) should be rejected. Implementing it with an "instead of" trigger would require that a single insert statement is issued. If that's possible then fine! But you would not need an "instead of" trigger in this case; you could just deny updates on the table to everyone, or have a standard UPDATE trigger which raises an error.

    The above implementation can be faster than a "flag checker" and it does not require a redundant column. However, the "flag checker" method can also prevent ALL subsequent updates (e.g. from casual sources) if the flag check is implemented in an update trigger...


  • Hi,

    Thanks for the suggestions.

    Vedran - That's more or less the same solution that I came up with, but it's good to see that someone else was thinking along the same lines. I opted to do the check in the trigger as the constraint would apply even if someone tried to update the data without using the SP. The problem with using something like SCOPE_IDENTITY() is that the solution would need to be driven from the application and it doesn't really prevent updates. DBCC OPENTRAN requires either sysadmin or db_owner permissions and I wouldn't want to grant either to the application login. We are not able to implement FILESTREAM as we are still using SQL 2005. Also, FILESTREAM has a major limitation that it can't be used with DB Mirroring.

    Jeff - I had thought of using something similar to what you suggested - Using a staging table to stream the file data then transfer it to the live table once the row is committed. The problem is how to trigger the transfer from the staging table to the live table when the transaction is committed. I'm guessing the application would need to be changed to provide an indication that it's finished streaming the data - I was hoping to implement the constraint in SQL without any app changes. As far as I know there isn't an easy way to transfer the data from the staging area to live once the transaction is committed (It could be done with an agent job or maybe the service broker). The other issue is the extra logging that would be involved with this solution.

    Jayanth/Eugene - The issue with using a flag is that there is no way to set the flag automatically after the transaction is committed. The application would need to provide an indication that it's finished streaming the data. The flag can't be set in a trigger as the flag will be seen by the "B" and "C" appends causing them to fail as well.

    The flag-based approach might be an option if I'm able to relax the restriction that no updates are possible after the row has been committed to the DB. In that case the flag could be set using the SQL Server Agent. Ideally I want to prevent updates as soon as the transaction has been committed - my initial solution does that, but I thought there might be a better solution that I was missing.

    Thanks again,

    David


  • PS

    The transactionid solution is really just a different version of the flag based solution. It seems a bit crazy that you need VIEW SERVER STATE permissions to get your own transaction id value. I'm guessing there is a reason for this though and it's not a common requirement to want to know your own transaction id value.


  • wiseman82 (3/15/2012)


    Jeff - I had thought of using something similar to what you suggested - Using a staging table to stream the file data then transfer it to the live table once the row is committed. The problem is how to trigger the transfer from the staging table to the live table when the transaction is committed. I'm guessing the application would need to be changed to provide an indication that it's finished streaming the data - I was hoping to implement the constraint in SQL without any app changes. As far as I know there isn't an easy way to transfer the data from the staging area to live once the transaction is committed (It could be done with an agent job or maybe the service broker). The other issue is the extra logging that would be involved with this solution.

    Like I said... "Temp Table and Instead Of Trigger". 🙂

    Here's a setup very similar to what you had. Notice the new trigger. Also notice the proc has been updated to update the Temp Table instead of the final table.

    --===== For test purposes, do this in a nice, safe place.
         -- This particular step is NOT a part of the solution.
    USE tempdb;
    GO
    --===== Create the table like you did before.
    CREATE TABLE dbo.FileData
    (
        ID   UNIQUEIDENTIFIER PRIMARY KEY,
        Data VARBINARY(MAX) NOT NULL
    );
    GO
    --===== Create a trigger to prevent any and all updates to the table.
    CREATE TRIGGER dbo.PreventUpdateFileData
    ON dbo.FileData
    INSTEAD OF UPDATE
    AS
        RAISERROR('WARNING: You cannot update the FileData table!',0,1);
        RETURN;
    GO
    --===== Create the proc like you did before except point it to the temp table.
    CREATE PROC dbo.FileData_Upd
    (
        @ID UNIQUEIDENTIFIER,
        @AppendData VARBINARY(MAX)
    )
    AS
    UPDATE #FileData
    SET DATA.WRITE(@AppendData,NULL,0)
    WHERE ID = @ID;
    GO

    Now the test. Since the proc no longer updates the final table, it won't fail here because the Temp Table still exists. It just won't do anything to the final table. If someone tries to use the proc in another session without first building the Temp Table, then it will fail.

    Also notice that I added code to try to update the final table directly. It won't allow it. Check the messages tab for proof.

    --Test Code:
    SET XACT_ABORT ON;
    --===== Conditionally drop the temp table to make reruns in SSMS easier.
    IF OBJECT_ID('tempdb..#FileData','U') IS NOT NULL
        DROP TABLE #FileData;
    CREATE TABLE #FileData
    (
        ID   UNIQUEIDENTIFIER PRIMARY KEY,
        DATA VARBINARY(MAX) NOT NULL
    );
    DECLARE @ID UNIQUEIDENTIFIER;
    SET @ID = NEWID();
    BEGIN TRANSACTION;
    --===== INSERT "A"
    INSERT INTO #FileData(ID,Data)
    VALUES(@ID,0x41);
    --===== Append "B"
    EXEC dbo.FileData_Upd @ID=@ID,@AppendData=0x42;
    --===== Append "C"
    EXEC dbo.FileData_Upd @ID=@ID,@AppendData=0x43;
    --===== Insert what we collected in the temp table to the final table.
    INSERT INTO dbo.FileData (ID,Data)
    SELECT ID,Data
    FROM #FileData;
    --===== Transaction committed at this point. Additional updates should be rejected.
    COMMIT;
    -- Select should return ABC
    SELECT ID,Data,CAST(Data AS VARCHAR(MAX))
    FROM dbo.FileData
    WHERE ID = @ID;
    -- This update should fail.
    -- This won't fail because it's not updating the final table anymore.
    EXEC dbo.FileData_Upd @ID=@ID,@AppendData=0x44;
    -- This will "fail" because it tries to update the final table.
    -- Check the Messages tab for proof!
    PRINT '===== Trying to update table directly here. =====';
    UPDATE dbo.FileData
    SET DATA.WRITE(0x44,NULL,0)
    WHERE ID = @ID;
    -- This should not return ABCD
    -- and, it doesn't
    SELECT ID,Data,CAST(Data AS VARCHAR(MAX))
    FROM dbo.FileData
    WHERE ID = @ID;
    --===== Drop the temp table to simulate another session.
    IF OBJECT_ID('tempdb..#FileData','U') IS NOT NULL
        DROP TABLE #FileData;
    --===== Try using the proc again. It will fail this time because there's no temp table.
    EXEC dbo.FileData_Upd @ID=@ID,@AppendData=0x44;

    --Jeff Moden


