Transactions 2

  • Might be slightly off-topic, but I found this write-up useful in explaining the benefits -- and issues -- related to XACT_ABORT and TRY...CATCH: http://www.simple-talk.com/sql/t-sql-programming/defensive-error-handling/

    For example, there's a nice demonstration of how XACT_STATE() is needed to catch uncommittable transactions, following something as simple as a failed type conversion from string to integer [SELECT CAST('abc' AS INTEGER)].
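
    A minimal sketch of the kind of check the article demonstrates (my own illustration, not the article's code): with SET XACT_ABORT ON, the failed CAST dooms the open transaction, so the CATCH block has to ask XACT_STATE() what it is still allowed to do.

        SET XACT_ABORT ON;   -- with this on, the conversion error dooms the transaction
        BEGIN TRY
            BEGIN TRANSACTION;
                SELECT CAST('abc' AS INTEGER);   -- error 245: conversion failed
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- XACT_STATE(): -1 = doomed (only ROLLBACK allowed), 1 = still committable, 0 = no transaction
            IF XACT_STATE() = -1
                ROLLBACK TRANSACTION;
            ELSE IF XACT_STATE() = 1
                COMMIT TRANSACTION;   -- or roll back, depending on what the error means here
            PRINT ERROR_MESSAGE();
        END CATCH;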

    Rich

  • In which case, encapsulating the transaction in a BEGIN TRY/BEGIN CATCH, with a ROLLBACK TRANSACTION in the CATCH block, does do what a lot of people here apparently expected.

    I shall have to investigate XACT_ABORT; that's interesting.
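
    For reference, a rough sketch of that TRY/CATCH pattern, using a hypothetical #t table rather than the actual QoD code; the CATCH rolls everything back, so none of the inserts survive the duplicate-key error.

        CREATE TABLE #t (id INT PRIMARY KEY);

        BEGIN TRY
            BEGIN TRANSACTION;
                INSERT INTO #t (id) VALUES (1);
                INSERT INTO #t (id) VALUES (1);   -- duplicate key: control jumps to CATCH
                INSERT INTO #t (id) VALUES (2);   -- never executed
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;             -- undo the insert that did succeed
        END CATCH;

        SELECT COUNT(*) FROM #t;                  -- 0
        DROP TABLE #t;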

  • Oops, I see several other people have already proposed what I just wrote. I didn't realise this thread was so long, apologies!

  • Thanks for the question.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • roger.plowman (5/23/2012)


    Having the default setting be OFF basically breaks transactions and is a *bug* in SQL Server.

    Explicit transactions should be exactly that, explicit.

    That's the whole point, of course: they are explicit. Commitment is done explicitly; it has to be coded. Rollback, unless the developer/DBA specifies otherwise or something really nasty happens, is also done explicitly and has to be coded. If you want trivial errors to roll back automatically, rather than signalling an error inside the transaction context so that your error handling code can see the modifications made so far by the transaction, you have the option of making that happen (SET XACT_ABORT ON); people who are used to managing errors, doing error correction where possible and error containment where correction is impossible, will usually want the default behaviour.

    This stupidity is exactly the kind of thing that will cause nearly untraceable bugs in code. What were they thinking?

    Perhaps they were thinking about enabling the developer to do proper error management? The only stupidity here is calling an extremely useful feature (one that enables far more robust things to be developed) a "bug".

    Tom

  • Interesting question... Thanks one more time, Ron!

  • Good question. I learned something, too.

    But I also agree with the following...

    L' Eomot Inversé (5/23/2012)


    Unfortunately the explanation is all about implicit transactions, while the question is about an explicit transaction. So the explanation is not exactly helpful!

    Rob Schripsema
    Propack, Inc.

  • good question - cheers

  • L' Eomot Inversé (5/23/2012)


    roger.plowman (5/23/2012)


    Having the default setting be OFF basically breaks transactions and is a *bug* in SQL Server.

    Explicit transactions should be exactly that, explicit.

    That's the whole point, of course: they are explicit. Commitment is done explicitly; it has to be coded. Rollback, unless the developer/DBA specifies otherwise or something really nasty happens, is also done explicitly and has to be coded. If you want trivial errors to roll back automatically, rather than signalling an error inside the transaction context so that your error handling code can see the modifications made so far by the transaction, you have the option of making that happen (SET XACT_ABORT ON); people who are used to managing errors, doing error correction where possible and error containment where correction is impossible, will usually want the default behaviour.

    This stupidity is exactly the kind of thing that will cause nearly untraceable bugs in code. What were they thinking?

    Perhaps they were thinking about enabling the developer to do proper error management? The only stupidity here is calling an extremely useful feature (one that enables far more robust things to be developed) a "bug".

    You assume that the T-SQL code could correctly handle the error. For many applications this is not a good assumption, since the database will not have information the application does, information usually necessary to correct the issue (which is usually some kind of user error).

    If an application hands SQL Server a transaction to process, the expected behavior is that the transaction will either succeed or fail. There should be no "massaging" of the data to "correct" problems (i.e. no error handling should be expected on the SQL Server side).

    That's not the database engine's job. The engine should only be concerned with whether the data matches all the constraints or not. Anything beyond that is the application's responsibility--which includes error handling. And when I say "application" I mean an external application completely separate from SQL Server.

    After all, in the example today the database allowed a *partial* transaction to commit (by default!). That violates the definition of a transaction--and thus is a bug, not a feature.

  • roger.plowman (5/23/2012)


    And when I say "application" I mean an external application completely separate from SQL Server.

    If you want the application to be outside SQL Server, you should avoid all the programmability features T-SQL offers and use standalone queries only. You should even control the transactions from the application.

    Conversely, if you use T-SQL to code logic beyond the basic database building block (query), you are bringing part of the application to SQL Server.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Funny...I knew the answer from having attended Denise's class on transactions at the San Diego SQL Saturday last fall.

  • roger.plowman (5/23/2012)

    After all, in the example today the database allowed a *partial* transaction to commit (by default!). That violates the definition of a transaction--and thus is a bug, not a feature.

    No, it didn't allow a partial transaction to commit by default. The batch contained explicit BEGIN/COMMIT TRANSACTION statements, so SQL Server did exactly what the developer told it to do. SQL Server did not decide to commit the transaction automatically; it executed the COMMIT statement in the code, so the code written by the developer did it. That's not a bug in SQL Server, it's a bug in the developer's code, assuming this is not what they wanted to happen.

    You seem to be getting this whole QoD confused with implicit transactions and autocommit. This is not an implicit, autocommit scenario; the code contains an explicit BEGIN/COMMIT with no error handling, and so it did exactly what the documentation says it will do. The error was at the statement level, so you need error handling if you want to roll back the entire transaction. If you are silly enough to use an explicit COMMIT in code without error handling, don't blame SQL Server for the result. A poor workman blames his tools. You need to understand how the tools work and use them accordingly. There is no bug, just poorly written SQL in this case, which is the point the QoD was trying to make.
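
    A stand-in for that scenario, using a hypothetical #t table rather than the actual QoD code, shows the documented behaviour: the duplicate-key error kills only its own statement, the batch carries on, and the developer's COMMIT commits whatever did go in.

        CREATE TABLE #t (id INT PRIMARY KEY);

        BEGIN TRANSACTION;
            INSERT INTO #t (id) VALUES (1);
            INSERT INTO #t (id) VALUES (1);   -- error 2627: this statement fails...
            INSERT INTO #t (id) VALUES (2);   -- ...but the batch keeps running
        COMMIT TRANSACTION;                   -- the explicit COMMIT commits the two good rows

        SELECT COUNT(*) FROM #t;              -- 2
        DROP TABLE #t;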

    Cheers

    Roddy

  • roger.plowman (5/23/2012)


    L' Eomot Inversé (5/23/2012)


    roger.plowman (5/23/2012)


    Having the default setting be OFF basically breaks transactions and is a *bug* in SQL Server.

    Explicit transactions should be exactly that, explicit.

    That's the whole point, of course: they are explicit. Commitment is done explicitly; it has to be coded. Rollback, unless the developer/DBA specifies otherwise or something really nasty happens, is also done explicitly and has to be coded. If you want trivial errors to roll back automatically, rather than signalling an error inside the transaction context so that your error handling code can see the modifications made so far by the transaction, you have the option of making that happen (SET XACT_ABORT ON); people who are used to managing errors, doing error correction where possible and error containment where correction is impossible, will usually want the default behaviour.

    This stupidity is exactly the kind of thing that will cause nearly untraceable bugs in code. What were they thinking?

    Perhaps they were thinking about enabling the developer to do proper error management? The only stupidity here is calling an extremely useful feature (one that enables far more robust things to be developed) a "bug".

    You assume that the T-SQL code could correctly handle the error.

    No, I make no such assumption. I look at the code in the question and see a number of things:-

    (i) the T-SQL code in the database, not some higher layer, determines the transaction boundaries.

    (ii) the T-SQL code for the insertion is not a single query, as it could easily have been; it is multiple statements. Presumably the reason for that is that the semantics of multiple inserts is required, rather than the semantics of single-row inserts.

    (iii) From the two points above, it's quite clear that the T-SQL component has responsibility for some of the application logic.

    That's deduction from very clear evidence, not any sort of assumption.

    For many applications this is not a good assumption, since the database will not have information the application does, information usually necessary to correct the issue (which is usually some kind of user error).

    Where the SQL layer doesn't know enough to do anything useful with an error, XACT_ABORT should be on. As someone once said (back in SQL 2000 days), the correct spelling of "BEGIN TRAN" in those circumstances is "SET XACT_ABORT ON; BEGIN TRAN".
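
    A quick sketch of what that spelling buys you (again with a hypothetical #t table, not the QoD code): the first run-time error rolls back the whole transaction and aborts the batch, so nothing is committed.

        CREATE TABLE #t (id INT PRIMARY KEY);

        SET XACT_ABORT ON;                    -- the recommended prefix
        BEGIN TRANSACTION;
            INSERT INTO #t (id) VALUES (1);
            INSERT INTO #t (id) VALUES (1);   -- duplicate key: transaction rolled back, batch aborted
            INSERT INTO #t (id) VALUES (2);   -- never executed
        COMMIT TRANSACTION;                   -- never executed either
        GO

        SELECT COUNT(*) FROM #t;              -- 0
        DROP TABLE #t;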

    However, this can make real-time error containment very difficult, since the information that gets back to the application will generally be a pretty general-purpose error message (one that's built into the data engine); it is often very useful for the SQL layer to generate some extra information (like how far the transaction got, and perhaps which data values or parameters led to the error) that will enable the application to do better real-time error containment and/or recovery, even if it can't do the containment or recovery itself. It is a good general rule that information describing the error and how it came about should be generated as low down in the system as possible, so that containment and recovery can be effective.

    Since 2005 we've had modern (i.e. post-COBOL) error handling in T-SQL, with TRY...CATCH, so it is now very easy to run some code in a CATCH block within the transaction after the error to collect (in variables, perhaps including table variables, of course) extra information about the state of the world at the time of the error, and then, after rolling the transaction back and rethrowing the error, either log that information or pass it out to the application. In the bad old days we had to look at @@ERROR and mess about with IFs to achieve a similar effect.
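
    A rough sketch of that CATCH-block idea, on a hypothetical #demo table (my illustration, not code from the QoD): the CATCH gathers some diagnostics while the transaction's modifications are still visible, rolls back, and then sends the detail back out with the error.

        CREATE TABLE #demo (id INT PRIMARY KEY);

        BEGIN TRY
            BEGIN TRANSACTION;
                INSERT INTO #demo (id) VALUES (1);
                INSERT INTO #demo (id) VALUES (1);   -- duplicate key: jumps to CATCH
                INSERT INTO #demo (id) VALUES (2);
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- gather state while the transaction's work is still visible
            DECLARE @rowsSoFar INT = (SELECT COUNT(*) FROM #demo);
            DECLARE @msg NVARCHAR(4000) = ERROR_MESSAGE();

            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;

            -- pass the extra detail out to the caller along with the original message
            RAISERROR('Transaction rolled back after %d successful row(s): %s', 16, 1, @rowsSoFar, @msg);
        END CATCH;

        DROP TABLE #demo;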

    If an application hands SQL Server a transaction to process, the expected behavior is that the transaction will either succeed or fail. There should be no "massaging" of the data to "correct" problems (i.e. no error handling should be expected on the SQL Server side).

    What, not even reporting to the application that there has been an error? That is error handling, you know.

    That's not the database engine's job. The engine should only be concerned with whether the data matches all the constraints or not. Anything beyond that is the application's responsibility--which includes error handling. And when I say "application" I mean an external application completely separate from SQL Server.

    But constraints (other than auto-generated primary keys) are determined by the business rules for the applications, so you are saying the database engine must not check foreign key constraints, enforce default constraints, check unique and NOT NULL constraints, or check any primary key constraint other than for an automatically generated primary key. Rolling back a transaction when a unique constraint is violated (which will happen if XACT_ABORT is on) is error handling, and it is also something you are claiming the database engine ought always to do. So you can't reasonably assert that it must do no error handling.

    After all, in the example today the database allowed a *partial* transaction to commit (by default!). That violates the definition of a transaction--and thus is a bug, not a feature.

    Your choice of phrase, "partial transaction", is rather strange when used for a transaction which has been committed. If you decide to invent terms like that you can pick ones that appear to support any position you choose, but I certainly don't regard that as either logical or persuasive. The definition of a transaction is that it is a discrete piece of work that has the usual ACID properties. Can you explain why you believe that the transaction presented in this QoD fails to possess those properties, so that it violates that definition?

    Maybe you want T-SQL to be a language in which it is impossible to write bad code (of which the code in the QoD is probably an example, as it has a multi-statement explicit transaction without any attempt to handle statement-level errors). It would be nice to have such a language, but I don't know anyone who has worked on computer languages who believes that such a language is possible.

    Tom

  • I picked the right answer of 2 rows, but not because of the autocommit feature; rather, because there is no error checking done - no check of @@ERROR and no TRY/CATCH. This means every statement after the one that caused the duplicate-key error is still going to be executed.

    This is why the third INSERT inserted the second record and then the transaction was committed.
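
    For anyone who hasn't seen it, a rough sketch of the old-style @@ERROR check (on a hypothetical #t table, not the QoD code) that would have prevented the partial commit:

        CREATE TABLE #t (id INT PRIMARY KEY);

        BEGIN TRANSACTION;

        INSERT INTO #t (id) VALUES (1);
        IF @@ERROR <> 0 GOTO Undo;

        INSERT INTO #t (id) VALUES (1);   -- duplicate key
        IF @@ERROR <> 0 GOTO Undo;        -- this check catches it and skips the rest

        INSERT INTO #t (id) VALUES (2);
        IF @@ERROR <> 0 GOTO Undo;

        COMMIT TRANSACTION;
        GOTO Done;

        Undo:
            ROLLBACK TRANSACTION;
        Done:
            SELECT COUNT(*) FROM #t;      -- 0: nothing survived
            DROP TABLE #t;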

    😀

  • Good question and great discussion. Thanks!

    :exclamation: "Be brave. Take risks. Nothing can substitute experience." :exclamation:
