I don't know how people deal with this issue in a production database:
Let's say you have a transaction through your web site: in your application program you need to call different stored procedures to insert data into different tables. If one stored procedure call fails, what would you do about the tables whose inserts had already succeeded?
It seems that you could create one stored procedure at the database layer that calls all those stored procedures, so that if one fails you can roll back the transaction. But you would have to pass many parameters to that stored procedure, and besides, what if you need to call one procedure many times depending on the application (inserting an array of data)?
I had this same problem, and unfortunately we did not find a way to do it other than wrapping all of the procedure calls into one stored procedure. When we had to call an SP several times we simply built a function to loop through the array until all the values had been entered. Once that completed, the code would move on to the next call.
The only other thing that was considered was building a procedure that could undo the data changes made to a table if an error was returned. However, it ended up being too complicated. How it worked was that at the end of each of these procedures we would check for an error, and if there was one we would execute a procedure to back out the previous change based on the passed-in parameters.
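For what it's worth, the back-out approach looked roughly like this. This is only a sketch of the compensating pattern, and every procedure name in it is made up for illustration:

```sql
-- Hypothetical compensating pattern: after each step, check @@ERROR and,
-- on failure, call an "undo" procedure for the steps already completed.
DECLARE @CustomerId INT;

EXEC dbo.InsertCustomer @Name = 'Acme', @CustomerId = @CustomerId OUTPUT;
IF @@ERROR <> 0 RETURN;        -- nothing to undo yet

EXEC dbo.InsertAccount @CustomerId = @CustomerId;
IF @@ERROR <> 0
BEGIN
    -- back out step 1 using the parameters we still have in hand
    EXEC dbo.DeleteCustomer @CustomerId = @CustomerId;
    RETURN;
END
```

The problem, as noted above, is that every new step multiplies the undo cases you have to write and keep correct.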
Can't you define a transaction through your application and call all three stored procedures from within that one transaction? I don't have a lot of experience on the application side of SQL Server, but I believe this is how our developers do it. There are times when we control transactions using the BEGIN TRANSACTION, COMMIT, and ROLLBACK keywords inside our stored procedures, but we do have instances where the transaction scope is handled through the ADO connection.
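Inside a stored procedure, that kind of wrapper looks roughly like this. It is only a sketch: the procedure names are invented, and it assumes SQL Server 2005 or later for TRY...CATCH (on older versions you would check @@ERROR after each call instead):

```sql
CREATE PROCEDURE dbo.SaveOrder  -- hypothetical wrapper procedure
    @CustomerId INT,
    @Total MONEY
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        -- each of these writes to a different table
        EXEC dbo.InsertOrderHeader @CustomerId, @Total;
        EXEC dbo.InsertOrderAudit  @CustomerId;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- any failure undoes *all* of the inserts above
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;

        -- re-raise so the caller knows the save failed
        DECLARE @msg NVARCHAR(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH
END
```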
This may be too complicated for your app, but I once had to insert into many tables from an app and roll back everything if one insert went wrong.
What I ended up doing was creating temporary duplicates of the tables I was inserting into and writing to those instead; if that succeeded, I ran a single procedure that moved all the data into the real tables.
Because the temp tables were exact matches of the real ones, any errors would occur while putting data into them rather than at the copy-across step, so I could just delete the temp tables whenever a failure did occur.
I wrapped the final copy-across step in a transaction of its own just in case it ever failed, which it never did, as it was a simple one-to-one copy of rows between identical tables.
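A sketch of that staging approach, with invented table names; the assumption is that the real tables and their *_Staging duplicates have identical columns:

```sql
-- the app writes into the staging copies first; any bad data fails there,
-- leaving the real tables untouched

-- if every staging insert succeeded, move everything across in one go
BEGIN TRANSACTION;
    INSERT INTO dbo.Orders     SELECT * FROM dbo.Orders_Staging;
    INSERT INTO dbo.OrderLines SELECT * FROM dbo.OrderLines_Staging;
COMMIT TRANSACTION;

-- clean up (after an earlier failure, you would run this step alone)
TRUNCATE TABLE dbo.Orders_Staging;
TRUNCATE TABLE dbo.OrderLines_Staging;
```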
Thank you for all your postings and contributions. Yes, basically there should be a way of doing this in the data access layer rather than the application layer. I got this from the Microsoft ASP.NET discussion forum.
In ASP.NET there is a class called SqlTransaction (part of ADO.NET) which has Commit and Rollback methods.
So, no doubt, we can do this in the application rather than the database layer.
I have not read the entire thread and I am certainly "Johnny come lately" on this one, but when I read the reply about iterating through the data to be inserted, I felt compelled to respond.
You could (should) simply declare, depending upon the size of the result set, a table variable or a #TEMP table. For smaller sets go with a table variable, no question: it is easier to deal with, is not explicitly deallocated, and does not use disc access time as it is memory resident.
Anyway, simply feed your complex data as a pipe-delimited string, or as XML if it has multiple levels. From there, read it directly into your table variable (or temp table) and insert the data, or work with it, in a set-based operation rather than looping.
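A minimal sketch of that split-and-insert, using made-up names (note that on SQL Server 2016 and later, the built-in STRING_SPLIT function does the splitting for you):

```sql
DECLARE @list  VARCHAR(8000);
DECLARE @items TABLE (Item VARCHAR(100));
DECLARE @pos   INT;

SET @list = 'red|green|blue';   -- the pipe-delimited input

-- break the string into rows of the table variable
WHILE LEN(@list) > 0
BEGIN
    SET @pos = CHARINDEX('|', @list);
    IF @pos = 0
    BEGIN
        INSERT INTO @items (Item) VALUES (@list);
        SET @list = '';
    END
    ELSE
    BEGIN
        INSERT INTO @items (Item) VALUES (LEFT(@list, @pos - 1));
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
    END
END

-- one set-based insert into the real (hypothetical) table, no looping
INSERT INTO dbo.Colors (Name)
SELECT Item FROM @items;
```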
You don't really want to do this in the application layer. It adds more overhead, for one thing. Secondly, when multiple SPs are in question, it is often the case that, say, the first SP stores some data and also gets the result of something (sets a bit field, or gets some value or derived value) which perhaps influences the next SP that is called, or whether it is called at all, and so on.
If you have what I call a lead SP, then it can call whatever it needs to, as well as handle the business-rule chain if there is one.
As an example, you call your lead SP. Based upon the data supplied, it calls SP 2 and does an insert. Then, because of the data it has, or perhaps a mix of what it was fed and some return value from SP 2, it does not call SP 3; rather, it calls SP 5, then SP 6, and skips SP 7 because of some accumulated result.
If the outcome of the initial insert using SP 2 had been different, perhaps it would have called SP 3, and that would have been the end of it.
You really don't need this logic at the application layer. It can all be handled within a single 'transaction' between the client and the DB, with a single result returned to the client side.
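A skeleton of that lead-SP idea; the SP names and the branching condition are invented, and error handling is omitted for brevity:

```sql
CREATE PROCEDURE dbo.LeadSP     -- hypothetical orchestrating procedure
    @InputValue INT
AS
BEGIN
    DECLARE @Result INT;

    BEGIN TRANSACTION;

    -- first insert; SP2 also reports back a derived value
    EXEC dbo.SP2 @InputValue, @Result OUTPUT;

    IF @Result = 1
        EXEC dbo.SP3 @InputValue;          -- one branch, and we are done
    ELSE
    BEGIN
        EXEC dbo.SP5 @InputValue;          -- the other branch...
        EXEC dbo.SP6 @InputValue;          -- ...calls 5 then 6
        -- SP7 skipped because of the accumulated result
    END

    COMMIT TRANSACTION;
END
```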
Additionally, business rules can be more dynamic without extra calls to the DB, as they can be stored in tables and acted upon within SPs.
Actually, this is incorrect: table variables are not memory resident.