July 27, 2010 at 5:08 pm
Hi all,
I have found some really weird behaviour that's been haunting me for more than a week.
I have a .NET process that writes records to a table in two transactions using the classic data adapter and DataSet. The environment is Visual Studio 2008, .NET Framework 3.5 SP1, and a SQL Server 2008 cluster with 32 GB of RAM on the server.
tran1 = conn.BeginTransaction()
da.Update(ds)
tran1.Commit()
In the first transaction it inserts around 20,000 rows and commits; in the second it inserts around 6,000 rows and commits.
When I run this process stand-alone, it works fine. I see all the records in the table.
I was stress testing this to see how many concurrent instances are supported.
When I went to 25 instances of this process (each writing 20k and 6k records in two transactions), I saw something strange. For 4 of the 25 instances, the transaction writing 20k records was mysteriously rolled back. I did not get any errors (which I was expecting). Worse, while the 25 instances were running, I was running a SELECT against the table (with NOLOCK), and the record count kept growing to cover all the rows inserted by those 25*2 transactions. Yet when they all ended (without any exception/error) and I ran the same SELECT again, the records from around 4 transactions, which were visible before, had disappeared without a trace!
1. Is there a limit to the number of record inserts SQL Server 2008 can handle?
2. Even if there is, how come I issued tran1.Commit() and the code doesn't throw an exception, but the records still don't stay in the table?
3. Where did the records, which I could see in the SELECT (with NOLOCK) while the processes were running, disappear to after the processes ended?
4. Is this error logged anywhere, so that I can retry the transactions that were rolled back?
I could run 20 instances without a problem, but saw this rollback behaviour when I increased the load to 25 instances.
July 27, 2010 at 11:44 pm
mdfaraz (7/27/2010)
1. Is there a limit to the number of record inserts SQL Server 2008 can handle?
Only limited by your hardware, and it's not a few thousand. I've inserted millions of rows in a single statement before.
2. Even if there is, how come I issued tran1.Commit() and the code doesn't throw an exception, but the records still don't stay in the table?
Something must have caused a rollback, or tran1.Commit() was never reached and the connection then closed.
3. Where did the records, which I could see in the SELECT (with NOLOCK) while the processes were running, disappear to after the processes ended?
Because when a rollback occurs, everything that was done in the transaction must be rolled back. That means all the rows inserted must be removed.
4. Is this error logged anywhere, so that I can retry the transactions that were rolled back?
Try running SQL Profiler. If you include the batch completed event and all the error events, you should see something. Are you sure it wasn't something like a connection drop? That would cause a rollback. Anything like a constraint violation, a data type conversion error, a primary key violation, or running out of space in the data or log file would also cause a rollback.
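If Profiler is too heavy, one lighter option is to query the default trace for error and attention events. This is only a sketch: it assumes the default trace is still enabled on your server, and whether these event classes were captured depends on the trace definition — with Profiler you would add the Errors and Warnings events explicitly.

```sql
-- Sketch: look for Attention (16) and Exception (33) events in the default trace.
-- Assumes the default trace is enabled (it is unless someone turned it off).
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.EventClass, t.Error, t.TextData,
       t.SPID, t.ApplicationName, t.HostName
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass IN (16, 33)   -- 16 = Attention, 33 = Exception
ORDER BY t.StartTime;
```

An Attention event in particular would point at a client-side cancel or dropped connection, which matches the "silent rollback" symptom.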
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 28, 2010 at 4:39 pm
Hi there,
Thanks a ton for your reply. I appreciate your interest in resolving DB issues! I added my comments in bold... I can't get this colour thing to work! Please read my update below.
mdfaraz (7/27/2010)
1. Is there a limit to the number of record inserts SQL Server 2008 can handle?
Only limited by your hardware, and it's not a few thousand. I've inserted millions of rows in a single statement before.
mdfaraz: I would tend to agree, but there's definitely something fishy.
2. Even if there is, how come I issued tran1.Commit() and the code doesn't throw an exception, but the records still don't stay in the table?
Something must have caused a rollback, or tran1.Commit() was never reached and the connection then closed.
mdfaraz: On the .NET end, I saw no exceptions when committing the two transactions (one with 20k rows and the other with 6k). The code is designed so that if the first transaction fails, the second won't begin. The fact that the records from the second transaction (6k) are never rolled back tells me the 20k transaction was successful.
3. Where did the records, which I could see in the SELECT (with NOLOCK) while the processes were running, disappear to after the processes ended?
Because when a rollback occurs, everything that was done in the transaction must be rolled back. That means all the rows inserted must be removed.
mdfaraz: How can this happen when the commit for the 20k transaction was issued and then the 6k transaction kicked in? Was it committed in tempdb and rolled back when being written to the SAN? Were the results of my SELECT queries coming from the DB cache and not the SAN?
4. Is this error logged anywhere, so that I can retry the transactions that were rolled back?
Try running SQL Profiler. If you include the batch completed event and all the error events, you should see something. Are you sure it wasn't something like a connection drop? That would cause a rollback. Anything like a constraint violation, a data type conversion error, a primary key violation, or running out of space in the data or log file would also cause a rollback.
mdfaraz: My DBA ran the profiler for a bunch of variables, and all I could see were a lot of commits/rollbacks and other events that were very difficult to pinpoint. Also, running the profiler significantly reduced the performance of these processes.
I didn't see anything in the event logs. I have the entire transaction in a try/catch block in .NET, but it doesn't throw any exception on da.Update(ds) or tran1.Commit().
Another update on this issue:
1. As I said, I can run 20 instances without a problem, but run into these mysterious rollbacks at 25. This is without the profiler.
2. When I started the profiler on a remote machine physically located states away from the DB cluster, 1 instance ran fine, but the same behaviour appeared when I went to 2 instances.
3. When I started the profiler on a remote machine physically close to the DB cluster, 5 instances ran fine, but it could not go over 10.
4. So there is something related to the activity/logging on the DB that is contributing to this behaviour.
It would be great to know what's going on.
July 29, 2010 at 2:41 am
Could you please edit that and use the [ quote ] blocks? It's very hard to read who said what.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 29, 2010 at 3:39 am
Probably a deadlock.
Using a NOLOCK hint (equivalent to READ UNCOMMITTED) allows you to read 'dirty' data - data that has not been committed yet. You saw rows that were later rolled back.
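A minimal two-session sketch of this dirty-read effect (the table name is made up for illustration):

```sql
-- Session 1: insert inside a transaction, pause, then roll back
BEGIN TRANSACTION;
INSERT INTO dbo.TestTable (Col1) VALUES ('uncommitted row');
-- ...pause here while session 2 runs...
ROLLBACK TRANSACTION;

-- Session 2: run while session 1 is paused
SELECT COUNT(*) FROM dbo.TestTable WITH (NOLOCK);
-- The NOLOCK read sees the uncommitted row. After session 1
-- rolls back, the same SELECT no longer returns it.
```

This is exactly the pattern you observed: the counts grew while the instances ran, then rows vanished when their transactions rolled back.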
July 29, 2010 at 4:54 am
Also check that you aren't enabling implicit transactions as a connection option and that there's no other wrapping transaction (no begin transaction statements that don't have an associated commit)
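A quick way to check both of these from the session in question:

```sql
-- Bit 2 of @@OPTIONS is the IMPLICIT_TRANSACTIONS setting
SELECT CASE WHEN @@OPTIONS & 2 = 2
            THEN 'Implicit transactions ON'
            ELSE 'Implicit transactions OFF'
       END AS implicit_tran_setting;

-- Or list all session options
DBCC USEROPTIONS;

-- And check for a wrapping transaction at any point in the batch:
-- anything above 0 here means an open transaction on this session
SELECT @@TRANCOUNT AS open_tran_count;
```

Note that some .NET providers and connection-level settings can turn implicit transactions on without it being visible in your own code.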
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 29, 2010 at 9:15 am
I updated my reply in bold... couldn't get the red colour to work!
July 29, 2010 at 9:32 am
mdfaraz (7/29/2010)
I updated my reply with bold..Couldn't get the red color to work!!
What's wrong with using the quote codes? To change colour, you need to wrap "" around the #FF0000.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 29, 2010 at 9:38 am
mdfaraz (7/28/2010)
How can this happen when the commit for the 20k transaction was issued and then the 6k transaction kicked in? Was it committed in tempdb and rolled back when being written to the SAN? Were the results of my SELECT queries coming from the DB cache and not the SAN?
No and no. If the outer transaction commits, it's committed. That's the durability property in ACID.
Is it possible that you somehow had either a nested transaction or implicit transactions on? I'm wondering because if the connection is closed while there's an uncommitted transaction, that transaction will be rolled back.
So, if you had something like this happening
begin transaction
begin transaction
insert 20k
commit transaction
begin transaction
insert 6k
close connection
everything back to the outermost BEGIN TRANSACTION would be rolled back, because the final commit of the outer transaction was never received. Within nested transactions, a COMMIT only decrements the transaction count; only the outermost COMMIT, which takes the count to 0, actually commits the transaction.
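The transaction-count behaviour described above can be seen directly with @@TRANCOUNT (a sketch; the inserts are placeholders):

```sql
BEGIN TRANSACTION;           -- @@TRANCOUNT = 1
BEGIN TRANSACTION;           -- @@TRANCOUNT = 2 (nested)
-- insert 20k rows here
COMMIT TRANSACTION;          -- @@TRANCOUNT back to 1; nothing is durable yet
BEGIN TRANSACTION;           -- @@TRANCOUNT = 2 again
-- insert 6k rows here
-- If the connection closes here, SQL Server rolls back the still-open
-- outer transaction, undoing everything -- including the 20k insert
-- whose inner COMMIT appeared to succeed.
```

This is why an inner Commit() can return without error in .NET and the rows can still disappear later.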
My DBA ran the profiler for a bunch of variables, and all I could see were a lot of commits/rollbacks and other events that were very difficult to pinpoint. Also, running the profiler significantly reduced the performance of these processes.
Yes, Profiler does add overhead, especially if you trace too much.
Try tracing just the error events and see if any fire.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 29, 2010 at 1:33 pm
In my scenario, I have
Open connection
Begin transaction
Insert 20k rows
Commit transaction
Close connection
Open connection
Begin transaction
Insert 6k rows
Commit transaction
Close connection
I don't get an exception on either of the two commits, but when I run multiple instances, the records inserted in the 20k transaction are rolled back/not seen in the table.
So my concern is how a committed transaction can be rolled back. I'm sure I am missing something because, as you said, this goes against the ACID properties of a relational DB.
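One way to test whether something (implicit transactions, pooling, the provider) is quietly wrapping those commits is to watch for open transactions from another session while the instances run. This is a sketch using standard DMVs; 'MyDatabase' is a placeholder for your database name:

```sql
-- Sessions currently holding an open transaction
SELECT s.session_id, s.host_name, s.program_name,
       t.transaction_id, at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s
     ON s.session_id = t.session_id
JOIN sys.dm_tran_active_transactions AS at
     ON at.transaction_id = t.transaction_id;

-- Or the oldest active transaction in one database:
DBCC OPENTRAN ('MyDatabase');
```

If a session from your process still shows an open transaction after your code has called Commit(), that would confirm a hidden outer transaction is in play.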