June 6, 2007 at 8:39 pm
We have a task that reads an input file (XML) and logically breaks it into chunks to be loaded in parallel via Java/JDBC.
Each chunk is handed to a worker Java process that may apply business rules before writing the data into the database. A row being written may already exist in the target table or it may be new.
We're hitting a serious performance/scalability issue: we can't run this load job in parallel. Either there is massive deadlock contention or the load job never completes.
The same task completes very quickly against an Oracle database. Against MSSQL 2005 we have to reconfigure the data loader to run in a single thread.
Does anyone have any recommendations on what we can try?
Thanks,
-Peter
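[Editor's note: a minimal sketch of the chunking step described above, assuming the XML has already been parsed into a list of records. The class and method names are placeholders, not the poster's actual code.]

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkSplitter {

    // Splits parsed records into fixed-size chunks, each of which would
    // be handed to one worker thread for a batched JDBC load.
    public static <T> List<List<T>> split(List<T> records, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < records.size(); i += chunkSize) {
            // Copy the sublist so each worker owns an independent chunk.
            chunks.add(new ArrayList<>(
                records.subList(i, Math.min(i + chunkSize, records.size()))));
        }
        return chunks;
    }
}
```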
June 7, 2007 at 7:03 am
It's probably what I'd expect (can't comment on Oracle). You'd need to examine the process and figure out the locking to avoid this issue. You could try the Oracle-compatible isolation levels in SQL Server 2005 (row versioning, i.e. read committed snapshot / snapshot isolation) to see if that solves the problem.
If your server has hyper-threading you should disable it in the BIOS, as parallel threads will not run evenly (the logical hyper-threaded cores are weaker than the physical ones, which unbalances parallel work).
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
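[Editor's note: the "Oracle-compatible" behaviour referred to above is SQL Server 2005's row-versioning isolation. A sketch of enabling it, where the database name `LoadDB` is a placeholder; note that `READ_COMMITTED_SNAPSHOT` needs exclusive access to the database to switch on, so run it in a maintenance window or add `WITH ROLLBACK IMMEDIATE`.]

```sql
-- Statement-level read consistency (Oracle-like readers-don't-block-writers).
-- "LoadDB" is a placeholder for the target database name.
ALTER DATABASE LoadDB SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally also allow full SNAPSHOT isolation for transactions
-- that explicitly request it.
ALTER DATABASE LoadDB SET ALLOW_SNAPSHOT_ISOLATION ON;
```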
June 7, 2007 at 7:56 am
We've done an analysis of the locking. Oracle does row-level locking by default, so contention on the table and its indexes isn't as bad when multiple threads insert into or update the same table.
How should MSSQL be configured to support concurrent access to the same table from multiple threads without excessive lock contention?
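[Editor's note: independent of server configuration, a common client-side mitigation is to catch the deadlock-victim error and retry the unit of work. SQL Server reports a deadlock victim as error 1205 (SQLSTATE 40001), which surfaces through JDBC as an `SQLException`. A minimal sketch; the class name and backoff values are illustrative, not from the thread.]

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class DeadlockRetry {

    // SQL Server reports the deadlock victim as error 1205 (SQLSTATE 40001).
    static final int DEADLOCK_ERROR = 1205;

    // Runs the given unit of work (e.g. one chunk's transaction), retrying
    // up to maxAttempts times when the server kills it as a deadlock victim.
    // Any other SQLException is rethrown immediately.
    public static <T> T withRetry(Callable<T> work, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (SQLException e) {
                if (e.getErrorCode() != DEADLOCK_ERROR) {
                    throw e;
                }
                last = e;
                Thread.sleep(50L * attempt); // simple linear backoff before retrying
            }
        }
        throw last; // exhausted all attempts
    }
}
```

Each retry must re-run the whole transaction for the chunk, since the server rolls back the victim's work.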