January 8, 2009 at 5:09 am
Hello,
Suppose I have an XML hierarchy that "simulates" folders and files (I'm not finished with the XML structure yet, and have removed a lot of attributes just to make the structure easy to see).
In my domain the folders and files represent Insurances and Claims, so there are Claims, each of which has 0-n documents.
What is the best way to let two different people fetch the whole folder tree, and then update two different subfolders without locking the entire XML row?
(In my world a customer might have 400 claims and 10 concurrent users on the same XML document, but each working with different claims within it.)
I have an XSLT transformation in my stored procedure so I can get a single part of the XML and update just that, but I'm not sure whether that is enough, or whether I need to do something else to be sure that concurrent users don't overwrite each other's changes within the XML tree.
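Roughly, the read side of what I'm doing looks something like this (a simplified sketch only; the table and attribute names here are made up, and my real SP does the extraction with an XSLT transformation):

-- Hypothetical names: dbo.CustomerDocs(CustomerId int, Doc xml),
-- where Doc contains <Claim claimNo="..."> elements.
DECLARE @customerId int = 1, @claimNo int = 123;

-- Fetch just one claim fragment instead of the whole document:
SELECT Doc.query('(//Claim[@claimNo = sql:variable("@claimNo")])[1]')
FROM   dbo.CustomerDocs
WHERE  CustomerId = @customerId;

The question is what happens on the write side when two users do something like this against different claims in the same row.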
Best regards
Jonas
Edit: I noticed that I couldn't add XML tags to show the structure, so I removed those. :/
January 8, 2009 at 11:22 am
Why are you storing and modifying large pieces of xml in a database? Why are these claims not in their own table?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 9, 2009 at 1:10 am
I never want to edit a large part of the XML, only small parts of it, even though the total XML can be very large for some customers.
I'm not sure if you mean to store the claims in a traditional (relational) way, with each claim attribute in its own column?
From what I have seen in my tests, with my 1 million customers who each have up to around 100,000 claims (but maybe an average of 100-1,000 claims), it is much faster to work with an XML structure.
Even an XML structure of 5 MB is a lot of data to transfer over the wire (so I don't pull such a large chunk from the database), but if I did, once it is in the client's memory I can get any claim out of those 10,000 claims in under 1 ms with XPath.
If I instead have a relational table storing a billion rows, it takes quite a bit longer to get the right claim, even though I have an index on the claimNo.
I guess I could increase the speed with heavier hardware, but I still think it will be very hard to match the speed I get now with only 1 million rows.
January 9, 2009 at 1:17 am
jonas_granlund (1/9/2009)
I'm not sure if you mean to store the claims in a traditional (relational) way, with each claim attribute in its own column?
Absolutely. SQL is a relational database, not an xml store.
If I instead have a relational table storing a billion rows, it takes quite a bit longer to get the right claim, even though I have an index on the claimNo.
Post your tests? There are very likely a number of things that can be done to speed things up. A billion rows is a lot, but it's nowhere near SQL's capacity. If I had to guess, it would be due to a poor clustered index, a too-narrow nonclustered index, or code that can't use index seeks.
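Just as a very rough sketch of the kind of design I mean (all of these names are invented for illustration, not your schema):

CREATE TABLE dbo.Customers (
    CustomerID int NOT NULL
        CONSTRAINT PK_Customers PRIMARY KEY
);

CREATE TABLE dbo.Claims (
    CustomerID int         NOT NULL
        REFERENCES dbo.Customers (CustomerID), -- enforceable foreign key
    ClaimNo    int         NOT NULL,
    Status     varchar(20) NOT NULL,
    ClaimDate  date        NOT NULL,
    CONSTRAINT PK_Claims
        PRIMARY KEY CLUSTERED (CustomerID, ClaimNo) -- one seek to any claim
);

DECLARE @customerID int = 1, @claimNo int = 123;

-- Fetching a single claim is then a single clustered index seek,
-- even at a billion rows:
SELECT Status, ClaimDate
FROM   dbo.Claims
WHERE  CustomerID = @customerID
  AND  ClaimNo    = @claimNo;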
I guess I could increase the speed with heavier hardware, but I still think it will be very hard to match the speed I get now with only 1 million rows.
Throwing hardware at a performance problem often has little to no effect, unless the root cause really was the hardware, with the indexes all optimal and the code perfect (which is seldom the case).
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 9, 2009 at 3:06 am
I wrote quite a long list of answers, but I received an error when posting and then couldn't get back to what I had written. :/
Anyway, I'll try again 🙂
I don't agree that SQL Server 2008 doesn't work as an XML store; with the new enhancements in XQuery and XML Schema support, it is a really great XML store.
I wouldn't be surprised if a large competitor like EMC's XML Store uses XML columns from SQL Server, just like they use SQL Server behind the scenes in most of their Documentum implementations.
Regarding the tests, I don't have any numbers I can supply. What I did was convert data from the Documentum SQL implementation over to my own tables, but I only took the relevant table and didn't convert the 100+ or so (overhead) tables 🙂
I did notice that my execution plan showed my indexes being used (I had different clustered indexes depending on the search criteria), so I don't think I had any problem with those.
I have run into a couple of problems with XML, like the one in this thread, but once I've worked around them I really think this can be a great solution for my needs.
And I think the group that works on SQL Server will put a larger and larger crew on the "Beyond Relational" XML side; they had a great seminar on this area at the November PDC.
Best regards
Jonas
January 9, 2009 at 7:27 am
jonas_granlund (1/9/2009)
I don't agree that SQL Server 2008 doesn't work as an XML store; with the new enhancements in XQuery and XML Schema support, it is a really great XML store.
I didn't say it doesn't work as an XML store. I said it's a relational database that happens to store XML, not an engine built first and foremost for storing and processing XML.
I did notice that my execution plan showed my indexes being used (I had different clustered indexes depending on the search criteria), so I don't think I had any problem with those.
Still doesn't mean they were optimal. If you're only talking about a clustered index, then SQL had to use it, because the clustered index replaces the table.
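To continue the invented example from my earlier post: a clustered index showing up in the plan just means the table itself was read, not that the read was efficient. For a search on ClaimNo alone to seek rather than scan, you'd want something like a covering nonclustered index (names invented again):

-- Hypothetical, continuing the invented dbo.Claims table from above.
-- Without this, WHERE ClaimNo = @claimNo alone scans the clustered index:
CREATE NONCLUSTERED INDEX IX_Claims_ClaimNo
    ON dbo.Claims (ClaimNo)
    INCLUDE (Status, ClaimDate); -- covers the query, avoiding key lookups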
And I think the group that works on SQL Server will put a larger and larger crew on the "Beyond Relational" XML side; they had a great seminar on this area at the November PDC.
Uh-huh. It works. It's not the best way to use SQL, it doesn't take advantage of the power and capabilities of a relational engine (how do you enforce foreign keys or unique constraints within those claims?) and it's not a recommended design for a SQL database.
It's like buying a Ferrari and only driving it in 1st gear.
I'm probably not going to convince you of anything though.
The original problem you posted is non-existent in a normalised design. If you insist on using XML, look up the XML modification language in Books Online. It adds three keywords to XQuery: delete, insert, and replace value of. You would use the .modify method of the xml data type to modify fragments of the XML. It will, however, lock the entire row (at a minimum), as that's the lowest locking granularity in SQL.
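A minimal sketch of all three, assuming an invented dbo.CustomerDocs(CustomerId int, Doc xml) table (none of these names are from your design):

DECLARE @customerId int = 1;

-- replace value of: change a single attribute in place
UPDATE dbo.CustomerDocs
SET    Doc.modify('replace value of
                   (//Claim[@claimNo="123"]/@status)[1] with "Closed"')
WHERE  CustomerId = @customerId;

-- insert: add a new node under an existing claim
UPDATE dbo.CustomerDocs
SET    Doc.modify('insert <Document name="scan.pdf"/>
                   as last into (//Claim[@claimNo="123"])[1]')
WHERE  CustomerId = @customerId;

-- delete: remove a node
UPDATE dbo.CustomerDocs
SET    Doc.modify('delete
                   (//Claim[@claimNo="123"]/Document[@name="scan.pdf"])[1]')
WHERE  CustomerId = @customerId;

-- Note: each of these still takes at least a row lock on the updated row.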
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 9, 2009 at 7:57 am
I agree with Gail.
Since the XML is stored as a single column, it is treated just like any other column in the database. Therefore, you will get a row-level lock when updating it. You cannot lock individual node(s) when manipulating the XML. To obtain that level of granularity, the XML will need to be "shredded" into a proper relational model and then reconstituted back as XML for the application.
Note: Other major database vendors may offer this level of XML locking granularity, but since this is a SQL Server forum, the answer is NO.
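For illustration, the shred-and-reconstitute round trip looks roughly like this (all names assumed; dbo.Claims here is the invented table sketched earlier in the thread):

DECLARE @customerID int = 1,
        @doc xml = N'<Claims>
                       <Claim claimNo="123" status="Open"/>
                       <Claim claimNo="124" status="Closed"/>
                     </Claims>';

-- Shred the xml into relational rows:
INSERT INTO dbo.Claims (CustomerID, ClaimNo, Status, ClaimDate)
SELECT @customerID,
       c.value('@claimNo', 'int'),
       c.value('@status',  'varchar(20)'),
       GETDATE()
FROM   @doc.nodes('/Claims/Claim') AS t(c);

-- Reconstitute it as xml for the application:
SELECT ClaimNo AS [@claimNo],
       Status  AS [@status]
FROM   dbo.Claims
WHERE  CustomerID = @customerID
FOR XML PATH('Claim'), ROOT('Claims');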