XML Workshop XII - Parsing a delimited string
Posted Tuesday, December 4, 2007 10:29 PM
SSC-Addicted


Group: General Forum Members
Last Login: Tuesday, December 24, 2013 4:42 AM
Points: 460, Visits: 2,523
Comments posted to this topic are about the item XML Workshop XII - Parsing a delimited string

Post #429626
Posted Wednesday, December 5, 2007 2:16 AM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Today @ 4:03 AM
Points: 587, Visits: 2,525
I really enjoy these XML workshops.

Of course, the parsing of delimited data is an important issue for data feeds. If data can be converted into XML first, using AWK or GREP or whatever, it then becomes much easier to gulp it into SQL Server. The biggest questions I've had are:
- Performance. Whenever I've used this XML technology for data feeds, or for passing data between routines, it has been very fast, but others using a very similar system have reported it as being very slow. I'm not at all sure why there is a difference.
- Resilience. Maybe I've been unlucky, but I've had many feeds to deal with that occasionally spit out data that crashes the simplest, and most obvious, data import systems. I reckon that the best systems can isolate corrupt or incorrect data before the bulk insert and allow the DBA to inspect it.

Any thoughts?
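For readers following along, here is a minimal sketch of the kind of XML-based split being discussed; the variable names and sample data are only illustrative, and the input is assumed not to contain characters that need XML escaping (such as & or <).

DECLARE @list  VARCHAR(8000), @delim CHAR(1), @x XML;
SET @list  = 'apple,orange,banana';
SET @delim = ',';

-- Wrap each value in tags by replacing the delimiter, then shred the XML
SET @x = CAST('<i>' + REPLACE(@list, @delim, '</i><i>') + '</i>' AS XML);

SELECT t.i.value('.', 'VARCHAR(100)') AS Item
FROM   @x.nodes('/i') AS t(i);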



Best wishes,

Phil Factor
Simple Talk
Post #429680
Posted Wednesday, December 5, 2007 2:24 AM
SSC-Addicted


Group: General Forum Members
Last Login: Tuesday, December 24, 2013 4:42 AM
Points: 460, Visits: 2,523
I agree with you on the first point. Wherever I have used the XML approach in our applications, it has worked well. However, I have read some posts where people complained about performance problems. I am not sure whether that is because they are using it incorrectly or something else. Or it could be the volume of data... I am not sure.

On the next point, do you think a schema could help?


jacob


Post #429684
Posted Wednesday, December 5, 2007 2:43 AM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Today @ 4:03 AM
Points: 587, Visits: 2,525
A typical problem I've had to face in the past might be that I've got a million or so rows of data from a switch that have to be imported. If they are not imported, then the business runs the risk of leaving a fraud or intrusion undetected. Right in the middle of the million rows is a record or two that is mangled. Rejecting the file isn't an option. The import routine needs to be able to import all the good stuff and leave the bad stuff in a 'limbo' file for manual intervention. Cleaning the data manually before import isn't a good idea either, as such imports are usually scheduled for the early hours of the morning, when the server isn't so busy. Could a schema solve this sort of problem by filtering 'sheep from goats' on a record-by-record basis, rather than on a document basis?

I'd love to know what causes slow XML processing but, like you, I'll have to wait until it happens to me! I ran some timings a while back with the various parameter-passing techniques and found XML to be as fast as the 'helper-table'/'number table' technique, which is far faster than any iterative technique.
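For reference, a rough sketch of the 'helper table'/'number table' split mentioned above; it assumes a dbo.Tally table of sequential integers (a column N covering at least the length of the string), which is not defined anywhere in this thread.

DECLARE @list  VARCHAR(8000), @delim CHAR(1);
SET @delim = ',';
SET @list  = @delim + 'apple,orange,banana' + @delim;   -- pad both ends with the delimiter

SELECT SUBSTRING(@list, N + 1, CHARINDEX(@delim, @list, N + 1) - N - 1) AS Item
FROM   dbo.Tally
WHERE  N < LEN(@list)
  AND  SUBSTRING(@list, N, 1) = @delim;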



Best wishes,

Phil Factor
Simple Talk
Post #429694
Posted Wednesday, December 5, 2007 3:00 AM
SSC-Addicted


Group: General Forum Members
Last Login: Tuesday, December 24, 2013 4:42 AM
Points: 460, Visits: 2,523
I understand the problem now. A schema will validate an entire document; I do not think a schema can be used to filter out the "bad" records and process the "good" ones. I guess the only option available is to query the XML data and retrieve the "good" records (and retrieve the "bad" ones and dump them to a table or an XML file for manual review).
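Something along these lines, perhaps (the element names, the staging table, and the ISNUMERIC check are only illustrative assumptions, not from the article):

DECLARE @x XML;
SET @x = '<r><id>1</id><amt>10.50</amt></r>
          <r><id>2</id><amt>oops</amt></r>
          <r><id>3</id><amt>7.25</amt></r>';

-- Shred everything into a staging table first
SELECT  t.r.value('(id/text())[1]',  'VARCHAR(20)') AS id,
        t.r.value('(amt/text())[1]', 'VARCHAR(20)') AS amt
INTO    #staging
FROM    @x.nodes('/r') AS t(r);

-- "Good" records go on to the real import...
SELECT id, amt FROM #staging WHERE ISNUMERIC(amt) = 1;

-- ...and "bad" records are dumped somewhere for manual review
SELECT id, amt FROM #staging WHERE ISNUMERIC(amt) = 0;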

regards
Jacob


Post #429701
Posted Wednesday, December 5, 2007 6:09 AM


SSC-Dedicated


Group: General Forum Members
Last Login: Today @ 3:28 PM
Points: 36,995, Visits: 31,521
Phil Factor wrote: "Right in the middle of the million rows is a record or two that is mangled. Rejecting the file isn't an option. The import routine needs to be able to import all the good stuff and leave the bad stuff in a 'limbo' file for manual intervention."


BCP will do just that very nicely... it's second in speed only to BULK INSERT, which does not have such a capability.
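For anyone who hasn't driven it that way, a typical command line might look like the following; the -e switch names the file that receives rejected rows and -m sets how many errors to tolerate before the load fails, and the database, table, file, and server names are placeholders.

bcp MyDb.dbo.SwitchData in "feed.txt" -c -t"," -e "rejects.txt" -m 100 -S MyServer -T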


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #429744
Posted Wednesday, December 5, 2007 6:10 AM
SSC Rookie


Group: General Forum Members
Last Login: Monday, January 30, 2012 5:12 AM
Points: 40, Visits: 143
This method is good, but it adds six additional characters to the real data value, which means your string may suddenly no longer fit into the allocated number of characters. You have to be careful.
Post #429745
Posted Wednesday, December 5, 2007 6:14 AM
SSC-Addicted


Group: General Forum Members
Last Login: Tuesday, December 24, 2013 4:42 AM
Points: 460, Visits: 2,523
I don't think this approach is good for large chunks of data. It is handy when you have a small delimited string and you want to break it into a relational table quickly.

Post #429749
Posted Wednesday, December 5, 2007 6:17 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 12:31 PM
Points: 11,265, Visits: 13,027
I thought this was a very interesting article and certainly presented a new way to handle delimited strings without looping. For new applications I would just have the application pass XML as the parameter, but this is certainly a good way to handle existing applications and SSRS 2005 multi-select parameters.
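A hypothetical sketch of that first option, passing XML straight in as a parameter (the procedure, table, and column names are invented for illustration):

CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerIds XML    -- e.g. '<c id="3"/><c id="7"/><c id="12"/>'
AS
BEGIN
    -- Shred the XML parameter and use the ids to filter the target table
    SELECT o.*
    FROM   dbo.Orders AS o
    WHERE  o.CustomerId IN
           (SELECT t.c.value('@id', 'INT')
            FROM   @CustomerIds.nodes('/c') AS t(c));
END;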



Jack Corbett

Applications Developer

Don't let the good be the enemy of the best. -- Paul Fleming

Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
How to Post Performance Problems
Crosstabs and Pivots or How to turn rows into columns Part 1
Crosstabs and Pivots or How to turn rows into columns Part 2
Post #429751
Posted Wednesday, December 5, 2007 9:23 AM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, November 11, 2009 2:15 PM
Points: 1, Visits: 18
I agree it's an interesting "concept", but in the real world, where I have to process MILLIONS of records in a routine, I don't see it working...
Post #429834