Fastest Way to consume large XML files

  • Hi everyone

    I want to pick your brain on something.

    I need to create a script that will import large XML files (500 MB to 7 GB) on a daily basis and store the data in a relational DB structure.

    What is the best and fastest way of importing such files? I have played around with smaller files and found the following.

    1. SSIS XML Data Source: it doesn't seem to like the complex element types and rejects the file.

    2. Bulk file import, storing the file in an XML variable and using XQuery to parse the file: this works, but an XML variable can't hold more than 2 GB, so I can't use this method.

    3. C# + XML serialization: this also works, but seems terribly slow. I open the DB connection once, so it doesn't open and close for each DB call, but it still takes a long time.

    Are there any other suggestions on how to import large XML files quickly into a relational table structure?

    Thank you

  • C# XML serialisation should be quickest if implemented correctly.

    How have you implemented it? It sounds like you might be doing individual inserts for each row via a database connection embedded in the C# code. Try a different approach: implement it as an SSIS script task in a data flow and add rows to the output buffer within the read loop of the XmlReader. That way, you're streaming the data into a bulk insert task, which is much more efficient.

    You could also initialise a bulk load from within .NET to keep it as a discrete application, but doing it via SSIS is much easier.
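
    For the bulk-load-from-.NET route, a minimal sketch might use SqlBulkCopy and stream rows out of an XmlReader in batches rather than issuing one INSERT per row. The connection string, file path, element and column names below are invented placeholders, not anything from the thread:

    ```csharp
    using System.Data;
    using System.Data.SqlClient;
    using System.Xml;

    class BulkLoader
    {
        static void Main()
        {
            // Rows are buffered in a small DataTable and flushed in batches,
            // so the whole file never has to fit in memory.
            var batch = new DataTable();
            batch.Columns.Add("OrderId", typeof(string));
            batch.Columns.Add("Amount", typeof(decimal));

            using (var bulk = new SqlBulkCopy("Server=.;Database=Target;Integrated Security=true"))
            using (var reader = XmlReader.Create(@"D:\Feeds\daily.xml"))
            {
                bulk.DestinationTableName = "dbo.Orders";
                bulk.BulkCopyTimeout = 0;
                bulk.ColumnMappings.Add("OrderId", "OrderId");
                bulk.ColumnMappings.Add("Amount", "Amount");

                while (reader.Read())
                {
                    // Hypothetical schema: one <Order id="..." amount="..."/> element per row.
                    if (reader.NodeType == XmlNodeType.Element && reader.Name == "Order")
                    {
                        batch.Rows.Add(reader.GetAttribute("id"),
                                       decimal.Parse(reader.GetAttribute("amount")));
                    }

                    if (batch.Rows.Count >= 10000)
                    {
                        bulk.WriteToServer(batch);   // one bulk-copy round trip per batch
                        batch.Clear();
                    }
                }

                if (batch.Rows.Count > 0)
                    bulk.WriteToServer(batch);       // final partial batch
            }
        }
    }
    ```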

  • Thanks Howard

    I am doing single-record inserts at the moment; I'll combine it with SSIS and see how it goes.

    Great advice, thanks!

  • How would I set up the C# code as a data flow source, though?

  • There's a basic tutorial here:

    http://sql31.blogspot.co.uk/2013/03/how-to-use-script-component-as-data.html

    Basically, most of your code goes into CreateNewOutputRows() and you just call Output0Buffer.AddRow() as you iterate through the XmlReader...
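
    As a rough illustration of that pattern, the body of CreateNewOutputRows() in the script component might look something like this. The file path, element name and output columns are made up; the real ones depend on the feed's schema, and Output0Buffer is the buffer SSIS generates for the component's default output:

    ```csharp
    // In the script component's ScriptMain class (System.Xml referenced at the top of the file):
    public override void CreateNewOutputRows()
    {
        // Stream the file; only the current node is ever held in memory.
        using (XmlReader reader = XmlReader.Create(@"D:\Feeds\daily.xml"))
        {
            while (reader.Read())
            {
                // Hypothetical schema: one <Order id="..." amount="..."/> element per row.
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "Order")
                {
                    Output0Buffer.AddRow();
                    Output0Buffer.OrderId = reader.GetAttribute("id");
                    Output0Buffer.Amount = decimal.Parse(reader.GetAttribute("amount"));
                }
            }
        }
    }
    ```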

  • Just one last question: I can see how this will work when the XML gets transformed into one table.

    If the record consists of 80-odd tables, I will have to run the same processing script for each block of data and output each element to its corresponding table.

    Would you advise perhaps using XML serialization, exporting each element to a flat file with all the PKs/FKs included, and then using a flat file source to import each one?

    I am just worried about the overhead of running the XML processing once for each element, or will this be a non-issue?

  • SSIS source tasks support multiple output buffers. 80 tables sounds pretty extreme, but you could, in theory, add 80 outputs to a data flow feeding 80 destinations, all with a single pass through the actual file!

    In practice, you're probably working at the extremes, so you should try both methods (and also spend time tuning the buffer size parameters). I would avoid making 80 passes of the file for obvious reasons.
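
    Building on the earlier script component sketch, the multiple-output idea would look roughly like this: the same single read loop routes each element type to its own output buffer, and each output feeds its own destination. The two outputs (Orders and OrderLines) and their columns are hypothetical:

    ```csharp
    // In the script component's ScriptMain class, with two outputs named Orders and OrderLines:
    public override void CreateNewOutputRows()
    {
        using (XmlReader reader = XmlReader.Create(@"D:\Feeds\daily.xml"))
        {
            while (reader.Read())
            {
                if (reader.NodeType != XmlNodeType.Element) continue;

                switch (reader.Name)
                {
                    case "Order":                          // feeds the Orders destination
                        OrdersBuffer.AddRow();
                        OrdersBuffer.OrderId = reader.GetAttribute("id");
                        break;

                    case "OrderLine":                      // feeds the OrderLines destination
                        OrderLinesBuffer.AddRow();
                        OrderLinesBuffer.OrderId = reader.GetAttribute("orderId");
                        OrderLinesBuffer.Sku = reader.GetAttribute("sku");
                        break;

                    // ...one case (and one output buffer) per target table
                }
            }
        }
    }
    ```

    The buffer size parameters mentioned above are the data flow's DefaultBufferMaxRows and DefaultBufferSize properties.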

  • It's an international standard for our trade, so we can't make changes to the structure.

    The suggestion you made sounds near perfect, though; I'll implement it and play around with the buffer settings, etc.

    Thanks again.

  • Quick thought: 80 outputs is quite a lot, so it might be worth looking into XQuery in T-SQL for shredding the XML.

    😎

    I have done a few of these huge XML imports in the past; one of the fastest methods I've used is to bulk load the entire file, line by line, into a staging table and reconstruct the XML from there using FOR XML. It sounds daunting but is actually pretty quick. If I recall correctly, a 4-core server with 8 GB of RAM running SQL Server 2008 R2 averaged around 5-6 GB of XML an hour.
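
    The staging load itself would typically be done with BULK INSERT or bcp; to keep these examples in one language, here is a rough C# equivalent of the same line-by-line load using SqlBulkCopy (the staging table dbo.XmlLines, file path and batch size are invented). The FOR XML reconstruction then runs server-side against the staged lines:

    ```csharp
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    class StageLines
    {
        static void Main()
        {
            // Hypothetical staging table: dbo.XmlLines (LineNo int, LineText nvarchar(max)).
            var batch = new DataTable();
            batch.Columns.Add("LineNo", typeof(int));
            batch.Columns.Add("LineText", typeof(string));

            using (var bulk = new SqlBulkCopy("Server=.;Database=Stage;Integrated Security=true"))
            using (var file = new StreamReader(@"D:\Feeds\daily.xml"))
            {
                bulk.DestinationTableName = "dbo.XmlLines";
                bulk.BulkCopyTimeout = 0;
                bulk.ColumnMappings.Add("LineNo", "LineNo");
                bulk.ColumnMappings.Add("LineText", "LineText");

                string line;
                int lineNo = 0;

                while ((line = file.ReadLine()) != null)
                {
                    batch.Rows.Add(++lineNo, line);

                    if (batch.Rows.Count == 50000)     // flush in batches
                    {
                        bulk.WriteToServer(batch);
                        batch.Clear();
                    }
                }

                if (batch.Rows.Count > 0)
                    bulk.WriteToServer(batch);

                // From here the XML is reassembled on the server: group the staged lines
                // per sub-node, concatenate them with FOR XML PATH('') / CAST(... AS xml),
                // and shred the resulting fragments with nodes()/value().
            }
        }
    }
    ```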

  • I think you'd still have a 2 GB max size that you can manipulate through XQuery, wouldn't you?

  • I looked at this option, but the max size is 2 GB.

    We are working with files of up to 9 GB.

  • HowardW (7/28/2014)

    I think you'd still have a 2 GB max size that you can manipulate through XQuery, wouldn't you?

    The way around it is to work with the XML in smaller parts, as an XML file of this size normally breaks down into sub-nodes of a reasonable size. For instance, if the XML file contains a list of locations, one can normally reconstruct each location element on its own. The good thing about the XML data type is that the content doesn't have to be a well-formed document, i.e. it doesn't matter if the root node is missing.

    😎
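
    The workaround above rebuilds each sub-node from the staged lines in T-SQL. A different way to picture the same "smaller parts" principle, sketched in C# to stay consistent with the earlier examples, is to stream each sub-element out of the file and send it to the server as its own xml value, so no single value comes anywhere near the 2 GB cap. The element, table and column names here are invented:

    ```csharp
    using System.Data;
    using System.Data.SqlClient;
    using System.Xml;

    class FragmentLoader
    {
        static void Main()
        {
            using (var conn = new SqlConnection("Server=.;Database=Stage;Integrated Security=true"))
            using (var reader = XmlReader.Create(@"D:\Feeds\daily.xml"))
            {
                conn.Open();

                // Hypothetical staging table: dbo.LocationXml (LocationXml xml).
                var insert = new SqlCommand(
                    "INSERT INTO dbo.LocationXml (LocationXml) VALUES (@x);", conn);
                var param = insert.Parameters.Add("@x", SqlDbType.Xml);

                // One round trip per <Location> fragment, not per row, so the
                // insert granularity stays coarse.
                while (reader.ReadToFollowing("Location"))
                {
                    // ReadOuterXml returns just this element and its children: a
                    // well-formed fragment even though the file's root is never loaded.
                    param.Value = reader.ReadOuterXml();
                    insert.ExecuteNonQuery();
                }
            }
        }
    }
    ```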
