• Greetings,

    Thanks to all of you for your support and thoughts on the matter. I've spent quite a few hours putting together format files, testing them, and getting really pi#$ed.

    Generating the format file with bcp and the -n switch does not produce the required prefix lengths, so I ended up combining two files: the automatically generated one, and the format file that bcp creates when you attempt a load without specifying data types (that one does include prefixes). I merged the two .fmt files to get what I'd consider a good format file.
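    For anyone following along, a SQL Server 2000-era format file has one line per column: host field order, host data type, prefix length, host data length, terminator, server column order, server column name, and collation. The table and column names below are made up for illustration; the third field is the prefix length that the merge was meant to supply:

    ```
    8.0
    3
    1   SQLINT     0   4    ""   1   OrderID    ""
    2   SQLCHAR    2   50   ""   2   CustName   SQL_Latin1_General_CP1_CI_AS
    3   SQLMONEY   0   8    ""   3   Amount     ""
    ```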

    Out of the 26 tables I have to do this on, it worked on 24. The only common thread I can find is that the two failing tables have over 170 columns. Both BULK INSERT and bcp blow up, complaining that the field length is too long for column 1 (which is a SQLINT, 4 bytes). Rather than try to cut down the column load to find the breaking point (not practical anyway), I decided on an alternate loading mechanism.

    Ultimately, I ended up putting together a generic CSV-to-XML translator and then used SQLXML 3.0 Bulk Load to force the data in. I decided against using .NET DataSets because of how slowly they load.
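    The translator itself is straightforward. Here's a minimal sketch of the idea in Python (the root and row element names are hypothetical; SQLXML Bulk Load matches whatever names your annotated mapping schema declares):

    ```python
    import csv
    import io
    from xml.sax.saxutils import escape

    def csv_to_xml(csv_text, root="ROOT", row="row"):
        """Translate CSV (first line is the header) into element-centric
        XML for a mapping-schema-driven bulk load. Column names from the
        header become element names, so they must be valid XML names."""
        reader = csv.reader(io.StringIO(csv_text))
        header = next(reader)
        out = [f"<{root}>"]
        for rec in reader:
            out.append(f"  <{row}>")
            for name, value in zip(header, rec):
                # Escape &, <, > so the generated document stays well-formed.
                out.append(f"    <{name}>{escape(value)}</{name}>")
            out.append(f"  </{row}>")
        out.append(f"</{root}>")
        return "\n".join(out)
    ```

    A real version would stream row by row to a file instead of building the whole document in memory, which matters at 50,000 rows per table.
    
    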

    I'd be interested in hearing any suggestions on the XML load speed issue (usually 50,000 rows per table).

    Relationship with the vendor? Let's put it this way: they use MDBS Titanium 6.1f (www.mdbs.com), a navigational model, as the datastore, and they are generally not very responsive to requests such as this. The only saving grace is that they will be moving to MySQL in an upcoming release (please insert bashing here), so I'll be able to reach in and get what I need when needed.

    Many thanks for all your help.