• A typical problem I've had to face in the past: a million or so rows of data from a switch that have to be imported. If they aren't imported, the business runs the risk of leaving a fraud or an intrusion undetected. Right in the middle of the million rows is a record or two that is mangled. Rejecting the whole file isn't an option; the import routine needs to be able to import all the good stuff and leave the bad stuff in a 'limbo' file for manual intervention. Cleaning the data manually before import isn't a good idea either, as such imports are usually scheduled for the early hours of the morning, when the server isn't so busy. Could a schema solve this sort of problem by filtering sheep from goats on a record-by-record basis, rather than on a whole-document basis?
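
    For illustration, the record-by-record triage I have in mind looks something like this. It's only a minimal sketch, and it uses TRY_CONVERT rather than a schema to spot the goats: the CallRecord and CallRecord_Limbo tables are hypothetical, the real column list would be much longer, and TRY_CONVERT needs SQL Server 2012 or later:

        DECLARE @data xml = N'<rows>
          <row><id>1</id><amount>12.50</amount></row>
          <row><id>2</id><amount>garbage</amount></row>
          <row><id>3</id><amount>7.25</amount></row>
        </rows>';

        -- Shred every <row> into a staging table as raw strings first,
        -- so that nothing gets rejected at this stage
        SELECT T.r.value('(id/text())[1]', 'varchar(20)')     AS id_raw,
               T.r.value('(amount/text())[1]', 'varchar(50)') AS amount_raw
        INTO #Staging
        FROM @data.nodes('/rows/row') AS T(r);

        -- Good rows: every value converts cleanly, so load them
        INSERT INTO CallRecord (id, amount)
        SELECT TRY_CONVERT(int, id_raw),
               TRY_CONVERT(decimal(10,2), amount_raw)
        FROM #Staging
        WHERE TRY_CONVERT(int, id_raw) IS NOT NULL
          AND TRY_CONVERT(decimal(10,2), amount_raw) IS NOT NULL;

        -- Bad rows: park them in 'limbo' for manual intervention later
        INSERT INTO CallRecord_Limbo (id_raw, amount_raw)
        SELECT id_raw, amount_raw
        FROM #Staging
        WHERE TRY_CONVERT(int, id_raw) IS NULL
           OR TRY_CONVERT(decimal(10,2), amount_raw) IS NULL;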

    I'd love to know what causes slow XML processing, but, like you, I'll have to wait until it happens to me! I ran some timings a while back on the various parameter-passing techniques and found XML to be as fast as the 'helper-table'/'number-table' technique, which in turn is far faster than any iterative technique. The XML version I timed was along the lines of the sketch below.
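
    This is a sketch only, to show the shape of the technique rather than the exact code I timed; the dbo.Orders table, its OrderID column, and the procedure name are all made up for the example:

        -- Pass a whole list of keys in one xml parameter, then shred it
        -- set-wise with nodes()/value() rather than looping over the list
        CREATE PROCEDURE dbo.GetOrdersByIdList @ids xml
        AS
        SELECT o.*
        FROM dbo.Orders AS o
        WHERE o.OrderID IN
              (SELECT T.i.value('.', 'int')
               FROM @ids.nodes('/ids/id') AS T(i));
        GO

        -- e.g.
        EXEC dbo.GetOrdersByIdList
             @ids = N'<ids><id>3</id><id>42</id><id>97</id></ids>';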

    Best wishes,
    Phil Factor