• LP:

    Correct: I had not thought of the XML solution, so it was an incorrect statement on my part that single inserts are the “only” way to load data. I will see about updating the article to be more accurate in that regard. Thanks for pointing that out. Really.


    Brian:

    Regarding being able to do this in PHP: I have not had time to check the PHP or Mono (for *NIX) documentation, but if it supports enough basic .Net functionality, then it is possible, since this does not need to be done via a SQL Server-specific library. I used the ADO.Net method, but from what I can see it can also be done via ODBC using “data-at-execution” (whatever that is) or via OLE DB using a custom IRowset implementation.
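
    For anyone wondering what the ADO.Net method can look like, below is a minimal sketch that streams rows to the server through a table-valued parameter fed by an IEnumerable<SqlDataRecord> iterator. The table type dbo.ImportTableType, the proc dbo.ImportData, the file path, and the connection string are all hypothetical placeholders, so adjust as needed. Note that it reuses a single SqlDataRecord instance across yields, which is also the point Florian raised below.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        static class TvpStreamSketch
        {
            // Yields one SqlDataRecord at a time; only the current row is ever in memory.
            static IEnumerable<SqlDataRecord> GetRows()
            {
                SqlDataRecord record = new SqlDataRecord(
                    new SqlMetaData("ID", SqlDbType.Int),
                    new SqlMetaData("Name", SqlDbType.NVarChar, 100));

                foreach (string line in System.IO.File.ReadLines(@"C:\Temp\SourceData.txt"))
                {
                    string[] fields = line.Split('\t');       // transform each row as needed here
                    record.SetInt32(0, int.Parse(fields[0]));
                    record.SetString(1, fields[1]);
                    yield return record;                      // row is sent as it is produced
                }
            }

            static void Main()
            {
                using (SqlConnection connection = new SqlConnection(
                           "Server=.;Database=Test;Trusted_Connection=yes;"))
                using (SqlCommand command = new SqlCommand("dbo.ImportData", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;

                    SqlParameter tvp = command.Parameters.Add("@ImportTable", SqlDbType.Structured);
                    tvp.TypeName = "dbo.ImportTableType"; // user-defined table type on the server
                    tvp.Value = GetRows();                // the IEnumerable is streamed, not buffered

                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }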


    Florian:

    Thanks. And yes, I will try using a single instance of the variable. I could have sworn I started out that way and got an error, but maybe I am misremembering. I will try it again.


    Daniel and Oleg:

    Thanks for pointing out the misuse of the “using” block. I thought I originally had it the way you both recommended, but someone told me to change it to what is currently shown. It is possible that I misunderstood that person. Either way, I did miss it on the SqlCommand, so thanks also, Oleg, for pointing that out.
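
    For clarity, here is the shape of the pattern being recommended, as I understand it (the proc name and connection string are placeholders). Stacking the two “using” blocks guarantees both objects get disposed:

        string connectionString = "Server=.;Database=Test;Trusted_Connection=yes;"; // placeholder

        // Both the SqlConnection and the SqlCommand get their own "using" block,
        // so each is disposed (and the connection closed) even if an exception
        // is thrown partway through.
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("dbo.SomeProc", connection)) // hypothetical proc
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            command.ExecuteNonQuery();
        }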


    Don:

    I am not sure that I understand. Extracting, FTPing, BCPing, transforming, and finally loading into the destination seems a bit more complex than simply reading the data from a local source, transforming each row as necessary, and sending it to the server. That means fewer steps, fewer processes, very little memory usage, and a single connection over the network to the DB, as opposed to the two transfer steps in your five-step process (FTP and BCP). Using FTP, the disk on the SQL Server works once to receive the data and write it locally, twice more to read that local file and write it to a SQL Server data file, and a fourth time to move the data from the “staging” table into the final destination table. In the scenario I described, assuming the same two-server setup, the disk on the SQL Server works only once, as it puts the data it receives from the application directly into the final destination table. That is 25% of the disk I/O (in simplistic terms) of the FTP / BCP / Transform method.


    Matt:

    1) Yes, you can transform the data before it goes into the DataTable when using SqlBulkCopy, but you still need to fully populate that DataTable in memory before you can send it to the destination. In doing so, the entire set is persisted in memory, whereas in the solution I am giving an example of, only a single row at a time is in memory (see the sketch below this list).

    2) I am not advocating that this feature alone is a “compelling” reason to upgrade to SQL Server 2008. If you work somewhere that does not have it, or produce software that needs to work on various versions of various databases, then by all means do what will work in most or all situations. BUT, what I AM saying is that if you already have SQL Server 2008 in-house, then why not use a feature that is available to be used? That would not incur any additional cost.
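
    To illustrate the difference described in point 1, here is a rough sketch of the DataTable / SqlBulkCopy route (the table name, columns, file path, and connection string are hypothetical). Every row accumulates in RAM before anything is sent to the server:

        using System.Data;
        using System.Data.SqlClient;

        class BulkCopySketch
        {
            static void Main()
            {
                // Build the entire DataTable in memory first.
                DataTable table = new DataTable();
                table.Columns.Add("ID", typeof(int));
                table.Columns.Add("Name", typeof(string));

                foreach (string line in System.IO.File.ReadLines(@"C:\Temp\SourceData.txt"))
                {
                    string[] fields = line.Split('\t');
                    table.Rows.Add(int.Parse(fields[0]), fields[1]); // every row is held in RAM
                }

                using (SqlConnection connection = new SqlConnection(
                           "Server=.;Database=Test;Trusted_Connection=yes;"))
                using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "dbo.ImportData"; // hypothetical destination
                    connection.Open();
                    bulkCopy.WriteToServer(table); // only now does anything go to the server
                }
            }
        }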

    SQL#: https://SQLsharp.com/ (SQLCLR library of over 340 Functions and Procedures)
    Sql Quantum Lift: https://SqlQuantumLift.com/ (company)
    Sql Quantum Leap: https://SqlQuantumLeap.com/ (blog)
    Info sites: Collations     •     Module Signing     •     SQLCLR