The batch size comes into play when loading the data into the database, which is downstream from the issue I am highlighting.
The SqlBulkCopy.WriteToServer method (4 overloads) is how the class is told to begin loading the data into the database. The most common of the four overloads is the one that accepts an ADO.NET DataTable, the WriteToServer(DataTable) method, and a DataTable is by definition a memory-resident object containing the data to load, the entire file in the most generic case. That is the most common way to leverage the SqlBulkCopy class, and it is how the overwhelming majority of the basic internet tutorials show it.
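For reference, the DataTable pattern looks roughly like this; the table name, column names, and connection string are made up for illustration:

```csharp
using System.Data;
using System.Data.SqlClient;

// Build the memory-resident DataTable (in real code this would be the whole file).
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(1, "alpha");
table.Rows.Add(2, "bravo");

using (var conn = new SqlConnection("Server=.;Database=Target;Integrated Security=true"))
{
    conn.Open();
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.MyTable";
        bulk.BatchSize = 5000;           // rows sent to the server per batch
        bulk.WriteToServer(table);       // the entire DataTable is already in memory here
    }
}
```

Note that by the time WriteToServer is called, every row is already sitting in memory, which is exactly the characteristic I am calling out.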
The most interesting of the four overloads to me is the one that accepts an IDataReader. The example on MSDN alludes to the original intent of this overload, which was to enable table-to-table data copying by passing in the result of a SqlCommand.ExecuteReader call (a SELECT executed against one database whose rows are loaded into another), a very powerful and handy method indeed. With only a few lines of code we can copy all data from all tables in one database to a database with the same schema on another instance (database copy wizard, anyone?).
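A sketch of that table-to-table pattern, with placeholder connection strings and table names:

```csharp
using System.Data.SqlClient;

using (var src = new SqlConnection("Server=SourceInstance;Database=Source;Integrated Security=true"))
using (var dest = new SqlConnection("Server=DestInstance;Database=Dest;Integrated Security=true"))
{
    src.Open();
    dest.Open();

    using (var cmd = new SqlCommand("SELECT Id, Name FROM dbo.MyTable", src))
    using (var reader = cmd.ExecuteReader())      // IDataReader over the source rows
    using (var bulk = new SqlBulkCopy(dest))
    {
        bulk.DestinationTableName = "dbo.MyTable";
        bulk.WriteToServer(reader);               // rows stream from source to destination
    }
}
```

Because the reader streams rows, the data never has to be fully materialized in memory the way the DataTable overload requires.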
That said, I would find it much more interesting if one could easily pass an IDataReader hooked up to a flat-file reader. Then you would have something analogous to a subset of the bcp.exe functionality, written in .NET. I have not tried it, nor looked to see whether anyone has gone there, mainly because we already have SSIS, bcp.exe, BULK INSERT and many other tools capable of doing the task, and I have never been boxed into an environment where I would need such a thing.
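For what it's worth, the flat-file idea could plausibly be sketched with a third-party library such as CsvHelper, whose CsvDataReader implements IDataReader over a CSV file. This is untested on my part, and the file path, table name, and connection string are placeholders:

```csharp
using System.Globalization;
using System.IO;
using System.Data.SqlClient;
using CsvHelper;

using (var streamReader = new StreamReader(@"C:\data\input.csv"))
using (var csv = new CsvReader(streamReader, CultureInfo.InvariantCulture))
using (var csvReader = new CsvDataReader(csv))    // IDataReader over the flat file
using (var conn = new SqlConnection("Server=.;Database=Target;Integrated Security=true"))
{
    conn.Open();
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.MyTable";
        bulk.WriteToServer(csvReader);            // streams the file without loading it all into memory
    }
}
```

If that works as advertised, it gets you a poor man's bcp.exe in a couple dozen lines of C#.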
__________________________________________________________________________________________________
There are no special teachers of virtue, because virtue is taught by the whole community. --Plato