July 2, 2012 at 9:25 am
When I am bulk copying data out from a database, I have changed the bulk copy batch size from the default of 1000 to 50000 using the -b option, but bcp seems to ignore this request. Could this be a setting on the database server side, or have I misunderstood the option? I come from a Sybase background and expected the -b option to behave the same way.
bcp DBName.dbo.TableName out C:\temp\file.txt -c -t ... -b 5000
The -b option seems to make no difference, and the default of 1000 rows is used.
July 2, 2012 at 11:21 am
From BOL:
-b batch_size
Specifies the number of rows per batch of imported data. Each batch is imported and logged as a separate transaction that imports the whole batch before being committed. By default, all the rows in the data file are imported as one batch. To distribute the rows among multiple batches, specify a batch_size that is smaller than the number of rows in the data file. If the transaction for any batch fails, only insertions from the current batch are rolled back. Batches already imported by committed transactions are unaffected by a later failure.
Based on this description, the switch applies only to import (bcp in), not export (bcp out). The repeating "1000 rows" figure you see during an export is most likely just bcp's progress output, printed every 1000 rows copied, rather than evidence of an actual batch size.
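To make the contrast concrete, here is a minimal sketch reusing the table and file names from the thread (the server name, comma field terminator, and trusted connection switch are assumptions, not taken from the original post):

```shell
# Export: -b is accepted but has no effect; the whole table is
# written out in one pass, with progress reported every 1000 rows.
bcp DBName.dbo.TableName out C:\temp\file.txt -c -t, -S MyServer -T

# Import: -b takes effect; rows are inserted and committed in
# separate 50000-row transactions, so a failure only rolls back
# the current batch.
bcp DBName.dbo.TableName in C:\temp\file.txt -c -t, -S MyServer -T -b 50000
```

These commands assume a live SQL Server instance reachable as MyServer, so treat them as a template rather than something to run verbatim.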