August 22, 2011 at 11:35 am
Has anyone else experienced an extreme slowdown running BCP queryout against a table that has been updated? If I run it against a table that has only been bulk inserted, I get 300K+ rows per second, but after we add two columns (one int and one decimal(12,2)) and run an update process against the table, the same export slows to a crawl at around 10K rows per second.
The tables are right at 100 million rows and 25 columns wide.
The BCP syntax is simple: bcp "Select name, code, edit, charge, units from db1.dbo.table_C" queryout D:\C_OUT.txt -c -T
Any help or suggestions would be greatly appreciated.
Thank you
August 22, 2011 at 11:49 am
Try adding a batch size (-b) or packet size (-a).
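For reference, and purely as an illustration (the sizes below are placeholder values, not tuned recommendations), those flags would be added to your existing command like this:

bcp "Select name, code, edit, charge, units from db1.dbo.table_C" queryout D:\C_OUT.txt -c -T -a 16384 -b 50000

-a sets the network packet size in bytes and -b sets rows per batch; as far as I know -b only comes into play on imports, so for a queryout export -a is the one more likely to help.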
August 22, 2011 at 12:14 pm
Yeah, I've tried both -b and -a to no avail. I also tried the last-row (-L) option, with no speed improvement.
Thanks
August 22, 2011 at 2:27 pm
I'm guessing the new columns (with defaults or not?) caused a lot of page splits in the data.
I'm thinking rebuilding the indexes on the table would improve the performance afterwards.
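As a rough sketch (object names are just the ones from your post), something like this would show the fragmentation and page density, and then rebuild everything on the table:

-- check fragmentation and page fullness for all indexes on the table
SELECT i.name AS index_name,
       ps.index_type_desc,
       ps.avg_fragmentation_in_percent,
       ps.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('db1'), OBJECT_ID('db1.dbo.table_C'), NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

-- rebuild all indexes on the table to undo the page splits
ALTER INDEX ALL ON db1.dbo.table_C REBUILD;

SAMPLED mode is used so avg_page_space_used_in_percent comes back populated without the cost of a DETAILED scan.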
Lowell
August 31, 2011 at 10:10 am
The page split data looks similar between tables that have been updated and tables that have not, and rebuilding an index does not improve performance.
When I create a new index and immediately run an export, it runs fast, but I think that is because the table data is sitting in our 25 GB of cache, so it is not a valid test. After a short time, or after running other items on the server, it drops back to its regular slow export speed.
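If you want to take the buffer cache out of the picture when timing a run (only on a server where you can afford to flush the cache, e.g. a test box), issuing these right before the bcp export gives a cold-cache test:

CHECKPOINT;
DBCC DROPCLEANBUFFERS;

CHECKPOINT writes dirty pages to disk first so that DROPCLEANBUFFERS can actually empty the buffer pool.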
I appreciate the idea though. Thanks!