BCP Slow Export on Updated Large (100 Mill) Table

  • Has anyone else experienced an extreme slowdown when using BCP queryout against a table that has been updated? If I run it against a table that has only been bulk inserted, I get 300K+ rows per second, but after we add two columns (one int and one decimal(12,2)) and run an update process against the table, the same export slows to a crawl at around 10K rows per second.

    The tables are right at about 100 million rows and 25 columns wide.

    The BCP syntax is simple:

        bcp "Select name, code, edit, charge, units from db1.dbo.table_C" queryout D:\C_OUT.txt -c -T

    Any help or suggestions would be greatly appreciated.

    Thank you

  • Try adding a batch size (-b) or a larger network packet size (-a); see the example below.
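
    For illustration, here's roughly what that would look like on the command you posted (the sizes are made-up values to tune; note the bcp docs say -b applies to imports, so for queryout the packet size is the one likelier to matter):

        bcp "Select name, code, edit, charge, units from db1.dbo.table_C" queryout D:\C_OUT.txt -c -T -a 32768 -b 50000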

  • Yeah, I've tried both -b and -a to no avail. I also tried the last-row (-L) option with no speed improvement.

    Thanks

  • I'm guessing the new columns (with defaults or not?) caused a lot of page splits in the data.

    I'm thinking rebuilding the indexes on the table would improve the performance afterwards.
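
    Something along these lines, using the table from your bcp command (just a sketch; the FILLFACTOR value is illustrative, to leave room for the wider rows):

        ALTER INDEX ALL ON db1.dbo.table_C REBUILD WITH (FILLFACTOR = 90);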

    Lowell



  • The page-split data looks similar across tables that have been updated and tables that have not, and rebuilding an index does not improve performance.

    When I create a new index and immediately run an export, it runs fast, but I think that's because the table data is sitting in our 25 GB of cache, so it's not a valid test. After a short time, or after running other workloads on the server, it drops back to its regular slow export speed.
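
    For a cold-cache comparison on a test box (sketch only; DBCC DROPCLEANBUFFERS requires sysadmin and shouldn't be run in production), something like:

        CHECKPOINT;               -- flush dirty pages so the buffer drop is clean
        DBCC DROPCLEANBUFFERS;    -- empty the buffer pool before re-running the export

    Page density can also be sampled, since rows widened by the update should show up as a low avg_page_space_used_in_percent:

        SELECT index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID('db1'), OBJECT_ID('db1.dbo.table_C'), NULL, NULL, 'SAMPLED');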

    I appreciate the idea though. Thanks!
