The process could not bulk copy out of table 'tblname'.

  • Hi,

    I am getting the following error in the snapshot:

    The process could not bulk copy out of table 'tblname'.

    Until now it was working fine, but for the last 5 days it has been showing the above error. Please let me know whether this will hurt the replication process and how to solve it.

    With Regards

  • Have you checked whether the replication share is reachable and not out of space?

    Is there any detailed info on the error?
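
    For reference, those two checks can be scripted from the publisher, roughly like this; a sketch only, assuming xp_cmdshell is enabled and that \\CorpServer1\ReplData is the snapshot share (the share name is hypothetical):

    -- Hypothetical share name; quick free-space and reachability checks.
    EXEC master..xp_fixeddrives;                             -- free MB per local drive
    EXEC master..xp_cmdshell 'dir \\CorpServer1\ReplData';   -- can the share be reached?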

  • We have nearly the same situation: we use merge replication and generate the snapshot for replication in an FTP folder.

    We just changed to a SAN; the database size is 70 GB.

    The snapshot fails and the error is 'The process could not bulk copy out of table 'xxx'.'

    I checked the folder before it failed; it already contains some tables that were bulk copied out there, so permissions should not be an issue.

    I tried running a separate bcp of that table to the same FTP subfolder and it ran fine, but unfortunately the snapshot still did not pick up this table.
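
    For reference, a manual bulk-copy test like that can be run from a command prompt or via xp_cmdshell; a rough sketch with hypothetical database, table, and path names:

    -- Hypothetical names/paths; bulk copy the article's table out by hand
    -- to the same FTP subfolder the snapshot agent writes to.
    EXEC master..xp_cmdshell
        'bcp "MyDB.dbo.xxx" out "E:\ReplData\ftp\xxx.bcp" -n -S CorpServer1 -T';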

    I also checked the account used to start SQL Server and SQL Server Agent and found it is an Administrator of the box. Logged on to the box as this user, I can add/delete any file in this folder.

    I used the UNC path from my computer to check the FTP folder and can access it without any problem.

    I also checked disk space and we have more than 250 GB available.

    I checked the current folder (the last successful one is still there) and found it takes ~50 GB; the new folder gets to ~300 MB before the job fails.

    I used SQL Profiler to try to catch any deadlocks but cannot find any.

    I ran DBCC CHECKTABLE for that table and DBCC CHECKDB for that database; everything is fine.
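
    For completeness, those consistency checks look like this (hypothetical database and table names):

    -- Hypothetical names; check the failing article's table, then the whole database.
    USE MyDB;
    DBCC CHECKTABLE ('dbo.xxx');
    DBCC CHECKDB ('MyDB');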

    At the same time I have another database, ~30 GB in size, which can generate its snapshot in the same FTP folder (different subfolder) without any problem.

    I do not want to delete replication and create it again, because that will interrupt the business.

    I have a plan to copy that table out under a new name, delete the old table, and rename the new table back to the old name. But I need that to be the last resort, because I would first have to remove the table from the replication articles.
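
    If it ever comes to that last resort, removing the table from the merge publication first would look roughly like this; a sketch only, with a hypothetical publication name:

    -- Hypothetical publication/article names; run at the publisher.
    -- After this the table can be copied, dropped and renamed, and the article
    -- re-added later; the snapshot will then need to be regenerated.
    EXEC sp_dropmergearticle
         @publication = N'MyMergePub',
         @article     = N'xxx';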

    Any help would be really appreciated.

  • Found a nice site with a replication FAQ:

    http://www.replicationanswers.com/General.asp

    Any detailed info after it fails (like I/O, ...)?

    What is the timeout on the snapshot agent?

  • No other details like I/O.

    I changed some parameters in the Snapshot agent profile as follows:

    -BcpBatchSize 200000 (from 100000 default)

    -HistoryVerboseLevel 2 (not changed)

    -LoginTimeout 50000 (from 15 default)

    -MaxBcpThreads 30 (from 1 default)

    -QueryTimeout 7000 (from 300 default)
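
    For reference, those values can also be set on the distributor with the agent profile procedures; a sketch only, assuming a custom snapshot profile whose profile_id you look up first:

    -- Run on the distributor. Agent type 1 = Snapshot Agent.
    EXEC sp_help_agent_profile @agent_type = 1;    -- find the profile_id of the custom profile
    EXEC sp_change_agent_parameter
         @profile_id      = 1,                     -- assumed; use the id returned above
         @parameter_name  = N'BcpBatchSize',
         @parameter_value = N'200000';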

    Now it has changed to failing to bulk copy out a different table, at almost exactly the same elapsed time, ~1 hour 2-4 minutes.

    Normally it got to ~300-400 MB; now it gets to ~5 GB before stopping.

    I am trying to check for deadlocks again.

    Any clue?

  • Still cannot find any deadlock.
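
    Besides Profiler, on SQL Server 2000 the deadlock trace flags can be turned on so any deadlock detail goes to the error log; a minimal sketch:

    -- 1204 = report deadlock participants, 3605 = route the output to the error log,
    -- -1 = apply to all connections.
    DBCC TRACEON (1204, 3605, -1);
    -- ... reproduce the snapshot failure, then read the SQL Server error log ...
    DBCC TRACEOFF (1204, 3605, -1);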

    I put OutputVerboseLevel 3 in, and I also changed the distribution agent profile: login timeout to 300 (from 15) and query timeout from 300 to 3000.

    The last piece of the output is as follows:

    Disconnecting from 'CorpServer1'

    (this line repeats about 22 times)

    Bulk copied snapshot data for article '[Check]' (0 rows).

    Repl Agent Status: 6

    Disconnecting from Publisher 'CorpServer1'

    I have talked to the network team about what is happening and they do not believe it comes from a SAN problem.

    Anything else that I should do?

    Please advise.

    Thank you in advance.

  • I would lower the BcpBatchSize to 10000 or something. How many records are in the table?
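
    The row count can be read cheaply from sysindexes on SQL Server 2000 rather than scanning the table; a sketch with a hypothetical table name:

    -- Approximate row count; indid 0 = heap, 1 = clustered index.
    SELECT o.name, i.rows
    FROM   sysobjects o
    JOIN   sysindexes i ON i.id = o.id
    WHERE  o.name = 'xxx'          -- hypothetical table name
      AND  i.indid IN (0, 1);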

    Also have a look at another replication site I've found:

    http://www.informit.com/articles/article.aspx?p=344817

  • Thanks Jo,

    I tried what you recommended; it did not work.

    I dug into the details in Event Viewer and checked sysprocesses and sp_who2 just before the job fails (I ran it many times until I caught it).

    In Event Viewer I found:

    Process ID: 130 killed by hostname CORPSERVER, host process ID 2992.

    Yes, SPID 130 is my job (I have more than one because MaxBcpThreads = 40). All the SPIDs for my job were killed by process ID 2992.

    I checked process ID 2992 (before and after the job failed) and found it belongs to the program name 'SQLAgent - Alert Engine'.
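
    The mapping from that Windows process ID to a session can also be checked in sysprocesses while the agents are running; the PID below is just the value from the Event Viewer message:

    -- Which sessions belong to host process 2992, and what are the SQLAgent sessions doing?
    SELECT spid, program_name, hostname, hostprocess, loginame, cmd
    FROM   master..sysprocesses
    WHERE  hostprocess = '2992'
       OR  program_name LIKE 'SQLAgent%';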

    Before I ran the job, I killed it because it came in as a SPID greater than 50, but it came back. At the beginning there was also a 'SQLAgent - Generic Refresher', but that one did not come back after I killed it.

    I also disabled all alerts before I ran my job.

    Please also advise me about LOCK_TIMEOUT; will it help?

    I checked and found it is '-1', so I am reluctant to change it (I do not know much about this, but some papers show that -1 means an unlimited timeout).
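
    For what it's worth, LOCK_TIMEOUT is a per-session setting and -1 does mean "wait for locks indefinitely"; it can be inspected and changed like this:

    SELECT @@LOCK_TIMEOUT;     -- -1 = wait indefinitely (the default)
    SET LOCK_TIMEOUT 5000;     -- example: give up on a blocked lock after 5000 ms in this session
    SELECT @@LOCK_TIMEOUT;     -- now returns 5000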

    Any help or recommendation would be really appreciated.

  • Strange that an alerting engine would interfere with the replication jobs.

    Can you lower MaxBcpThreads back to 1?

    I haven't much experience with replication troubleshooting.

    What is the current service pack on both replication machines?
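
    The service pack level of each machine can be checked with something like:

    -- Run on the publisher/distributor and on the subscriber.
    SELECT SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')   AS service_pack;    -- e.g. 'SP4'
    SELECT @@VERSION;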

  • Thanks Jo,

    Yes, it is very strange.

    By the way, I have chosen to handle this by gradually reducing the database size from 70 GB to 30 GB. It will take ~10 days, but it will be good in the long run. I am deleting the images kept in some tables. So far I free up ~5 GB/day, which is not bad.
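
    A batched delete keeps that kind of gradual cleanup gentle on the log and on replication; a rough sketch with hypothetical table and column names:

    -- Hypothetical names; delete already-archived image rows in small batches (SQL 2000 style).
    SET ROWCOUNT 1000;                    -- limit each DELETE to 1000 rows
    WHILE 1 = 1
    BEGIN
        DELETE FROM dbo.DocumentImages    -- hypothetical table holding the images
        WHERE  Archived = 1;              -- hypothetical "already copied elsewhere" flag
        IF @@ROWCOUNT = 0 BREAK;
    END
    SET ROWCOUNT 0;                       -- reset to unlimited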

    I really want to keep MaxBcpThreads above 1 so it runs faster.

    For this company, this will be the best solution at this time; I do not want to do something that may interrupt the business process. If I keep deleting the images with an automatic job (the images are stored elsewhere before being deleted from the table), we should have no big space problem for more than 10 years (I hope).

    Joe, thank you again for trying to help me out.

    Thank you so much.

  • Sorry for not completely answering your question.

    I use SQL Server 2000 SP4 on Windows 2003 Standard Edition SP2.

    The snapshot folder is on this server, and the snapshot agent runs on this server.

    Publisher and Distributor are on this server.

    The Subscriber uses ExchangeType 1.

    Data and the replication folder are on the SAN (different drives assigned).

    So pretty much everything is on this server.

    Have a great day.
