
The process could not bulk copy out of table 'tblname'.
Posted Monday, July 21, 2008 4:00 AM
Old Hand


Group: General Forum Members
Last Login: Tuesday, November 18, 2014 5:35 AM
Points: 330, Visits: 978
Hi,

I am getting the following error in the snapshot:

The process could not bulk copy out of table 'tblname'.

Until now it was working fine, but for the last five days it has been showing the above error. Please let me know: will this hurt the replication process, and how can I solve it?

With Regards
Post #537506
Posted Monday, July 21, 2008 5:46 AM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 1:45 PM
Points: 1,399, Visits: 6,781
Have you checked whether the replication share is reachable and not out of space?
Is there any detailed info on the error?
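If Replication Monitor doesn't show anything more, the distribution database usually holds the underlying error text; a rough sketch of how to pull it, assuming the default distribution database name 'distribution':

-- Recent Snapshot Agent history, newest first
USE distribution;
SELECT TOP 20 start_time, runstatus, comments
FROM dbo.MSsnapshot_history
ORDER BY start_time DESC;

-- Detailed errors logged by replication agents
SELECT TOP 20 time, error_text
FROM dbo.MSrepl_errors
ORDER BY time DESC;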
Post #537583
Posted Friday, November 7, 2008 9:01 AM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, August 6, 2010 2:38 PM
Points: 79, Visits: 488
We are in nearly the same situation: we use merge replication and create the snapshot for replication in an ftp folder.
We just changed to a SAN; the database size is 70 GB.
The snapshot fails and the error is 'The process could not bulk copy out of table 'xxx'.'
I checked the folder before it failed; it already had some tables bulk copied out there, so permissions should not be an issue.
I tried running a separate bcp out of that table to the same ftp subfolder and it ran fine, but unfortunately that did not help with this table.
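(For reference, the manual test looked roughly like this; the database name and share path below are placeholders, not the real ones:)

rem Manual bcp out of the problem table to the replication share (native format, trusted connection)
bcp MyDatabase.dbo.xxx out "\\ftpserver\repldata\xxx.bcp" -n -S CorpServer1 -T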

I also checked the account used to start SQL Server and SQL Server Agent and found that it is an Administrator on the box. Logged on to the box as this user, I can add/delete any file in this folder.

I used a UNC path from my computer to check the ftp folder and it can be accessed without any problem.
I also checked disk space and we have more than 250 GB available.
I checked the current folder (the last one is still there) and found it takes ~50 GB; the new folder gets to ~300 MB before the job fails.

I used SQL Profiler to try to catch any deadlocks but could not find any.

I ran DBCC CHECKTABLE on that table and DBCC CHECKDB on that database; everything is fine.
At the same time I have another database of ~30 GB that can generate its snapshot into the same ftp folder (different subfolder) without any problem.

I do not want to drop replication and recreate it because that would interrupt the business.
I have a plan to copy that table out under a new name, delete the old table and rename the new table to the old name, but I need that to be the last resort because I would have to remove it from the replication articles first.


Any help will be really appreciated.

Post #599040
Posted Saturday, November 8, 2008 7:11 AM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 1:45 PM
Points: 1,399, Visits: 6,781
I found a nice site with a replication FAQ:
http://www.replicationanswers.com/General.asp

Is there any detailed info after the failure (I/O errors, etc.)?
What is the timeout on the snapshot agent?
Post #599436
Posted Monday, November 10, 2008 7:21 AM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, August 6, 2010 2:38 PM
Points: 79, Visits: 488
- No other detail, such as I/O errors.
I changed some parameters in the Snapshot Agent profile as follows:
-BcpBatchSize 200000 (from the default of 100000)
-HistoryVerboseLevel 2 (not changed)
-LoginTimeOut 50000 (from the default of 15)
-MaxBcpThreads 30 (from the default of 1)
-QueryTimeout 7000 (from the default of 300)
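(For anyone who prefers to script these changes rather than use the agent profile dialog, the same values can be set on a custom agent profile with sp_change_agent_parameter at the Distributor; a sketch only, where the profile_id of 15 is a placeholder that has to come from msdb.dbo.MSagent_profiles:)

-- Sketch: update parameters on a custom Snapshot Agent profile (run at the Distributor)
EXEC sp_change_agent_parameter
    @profile_id = 15,            -- placeholder; look it up in msdb.dbo.MSagent_profiles
    @parameter_name = 'QueryTimeout',
    @parameter_value = '7000';

EXEC sp_change_agent_parameter
    @profile_id = 15,
    @parameter_name = 'MaxBcpThreads',
    @parameter_value = '30';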

Now it has changed to failing to bulk copy out of a different table, at almost exactly the same elapsed time, ~1 hour 2-4 minutes.
Normally it got to ~300-400 MB; now it gets to 5 GB before stopping.
I am trying to check for deadlocks again.

Any clue?
Post #599855
Posted Tuesday, November 11, 2008 6:59 AM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, August 6, 2010 2:38 PM
Points: 79, Visits: 488
Still cannot find any deadlock.
I put -OutputVerboseLevel 3 in, and I also changed the distribution agent profile to raise the timeout to 300 (from 15) and the query timeout from 300 to 3000.
The last piece of the output is as follows.
Disconnecting from 'CorpServer1'
(the line above repeated ~22 times)
Bulk copied snapshot data for article '[Check]' (0 rows).
Repl Agent Status: 6
Disconnecting from Publisher 'CorpServer1'

I have talked to the network team about what happened and they do not believe it comes from a SAN problem.

Anything else that I should do?
Please advise.
Thank you in advance.
Post #600635
Posted Tuesday, November 11, 2008 9:30 AM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 1:45 PM
Points: 1,399, Visits: 6,781
I would lower the BcpBatchSize to 10000 or something. How many records are in the table?
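(For a table that size, the row count is quickest to get from metadata rather than a COUNT(*); a sketch, with 'dbo.xxx' standing in for the article's table:)

-- Approximate row count without scanning the table
EXEC sp_spaceused 'dbo.xxx';

-- Or, on SQL Server 2000/2005, straight from sysindexes
SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('dbo.xxx')
  AND indid < 2;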

Also have a look at another replication site I've found:
http://www.informit.com/articles/article.aspx?p=344817
Post #600758
Posted Thursday, November 13, 2008 2:50 PM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, August 6, 2010 2:38 PM
Points: 79, Visits: 488
Thanks Jo,
I tried your recommendation, but it did not work.
I dug into the details in Event Viewer, and I checked sysprocesses and sp_who2 just before my job failed (I ran it many times until I caught it).
In Event Viewer I found:
Process ID: 130 killed by hostname CORPSERVER, host process ID 2992.
Yes, SPID 130 is my job (I have more than one because I set MaxBcpThreads = 40). All the SPIDs for my jobs were killed by process ID 2992.

I checked process ID 2992 (before and after the job failed) and found it belongs to the program name 'SQLAgent - Alert Engine'.
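(The host process ID from the Event Viewer message can be tied back to SQL connections with a query along these lines; a sketch against sysprocesses, with 2992 taken from the log entry above:)

-- Which SQL connections belong to OS process 2992 on the host, and what program opened them?
SELECT spid, loginame, hostname, hostprocess, program_name, status
FROM master.dbo.sysprocesses
WHERE hostprocess = '2992';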

Before I ran the job, I killed that SPID because it showed up with an SPID above 50, but it came back. At the beginning there was also a 'SQLAgent - Generic Refresher' SPID, but that one did not come back after I killed it.
I also disabled all alerts before I ran my job.

Please also advise me about LOCK_TIMEOUT: will changing it help?
I checked and it is '-1', so I am reluctant to change it (I do not know much about this, but some papers say that means an unlimited timeout).
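(For what it's worth, LOCK_TIMEOUT is a per-session setting, and -1, the default, does mean wait indefinitely for locks; it can be inspected and changed like this, where the 10-second value is only an example:)

-- Current lock timeout for this session, in milliseconds (-1 = wait indefinitely)
SELECT @@LOCK_TIMEOUT;

-- Example: make lock waits on this session give up after 10 seconds (error 1222)
SET LOCK_TIMEOUT 10000;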

Any help or recommendation will be really appreciated.


Post #602424
Posted Saturday, November 15, 2008 6:27 AM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 1:45 PM
Points: 1,399, Visits: 6,781
Strange that an alerting engine would interfere with the replication jobs.
Can you lower the MaxBcpthreads back to 1?

I don't have much experience with replication troubleshooting.
What is the current service pack level on both replication machines?
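(The version and service pack level can be read straight off each instance; a quick sketch:)

-- Version, service pack level and edition of the instance
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('Edition') AS Edition;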
Post #603216
Posted Monday, November 17, 2008 7:41 AM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, August 6, 2010 2:38 PM
Points: 79, Visits: 488
Thanks Jo,
Yes, it is very strange.
By the way, I have chosen to reduce the database size gradually from 70 GB to 30 GB. It will take ~10 days to handle this situation, but it will be good in the long run. I am deleting the image data kept in some tables; so far I am freeing ~5 GB/day, which is not bad.
I really want to keep MaxBcpThreads at more than 1 so the snapshot runs faster.
For this company, this will be the best solution at this time; I do not want to do something that may interrupt the business process. If I keep deleting the images with an automatic job (the images are copied elsewhere before being deleted from the table), we should have no big space problem for more than 10 years (I hope).
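(A batched delete keeps that kind of clean-up job from holding locks on the table for long stretches; a minimal sketch, where dbo.tblImages, the ArchivedFlag column and the 5000-row batch size are all illustrative, not the real schema:)

-- Delete already-archived image rows in small batches so locks stay short-lived
SET ROWCOUNT 5000;              -- limit each DELETE to 5000 rows (works on SQL 2000/2005)

DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE FROM dbo.tblImages
    WHERE ArchivedFlag = 1;     -- rows already copied elsewhere

    SET @rows = @@ROWCOUNT;
END;

SET ROWCOUNT 0;                 -- reset the session limit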

Joe, thank you again for trying to help me out.

Thank you so much.


Post #603669