We are in nearly the same situation: we use merge replication and generate the replication snapshot in an FTP folder.
We recently moved to a SAN; the database size is 70 GB.
The snapshot fails with the error 'The process could not bulk copy out of table 'xxx''.
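That message on its own says very little. One way to get more detail is to turn on verbose logging for the Snapshot Agent by appending the standard agent output parameters to the agent job step's command line (the log path below is just a placeholder):

```
-Output C:\Temp\snapshot_agent.log -OutputVerboseLevel 2
```

The resulting log usually includes the underlying bcp error (permissions, network, timeout, etc.) for the table that fails.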
I checked the folder before the job failed; it already contained some tables that had been bulk copied out there, so permissions should not be the issue.
I tried running a separate bcp of that table to the same FTP subfolder and it ran fine, but unfortunately the snapshot still fails on this table.
I also checked the account used to start SQL Server and SQL Server Agent, and found it is an Administrator on the box. Logged on to the box as that user, I can add/delete any file in this folder.
I used a UNC path from my computer to the FTP folder and can access it without any problem.
I also checked disk space; more than 250 GB is available.
I checked the current snapshot folder (the last successful one is still there) and found it takes ~50 GB; the new folder reaches only ~300 MB before the job fails.
I used SQL Profiler to try to catch any deadlocks but could not find any.
I ran DBCC CHECKTABLE on that table and DBCC CHECKDB on the database; everything is fine.
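For reference, the integrity checks I ran were along these lines (`MyDB` is a placeholder for the real database name; `ALL_ERRORMSGS` just makes sure no errors are suppressed):

```sql
-- Check the failing table and then the whole database
DBCC CHECKTABLE ('dbo.xxx') WITH ALL_ERRORMSGS;
DBCC CHECKDB ('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Both completed without reporting any errors.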
At the same time, another database of ~30 GB can generate its snapshot in the same FTP folder (a different subfolder) without any problem.
I do not want to drop and recreate the replication, because that would interrupt the business.
My fallback plan is to copy the table out under a new name, drop the old table, and rename the new table back to the old name. But I need this to be the last resort, because I would first have to remove the table from the replication articles.
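For the record, that last-resort plan would look roughly like this (a sketch only; `MyPub` and the schema name are placeholders, and this assumes the table can safely leave the publication for the duration):

```sql
-- 1. Remove the table from the merge publication
EXEC sp_dropmergearticle
    @publication = N'MyPub',
    @article = N'xxx',
    @force_invalidate_snapshot = 1;

-- 2. Copy the data out under a new name, then swap the tables
SELECT * INTO dbo.xxx_new FROM dbo.xxx;
DROP TABLE dbo.xxx;
EXEC sp_rename 'dbo.xxx_new', 'xxx';

-- 3. Add the table back as an article
EXEC sp_addmergearticle
    @publication = N'MyPub',
    @article = N'xxx',
    @source_object = N'xxx';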
Any help will be really appreciated.