• I don't want this to sound like a flame, but if you haven't seen a speed difference between a DTS data pump task and a bulk insert, then either you haven't tried to copy a large enough table or you're not meeting the criteria for a minimally-logged bulk copy.  Restrictions on the database recovery model and on the destination table's indexes and triggers can prevent a bulk insert from running as fast as possible.
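
    A minimal T-SQL sketch of the prerequisites I mean; the database, table, and file names here (StagingDB, dbo.StagingOrders, D:\transfer\orders.bcp) are made up, so adjust them for your own setup:

        -- Minimal logging needs SIMPLE or BULK_LOGGED recovery, not FULL.
        SELECT DATABASEPROPERTYEX('StagingDB', 'Recovery') AS RecoveryModel;
        ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED;

        -- For minimal logging the target should be a heap with no nonclustered
        -- indexes (or an empty table); drop or disable extra indexes and rebuild
        -- them after the load.  BULK INSERT skips triggers unless FIRE_TRIGGERS
        -- is specified; disabling them here just makes that explicit.
        ALTER TABLE dbo.StagingOrders DISABLE TRIGGER ALL;

        -- TABLOCK is required for a minimally-logged load; a batch size keeps
        -- the transaction log from ballooning.
        BULK INSERT dbo.StagingOrders
        FROM 'D:\transfer\orders.bcp'
        WITH (DATAFILETYPE = 'native', TABLOCK, BATCHSIZE = 100000);

        ALTER TABLE dbo.StagingOrders ENABLE TRIGGER ALL;
        -- Switch back to your normal recovery model when the load is done.
        ALTER DATABASE StagingDB SET RECOVERY FULL;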

    I have to transfer data back and forth to a Teradata system, and the DTS data pump task is so slow it is unusable for all but the smallest datasets.  This is a combination of bad Teradata drivers (IMHO) and a slow VPN connection whose latency hurts the row-by-row transfer handshaking.  Teradata has its own data transfer utilities that collect rows into large blocks for network transfer, use multiple connections in parallel, and use bulk copy API commands to get the data into SQL Server.  These utilities are a pain to set up and much more limited than DTS, but they are many times faster.  This may be a worst-case scenario (at least the worst one I've had to deal with), but the speed difference is huge.  It's quite possible the Oracle-to-SQL Server connection works better than the Teradata one, but I haven't used it myself.

    I also have to work with remote SQL Servers over a VPN, though one with less latency.  I use DTS packages extensively to move data back and forth, but somewhere in the 10MB-100MB range it becomes worthwhile to switch to a bcp out - zip - ftp - bcp in process (sketched below).  If the file is big enough that you can tell the difference between FTP and a Windows Explorer file copy, you should see a speed improvement with bulk insert.
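
    For what it's worth, the command-line half of that process looks roughly like this (server, database, and path names are made up, and the zip/ftp steps are whatever tools you have handy).  Native format (-n) keeps the files compact and avoids character-set surprises between SQL Servers:

        rem On the source server: export the table in native format over a trusted connection.
        bcp SourceDB.dbo.BigTable out D:\transfer\BigTable.bcp -n -S SOURCESRV -T

        rem Zip the file, ftp it across the VPN, and unzip it on the other side.

        rem On the destination server: bulk load with a table lock and a batch size.
        bcp DestDB.dbo.BigTable in D:\transfer\BigTable.bcp -n -S DESTSRV -T -b 100000 -h "TABLOCK"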

    There is also the issue of the performance impact on the Oracle server.  The 3-hour data transfer is using resources on their system the whole time.  A data export would finish very quickly, and the subsequent copy and import operations would be completely independent of the Oracle database.