Phil,

    I would tend to agree with you, but the practice has been in place for a while now. The main reason is that they are using VM instances of SQL Server. All the new data is brought in on Server "A", which is physically separated from Server "B". Once all the crunching has taken place in "A's" staging table, the data is pushed via a Data Flow Task to the production Server "B". The staging table there is then compared to the final landing table. Since the possibility exists for duplicates, the tables are compared at the row level and only new rows get through. Yes, it's a couple of extra steps, but they're conservative with this and it works.
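    The row-level comparison could be sketched in T-SQL along these lines (table and column names here are hypothetical; the actual package may do this inside the Data Flow instead):

```sql
-- Hypothetical sketch: move only rows from the staging table
-- that don't already exist, in full, in the landing table.
-- EXCEPT compares entire rows, so any duplicate is filtered out.
INSERT INTO dbo.Landing (Id, Col1, Col2)
SELECT s.Id, s.Col1, s.Col2
FROM dbo.Staging AS s
EXCEPT
SELECT l.Id, l.Col1, l.Col2
FROM dbo.Landing AS l;
```

    EXCEPT treats NULLs as equal for comparison purposes, which is usually what you want for this kind of duplicate check.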

    Server "A" is located in-house. Server "B" is located off-site at a co-location facility as part of the Disaster Recovery plan.

    Make sense?

    Crusty.