• The key point is to build the temp table, i.e., the reduced-size result set, on the remote box and then reference that table. It is still problematic if the results are driven by local elements, meaning you don't really know what to pull remotely until you join to the local table. You are right about reduced column sets, but then you are limiting things too, and at that point you really are talking about the actual functionality required.
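
    For what it's worth, here is a rough sketch of that pattern on a SQL Server linked server. The server name (RemoteSrv) and the table and column names are all made up for illustration; the idea is to ship only the local driving keys across, stage them on the remote box, and pull back just the reduced column set:

        -- Rough sketch only: RemoteSrv, LocalCustomers, and Orders are
        -- placeholder names, and EXEC ... AT assumes the linked server
        -- has RPC Out enabled.

        -- 1. Stage a keys table on the remote box.
        EXEC ('CREATE TABLE tempdb.dbo.LocalKeys (CustomerID int PRIMARY KEY);')
            AT RemoteSrv;

        -- 2. Push across only the keys the local query actually needs.
        INSERT INTO RemoteSrv.tempdb.dbo.LocalKeys (CustomerID)
        SELECT DISTINCT CustomerID
        FROM dbo.LocalCustomers
        WHERE Region = 'West';       -- whatever drives the result locally

        -- 3. Run the join remotely and return a reduced column set.
        SELECT *
        FROM OPENQUERY(RemoteSrv, '
            SELECT o.CustomerID, o.OrderDate, o.Amount
            FROM RemoteDb.dbo.Orders AS o
            JOIN tempdb.dbo.LocalKeys AS k ON k.CustomerID = o.CustomerID');

        -- 4. Clean up the staged table.
        EXEC ('DROP TABLE tempdb.dbo.LocalKeys;') AT RemoteSrv;

    Step 2 is exactly where the caveat above bites: you can only push the keys once the local side has determined them.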

    I think what it really means is that there is not a really good heterogeneous join system out there, or at least not in SQL Server, though it is probably good enough for most things. You could design a minimal data-transfer scheme that comes close to the least time, space, and data transfer possible, but it gets hard to balance lots of divergent requests. It is as if a human has to be the cost optimizer, because the built-in optimizer reasons fairly simply about remote sources. Yet the fix could be rule-based, so the dynamic SQL could build itself.
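
    Along those lines, here is a rough sketch of what rule-based, self-building dynamic SQL might look like. Everything in it is illustrative: the linked server name (RemoteSrv), the table names, the 1,000-row threshold, and the use of STRING_AGG (which assumes SQL Server 2017 or later). The rule is simple: if the local driving set is small, inline its keys into the remote query text; otherwise pull a reduced remote column set across once and join locally.

        -- Rough sketch of a rule-based rewriter; all names and the
        -- threshold are made up for illustration.

        DECLARE @localRows int,
                @keys      nvarchar(max),
                @sql       nvarchar(max);

        SELECT @localRows = COUNT(*)
        FROM dbo.LocalCustomers
        WHERE Region = 'West';

        IF @localRows <= 1000
        BEGIN
            -- Rule 1: small local driving set. Inline the keys into the
            -- remote query text so only matching rows ever cross the wire.
            -- Safe to concatenate here because the keys are integers.
            SELECT @keys = STRING_AGG(CAST(CustomerID AS varchar(12)), ',')
            FROM dbo.LocalCustomers
            WHERE Region = 'West';

            SET @sql = N'SELECT * FROM OPENQUERY(RemoteSrv,
                ''SELECT CustomerID, OrderDate, Amount
                  FROM RemoteDb.dbo.Orders
                  WHERE CustomerID IN (' + @keys + N')'');';
        END
        ELSE
        BEGIN
            -- Rule 2: large local driving set. Pull a reduced remote
            -- column set across once and join locally, instead of paying
            -- per-row remote lookups.
            SET @sql = N'
                SELECT o.CustomerID, o.OrderDate, o.Amount
                INTO #RemoteOrders
                FROM OPENQUERY(RemoteSrv,
                    ''SELECT CustomerID, OrderDate, Amount
                      FROM RemoteDb.dbo.Orders'') AS o;

                SELECT r.CustomerID, r.OrderDate, r.Amount
                FROM #RemoteOrders AS r
                JOIN dbo.LocalCustomers AS c ON c.CustomerID = r.CustomerID
                WHERE c.Region = ''West'';';
        END

        EXEC sp_executesql @sql;

    A real version would need many more rules (statistics, data types, security), which is exactly the balancing act described above.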

    Is there a contest out there to build the fastest, least packet-hungry (throw in most secure) heterogeneous join across some well-defined set of hardware/software? And if so, what is the prize? I propose that the solution would expand bandwidth on the internet as well as moving data across it, which, after all, is sort of the same thing...