Awesome article! However, I have a different scenario: what if the source and target tables are really large, close to 1 billion records? I'm not sure it's realistic to go through each record in a Script Component to calculate the hash value. Also, in the Lookup component, full cache on even two columns of the target table would pretty much consume most of the memory, since it contains over 1 billion records.
Any comments on large tables?
Thanks,
Hui