• I get frustrated with the "use this free tool" approach. I'd pay for a tool that offered reliably high lossless compression.

    The other point I'd make is that people are so used to copying stuff around their local infrastructure that they get sloppy.

    If you were talking about synchronising DBs, then rather than copying large backup files I'd look at log shipping the changes.
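
    To make that concrete, here's a toy Python sketch of shipping only the tail of a change log instead of a full backup (the file names are made up and a dict stands in for the standby; real databases do this natively via WAL/binlog shipping):

        import json

        def ship_changes(change_log_path, last_applied_offset):
            """Return log entries the standby hasn't applied yet, plus the new offset."""
            with open(change_log_path, "rb") as log:
                log.seek(last_applied_offset)   # skip everything already shipped
                tail = log.read()
            entries = [json.loads(line) for line in tail.splitlines() if line.strip()]
            return entries, last_applied_offset + len(tail)

        def apply_changes(standby, entries):
            """Replay the shipped entries against the standby's copy."""
            for entry in entries:
                standby[entry["key"]] = entry["value"]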

    At a file level, perhaps we should have a facility that concentrates on keeping a catalogue of file timestamps and shifting that catalogue around. Concentrate on syncing the metadata, not the data.

    If someone wants a particular file, the catalogue is checked to see whether an update has taken place in the past 'x' minutes. If so, sync the file; if not, skip the sync and serve the existing local copy.
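
    A rough sketch of that check, assuming the catalogue maps paths to last-update times (fetch_from_origin and serve_local_copy are placeholders for whatever transfer and serving layers actually exist, and the window value is arbitrary):

        import time

        FRESH_WINDOW_SECONDS = 10 * 60  # the 'x' minutes above; tune to taste

        def serve(path, catalogue, fetch_from_origin, serve_local_copy):
            """Sync only if the catalogue shows a recent update, then serve."""
            last_update = catalogue.get(path, 0)
            if time.time() - last_update < FRESH_WINDOW_SECONDS:
                fetch_from_origin(path)    # updated recently: pull the data down
            return serve_local_copy(path)  # otherwise serve what we already have

    The point is that only the small catalogue has to move around eagerly; the file contents move only when someone actually asks for a file that has changed.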