• [p]Yes, a very interesting article which I enjoyed, and which has been quite an eye-opener for me.[/p]

    [p]You found that a deduplicated drive was still able to squeeze further savings out of an already-compressed backup. That suggests to me either that Microsoft's compression algorithm is surprisingly weak (roughly 33% left on the table in your case) or that the PowerShell cmdlet was being optimistic, since you'd normally expect zero or even negative benefit from compressing something that has already been compressed to the max, no matter the algorithm. The total saving was close to what I'd expect from a decent compressed backup from a third-party tool such as SQL Backup.[/p]
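
    [p]For anyone who wants to check whether the cmdlet really is being optimistic, one way is to compare the logical size of the backup files with what the volume claims to have saved. A rough sketch, assuming the deduplicated volume is E: and the backups live under E:\Backups (adjust to taste):[/p]

    [code]
    # Logical size of the backup files on the dedup volume (what they rehydrate to)
    $logical = (Get-ChildItem -Path 'E:\Backups' -Filter '*.bak' -Recurse |
                Measure-Object -Property Length -Sum).Sum

    # What the deduplication engine reports for the volume as a whole
    $dedup = Get-DedupVolume -Volume 'E:'

    '{0:N1} GB of backup files; {1:N1} GB reported saved; {2}% savings rate' -f ($logical / 1GB), ($dedup.SavedSpace / 1GB), $dedup.SavingsRate
    [/code]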

    [p]There has been a debate about the use of deduplication in Backup-as-a-Service (BaaS), the one core problem with deduplication being that 'Deduplication is about backup. It's not about recovery.' That objection doesn't really apply to a local SQL Server 2012 solution, though. Unless you're doing something very clever with deduplication, it doesn't help to minimise network traffic, because of source-versus-target rehydration: as I understand it, the act of reading a file from the deduplicated drive rehydrates it, so your archived backups will have to be copied across the network at their full, uncompressed size. Try doing that to and from cloud backup! Deduplication is not supported or recommended by Microsoft for live data, but is fine for local backups. Even so, I don't see an advantage over properly compressed backups and, like you, I can see the theoretical weakness of a single point of failure in the deduplication algorithm, though Scott M. Johnson and Microsoft Research seem to have minimised the risks.[/p]
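
    [p]To put the rehydration point another way: it's the logical size of the file, not its deduplicated footprint, that has to travel when you copy it off the volume. A sketch of what I mean (the paths are made up, obviously):[/p]

    [code]
    # Copying an archived backup off the dedup volume: the file is rehydrated on read,
    # so the full logical size crosses the network, not the deduplicated footprint.
    $source = 'E:\Backups\Archive\BigDatabase.bak'     # hypothetical source on the dedup volume
    $target = '\\FileServer\DR\BigDatabase.bak'        # hypothetical destination share

    $logicalGB = (Get-Item $source).Length / 1GB
    'About {0:N1} GB will cross the network for this copy.' -f $logicalGB

    Copy-Item -Path $source -Destination $target
    [/code]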

    [p]I'm going to rush off and try a dedup drive, but your article leaves me thinking that this technology is great for document stores, logs, and other reasonably static data, but of less immediate interest to the DBA who already has compressed backups![/p]
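
    [p]For anyone who wants to follow suit, it looks to be only a handful of cmdlets on Windows Server 2012 or later; something along these lines, assuming a spare E: volume set aside for backups:[/p]

    [code]
    # Install the deduplication feature and enable it on the backup volume
    Install-WindowsFeature -Name FS-Data-Deduplication

    Enable-DedupVolume -Volume 'E:'
    Set-DedupVolume -Volume 'E:' -MinimumFileAgeDays 0   # optimise fresh backups too; the default skips recently modified files

    # Don't wait for the background schedule: run an optimisation job now and watch it
    Start-DedupJob -Volume 'E:' -Type Optimization
    Get-DedupJob
    [/code]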

    Best wishes,
    Phil Factor