When disk I/O is no longer a problem, it comes down to CPU and network I/O. To reduce the CPU load (and disk I/O) you could e.g. tune your queries and indexes so that expensive sorts occur less often. And/or you could let your application do the sorting on the client side instead of querying a sorted result (of course this is nothing that can be implemented easily in a long-existing application).
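As a sketch of the index-tuning idea: when an index key matches the `ORDER BY`, SQL Server can stream the rows in order instead of running a Sort operator. The table and column names below are made up for illustration:

```sql
-- Hypothetical table: without a matching index, the ORDER BY below
-- would typically require an expensive Sort in the execution plan.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate DESC)
    INCLUDE (TotalAmount);

-- With the index above, this query can read the rows pre-ordered
-- (ordered seek on CustomerId, then OrderDate descending) -- no Sort needed:
SELECT TOP (50) OrderDate, TotalAmount
FROM   dbo.Orders
WHERE  CustomerId = 42
ORDER  BY OrderDate DESC;
```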
The disk space saved by compressed tables can be a benefit on its own: most databases have a small percentage of hot data and a big heap of (c)old data. Even if I use partitioning and keep the hot data on FusionIO disks, why should I let the cold data take 150 GB instead of 20 GB on my hard disks, even if this data is seldom queried? And do not forget that a DBCC CHECKDB, a backup or a restore always has to access ALL data in the database, so it will save a lot of I/O if the data is much smaller.
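A minimal sketch of how you could apply this per partition (table name and partition number are assumptions for illustration): first estimate the savings, then rebuild only the cold partition with PAGE compression.

```sql
-- Estimate how much PAGE compression would save on the (hypothetical) table:
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'Orders',
     @index_id         = NULL,   -- all indexes
     @partition_number = NULL,   -- all partitions
     @data_compression = N'PAGE';

-- Rebuild only one cold partition (here: partition 3) with PAGE compression,
-- leaving the hot partitions uncompressed on the fast storage:
ALTER TABLE dbo.Orders
REBUILD PARTITION = 3
WITH (DATA_COMPRESSION = PAGE);
```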
Regarding TDE: since SQL 2016 SP2 you can use backup compression together with TDE (the backup will read the data, decrypt it, compress it, encrypt it again and write it to the backup file). But only if you set a MAXTRANSFERSIZE greater than 64 KB. As mentioned above, though, TDE does not affect the PAGE / ROW compression of tables / indexes, since TDE kicks in at a deeper level (just before writing to / after reading from disk, while the compression happens when reading from / writing to the buffer pool).
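A minimal example of such a backup (database name and file path are made up; 131072 bytes = 128 KB, i.e. above the 64 KB threshold):

```sql
-- Compressed backup of a TDE-enabled database (SQL Server 2016 SP2+).
-- MAXTRANSFERSIZE must be greater than 65536 (64 KB), otherwise the
-- compression of the TDE database is silently skipped.
BACKUP DATABASE MyTdeDb
TO DISK = N'D:\Backup\MyTdeDb.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 131072;
```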