There is useful information in the article:
Compressed pages are persisted as compressed on disk and stay compressed when read into memory.
There is no in-memory, decompressed copy of the compressed page.
The savings in logical and physical I/O are largest when tables or indexes are scanned. When singleton lookups (for read or write) are performed, the I/O savings from compression are smaller.
In my implementations of data compression in some large reporting environments, where scans are inevitable, compression does help. I have seen performance gains of as much as 20-30%. Take the example of a report pulling records from the Order and OrderDetails tables: most of the OrderIDs are joined, so the queries result in index scans.
So the workload is an important factor here in understanding whether compression will help or not.
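Before enabling compression, you can also ask SQL Server to estimate the space savings for a specific object with the built-in `sp_estimate_data_compression_savings` procedure. A minimal sketch (the `Sales.OrderDetails` name is hypothetical; substitute your own schema and table):

```sql
-- Estimate PAGE compression savings for a hypothetical Sales.OrderDetails table.
-- NULL for @index_id and @partition_number means "all indexes / all partitions".
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'Sales',
    @object_name      = 'OrderDetails',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```

The result set shows current size versus estimated compressed size per index and partition, which helps you weigh the space savings before committing.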
A pure reporting workload, where OLTP and reporting are separated, typically does benefit from compression on standard storage. Regarding the workload, it is important to understand the U and S percentages:
U: Percent of Update Operations on the Object
S: Percent of Scan Operations on the Object
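These percentages can be approximated from the `sys.dm_db_index_operational_stats` DMV, along the lines of the formulas in Microsoft's data compression best-practices guidance. A rough sketch (counters reset on restart, so capture them over a representative workload window):

```sql
-- Approximate S (scan %) and U (update %) per index in the current database.
-- NULLIF guards against division by zero for untouched indexes.
SELECT
    o.name AS table_name,
    i.name AS index_name,
    s.range_scan_count * 100.0
        / NULLIF(s.range_scan_count + s.singleton_lookup_count
               + s.leaf_insert_count + s.leaf_delete_count
               + s.leaf_update_count + s.leaf_page_merge_count, 0) AS scan_pct,
    s.leaf_update_count * 100.0
        / NULLIF(s.range_scan_count + s.singleton_lookup_count
               + s.leaf_insert_count + s.leaf_delete_count
               + s.leaf_update_count + s.leaf_page_merge_count, 0) AS update_pct
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
JOIN sys.objects o ON o.object_id = s.object_id
JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE o.is_ms_shipped = 0
ORDER BY scan_pct DESC;
```

Indexes with a high scan_pct and low update_pct are the best candidates for compression.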
I feel that looking at the execution plan of the query can tell you a lot, as @TheSQLGuru said.
So next time you test it, please test it on a workload that leans more towards S. Run some of the larger reports where scans happen.
I feel you will see a performance benefit even if snapshot isolation is turned on.