• Compression does not work well on any data stream < 150 bytes (+/-)

    For most techniques, the first 100 bytes or so are used to store the header information needed to reverse the compression.

    Given this, ROW compression should be reserved for tables with long row datalengths.
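
    A quick way to see whether your rows really are long is the average record size per index. A minimal sketch below; 'dbo.MyWideTable' is a placeholder name, and the DMV needs at least SAMPLED mode to populate the record-size column.

        -- Average row length and page fullness for a (hypothetical) table.
        -- avg_record_size_in_bytes is NULL in LIMITED mode, so use SAMPLED or DETAILED.
        SELECT  i.name                              AS index_name,
                ps.page_count,
                ps.avg_record_size_in_bytes,
                ps.avg_page_space_used_in_percent
        FROM    sys.dm_db_index_physical_stats(
                    DB_ID(), OBJECT_ID('dbo.MyWideTable'), NULL, NULL, 'SAMPLED') AS ps
        JOIN    sys.indexes AS i
                ON  i.object_id = ps.object_id
                AND i.index_id  = ps.index_id
        WHERE   ps.index_level = 0;   -- leaf level only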

    But before considering this approach, there are several ways to reduce the 'byte count' without compression: using varchar instead of nvarchar, compressing 'blob' data before it enters the database, and so on.
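
    For example (a sketch, with hypothetical table/column names): dropping nvarchar to varchar where the data is known to be single-byte, and using the built-in COMPRESS() function (SQL Server 2016+) to gzip blob data on the way in.

        -- Hypothetical column that never holds non-Latin text: halve its storage.
        -- Caution: characters outside the column's code page are silently replaced.
        ALTER TABLE dbo.MyWideTable
            ALTER COLUMN Notes varchar(4000) NULL;

        -- Compress blob data before it is stored.
        -- Assumes dbo.Documents(Body) is varbinary(max).
        DECLARE @DocumentId int           = 1,
                @BodyText   nvarchar(max) = N'...a long blob of text...';

        INSERT INTO dbo.Documents (DocumentId, Body)
        VALUES (@DocumentId, COMPRESS(@BodyText));

        -- Reverse it on the way out.
        SELECT  DocumentId,
                CAST(DECOMPRESS(Body) AS nvarchar(max)) AS Body
        FROM    dbo.Documents
        WHERE   DocumentId = @DocumentId;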

    As always, it's about getting extra row(s) into an individual page, or reducing the number of pages taken by a single row.
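
    Before rebuilding anything, SQL Server will estimate the saving for you. Another sketch with a placeholder table name; note that sp_estimate_data_compression_savings works by compressing a sample of the table in tempdb, so avoid running it at peak times.

        -- Estimated size before/after ROW compression (all indexes, all partitions).
        EXEC sys.sp_estimate_data_compression_savings
             @schema_name      = 'dbo',
             @object_name      = 'MyWideTable',
             @index_id         = NULL,
             @partition_number = NULL,
             @data_compression = 'ROW';

        -- Repeat with 'PAGE' to compare the two options side by side.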

    The biggest drawback here occurs during a page split. The page needs to be reconstituted: every row expanded, the split made, then every row re-compressed. The re-compression doesn't happen until the next write, so there could be some delay before you see the performance hit.

    PAGE compression, on the other hand, is a 'what have we got to lose' option. Each 8K page is compressed when it is written (and there are an awful lot of them), and it may never need to be expanded again.
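
    Turning either option on is just a rebuild; a sketch with placeholder names below. ONLINE = ON keeps the object available during the rebuild but is an Enterprise-edition feature.

        -- ROW compression on the table itself (heap or clustered index), all partitions.
        ALTER TABLE dbo.MyWideTable
            REBUILD PARTITION = ALL
            WITH (DATA_COMPRESSION = ROW);

        -- PAGE compression on every index of the table.
        ALTER INDEX ALL ON dbo.MyWideTable
            REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);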

    The page split is still an issue, but an 8K decompress is a minor (and known) performance hit. You will run into performance problems if your tables/indexes have high page-split metrics.
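
    You can get a rough feel for the split rate from the operational-stats DMV (or the 'Page Splits/sec' counter). A sketch; leaf_allocation_count counts every new leaf page, so ordinary end-of-index growth inflates it and it is only a proxy for 'bad' splits.

        -- Leaf-page allocations per index since the index metadata was last cached
        -- (roughly, since the last service restart).
        SELECT  OBJECT_NAME(ios.object_id)  AS table_name,
                i.name                      AS index_name,
                ios.leaf_allocation_count,      -- includes page splits
                ios.nonleaf_allocation_count,
                ios.leaf_insert_count
        FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
        JOIN    sys.indexes AS i
                ON  i.object_id = ios.object_id
                AND i.index_id  = ios.index_id
        ORDER BY ios.leaf_allocation_count DESC;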

    I've touched on it already, but the important thing to note is that reading compressed data does not cause the 8K pages to be rebuilt (per the MS documentation). So you don't just save space on disk; you get more pages into memory, improving the effectiveness of your buffer cache. Unless you write to a compressed page, you're already ahead.
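
    If you want to see that effect directly, you can count the pages each object currently holds in the buffer pool; compressed pages sit there in compressed form. A sketch against the current database.

        -- Pages (and approximate MB) each object holds in the buffer pool right now.
        SELECT  OBJECT_NAME(p.object_id)        AS object_name,
                COUNT_BIG(*)                    AS pages_in_buffer_pool,
                COUNT_BIG(*) * 8 / 1024         AS approx_mb
        FROM    sys.dm_os_buffer_descriptors AS bd
        JOIN    sys.allocation_units AS au
                ON  au.allocation_unit_id = bd.allocation_unit_id
        JOIN    sys.partitions AS p
                ON  p.hobt_id = au.container_id
        WHERE   bd.database_id = DB_ID()
          AND   au.type IN (1, 3)               -- in-row and row-overflow data
        GROUP BY p.object_id
        ORDER BY pages_in_buffer_pool DESC;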

    A final note.

    The compression itself doesn't occur until the write to disk, so finding out whether this is going to cause grief can only be done under (heavy) data-update load. Take any performance advice with a grain of salt; your situation will be different, and there is only one way for you to find out.