• I agree with everyone above - if at all possible, use SQL Server Enterprise Edition data compression. You just need to make sure every (large) table has a clustered index, then rebuild its indexes with ALTER INDEX ALL ... REBUILD WITH (DATA_COMPRESSION = PAGE).
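
    For example, here's a minimal sketch - the schema and table names are placeholders, so substitute your own:

        -- Find tables that are still heaps, i.e. have no clustered index (index_id = 0)
        SELECT OBJECT_SCHEMA_NAME(t.object_id) AS schema_name, t.name AS table_name
        FROM sys.tables AS t
        JOIN sys.indexes AS i
            ON i.object_id = t.object_id AND i.index_id = 0;

        -- Rebuild the clustered index and all nonclustered indexes with PAGE compression
        ALTER INDEX ALL ON dbo.MyLargeTable
        REBUILD WITH (DATA_COMPRESSION = PAGE);

    If you want an idea of the space you'll get back before committing to the rebuilds, sp_estimate_data_compression_savings will estimate it per table and index.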

    If you do use NTFS compression, use something like http://www.piriform.com/defraggler so you can see not only how many fragments each file in your filegroup has been split into, but also how scattered those fragments are across the disk (assuming contiguous, dedicated, unshared spindles at the SCSI/SAS/SATA/virtual host/SAN level). NTFS compression is a fiend for fragmenting database files. Once the files are read-only they shouldn't change any more, but you may still want to defragment them once, and again if you ever add data to them.
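
    On the read-only point: SQL Server only supports NTFS-compressed data files for read-only filegroups (or a read-only database), so the usual pattern - the database and filegroup names below are made up - is to mark the filegroup read-only before compressing the files under it:

        -- Mark the archive filegroup read-only before NTFS-compressing its files;
        -- flip it back to READ_WRITE (and uncompress) if you ever need to load more data
        ALTER DATABASE MyArchiveDb MODIFY FILEGROUP ARCHIVE_2012 READ_ONLY;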

    The last NTFS-compressed database files I saw were split into more than a hundred thousand fragments on disk, for only a few tens of GB of data. Performance was, unsurprisingly, absolutely terrible.