A Collaborative Approach To Table Compression
Posted Wednesday, January 12, 2011 8:11 PM
SSC Journeyman
Comments posted to this topic are about the item A Collaborative Approach To Table Compression

Mike Byrd
Post #1046958
Posted Wednesday, January 12, 2011 10:02 PM
Forum Newbie
Did you do any performance comparison between row and page compression? I, like you, thought about automating compression, and I'm trying it out in a reporting environment. I put a DDL trigger in place that automatically compresses tables with page compression. Granted, that might not be the "ideal" compression for every table, but it's a start. Also, this server has 32 CPUs and 256 GB of RAM, so I am not strapped for CPU; I am more concerned with conserving space.

The server has thousands of very wide/large replicated tables, so the compression helps a ton with disk space and performance. I never did a benchmark of row vs. page, though, and I don't compress indexes.
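
A minimal sketch of such a DDL trigger, assuming the goal is simply to PAGE-compress every newly created table (the trigger and object names are illustrative, error handling is omitted, and the poster's actual trigger isn't shown):

CREATE TRIGGER trg_CompressNewTables
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    -- Pull the schema and table name out of the DDL event payload.
    DECLARE @schema sysname =
        EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)');
    DECLARE @table sysname =
        EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)');

    -- Rebuild the newly created table with PAGE compression.
    DECLARE @sql nvarchar(max) =
        N'ALTER TABLE ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
      + N' REBUILD WITH (DATA_COMPRESSION = PAGE);';
    EXEC sp_executesql @sql;
END;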
Post #1046992
Posted Thursday, January 13, 2011 2:45 AM
SSC Veteran
I can recommend this document:
http://msdn.microsoft.com/en-us/library/dd894051(v=sql.100).aspx
It has a few good hints about when to use which compression type. I am sure you can write a script to do it automatically, but if you don't have a few hundred tables running 24/7, you should do them manually. You won't change the compression that often, and you can keep an eye on the growth of the log file, especially if you have mirroring and replication in place.

I haven't used many compressed tables in a live system yet, but my research so far suggests that page compression is pretty nice but much more expensive, so I would only use it on archive tables, on tables where there is a big difference between page and row compression, and on fact tables in a BI environment.
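
The manual route mentioned above is a single statement per table; a minimal example (table name is hypothetical):

ALTER TABLE dbo.OrderArchive
REBUILD WITH (DATA_COMPRESSION = PAGE);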
Post #1047087
Posted Thursday, January 13, 2011 2:53 AM
SSCrazy
I don't think it is worth compressing small lookup tables. In fact, I noticed that a database containing nothing but small (<100-row) lookup tables actually grew.

Does the concept of an SGAM page exist for compressed data?


LinkedIn Profile
Newbie on www.simple-talk.com
Post #1047092
Posted Thursday, January 13, 2011 6:30 AM
SSC Journeyman
Yes, I did look at page compression, but the community consensus is that you should only use page compression on fairly static tables. If you have to update or insert a row in a page-compressed page, the CPU has to uncompress all the rows in that page and recompress it; in an active system that would probably cost you performance. Page compression appears to be best utilized on lookup tables or tables with minimal write activity. Page compression would also be a player for a partitioned table, where the "older" partitions could be page compressed (assuming low write activity) and the "newer" partitions could be row compressed, as sketched below. I've looked at this for several months and have yet to come up with a robust (read: not fragile) means of maintaining the differentiation between older and newer partitions in a sliding-window environment.
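
A sketch of that mixed-compression layout (table name and partition numbers are made up for illustration):

-- Older, mostly read-only partition: PAGE compression.
ALTER TABLE dbo.SalesHistory
REBUILD PARTITION = 3
WITH (DATA_COMPRESSION = PAGE);

-- Newest, write-heavy partition: ROW compression.
ALTER TABLE dbo.SalesHistory
REBUILD PARTITION = 8
WITH (DATA_COMPRESSION = ROW);

Part of what makes this fragile in a sliding-window scheme is that partition numbers shift as boundaries are split and merged, so a maintenance job has to re-derive which partitions are "old" before each round of rebuilds.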

Mike Byrd
Post #1047213
Posted Thursday, January 13, 2011 6:34 AM
SSC Journeyman
You are correct; compressing small lookup tables actually may increase the overall size. That is why I exclude any table smaller than 8 pages in my script. I am not aware of an SGAM page for compressed data; as far as I can tell, all the compression metadata is embedded in the row (or page) itself.
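
A hedged sketch of that kind of size filter (not the author's actual script): list uncompressed user tables occupying more than 8 pages as row-compression candidates.

SELECT o.name AS table_name,
       SUM(ps.used_page_count) AS used_pages
FROM sys.dm_db_partition_stats AS ps
JOIN sys.partitions AS p
    ON p.partition_id = ps.partition_id
JOIN sys.objects AS o
    ON o.object_id = ps.object_id
WHERE o.is_ms_shipped = 0
  AND p.data_compression = 0    -- 0 = NONE
GROUP BY o.name
HAVING SUM(ps.used_page_count) > 8
ORDER BY used_pages DESC;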

Mike Byrd
Post #1047216
Posted Thursday, January 13, 2011 7:59 AM
SSCoach
Interesting article. I had seen a couple of articles that showed little to no CPU difference even for highly active tables, and that overall page compression was faster. Now I will have to go back and test some more.



Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server, MVP


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Post #1047306
Posted Thursday, January 13, 2011 8:58 AM
SSC-Addicted
Mike,

Once row compression was applied to your tables, how much space were you able to conserve? Do you have any estimates? Numbers would help in judging how useful this feature is.

Thanks,
S


Post #1047366
Posted Thursday, January 13, 2011 10:57 AM
SSC Eights!
I've done some compression experiments, and the performance impact varies significantly.

As for size, 8.1 GB of data and 6.2 GB of indexes (with plenty of VARCHAR fields, none of which had trailing whitespace) PAGE compressed down to 2.9 GB of data and 2.5 GB of indexes.

On a procedure that read from nested views over PAGE-compressed data and indexes, and that generated reporting tables with quite a few newly created indexes based on those nested views, turning on PAGE compression on a box with 20 GB of RAM and 16 cores resulted in a somewhat longer duration, somewhat lower reads, significantly lower writes, and, of course, more CPU usage.

FYI: I've definitely seen a REBUILD of a PAGE-compressed index become CPU-bound on a 16-core box doing nothing else.
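
For anyone reproducing that rebuild test, the statement looks like the following; capping MAXDOP is one way (my suggestion, not the poster's) to keep a CPU-bound rebuild from saturating every core. Index and table names are hypothetical.

ALTER INDEX IX_Report_CustomerID
ON dbo.ReportFacts
REBUILD WITH (DATA_COMPRESSION = PAGE, MAXDOP = 4);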
Post #1047457
Posted Thursday, January 13, 2011 12:30 PM
SSC Journeyman
Unfortunately, I do not have before-and-after data size numbers for the row compression on our production OLTP database. But I did use (as documented in the script) the sp_estimate_data_compression_savings stored procedure to identify tables eligible for row compression (you can also use it for page compression). Examples are shown in the table below (table names changed to protect the innocent), with a sample invocation after the table. Of course, the amount of compression is very dependent on the table architecture and on the data within it. However, I was not looking so much to save disk space as to decrease disk IO (and gain the attendant performance benefit of less IO). I guess I'll have to do a future article on potential performance gains from data compression; thanks for pointing this out.
Table        IndexID  PartitionNumber  CurrentSize(KB)  CompressedSize(KB)  %Compression
AuditLog     1        1                68432            47240               31.0

Barcode      1        1                30112            20576               31.7
Barcode      1        2                656592           453528              30.9
Barcode      1        3                707480           488936              30.9
Barcode      1        4                683672           473000              30.8
Barcode      1        5                693344           481232              30.6
Barcode      1        6                755952           525368              30.5
Barcode      1        7                848928           589064              30.6
Barcode      1        8                231976           160808              30.7

Batch        1        1                65184            43872               32.7

USPSPackage  1        1                2705768          2059016             23.9
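
For reference, each row above comes from a call along these lines (schema assumed to be dbo; 'Barcode' is one of the renamed tables from the list):

EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'Barcode',
    @index_id         = 1,        -- clustered index
    @partition_number = NULL,     -- estimate all partitions
    @data_compression = 'ROW';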


Mike Byrd
Post #1047513