SQL Server 2008 Compression
Posted Thursday, April 22, 2010 9:26 AM
Old Hand


Group: General Forum Members
Last Login: Thursday, December 18, 2014 1:35 PM
Points: 361, Visits: 1,545
Jason Shadonix (4/22/2010)
ben.rosato (4/22/2010)
I have a table that's 10GB. There is an ntext column that holds the largest portion of the data. When I check the estimated compression for this table, it shows only a 10MB reduction. That seems strange to me. Why would it not be able to compress that?



The only thing I see in BOL that might be a hint is the following:

"Because of their size, large-value data types are sometimes stored separately from the normal row data on special purpose pages. Data compression is not available for the data that is stored separately."





Yep, must be due to the ntext data type. I wonder if nvarchar(max) compresses.
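
For anyone wanting to reproduce the check, here's a minimal sketch using the built-in estimating procedure (the table name is made up):

-- Estimate PAGE compression savings for a hypothetical table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'BigNtextTable',   -- hypothetical table name
    @index_id         = NULL,              -- NULL = all indexes
    @partition_number = NULL,              -- NULL = all partitions
    @data_compression = 'PAGE';            -- or 'ROW' / 'NONE'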
Post #908734
Posted Thursday, April 22, 2010 9:32 AM
Valued Member


Group: General Forum Members
Last Login: Wednesday, December 4, 2013 3:54 PM
Points: 67, Visits: 230
This is confusing: "As you can see from the disk bytes/sec counter, there is a considerable savings caused by both row and page level compression, with the page level compression yielding the highest improvement."

How is this? The number of bytes per second is much lower with compression on. What is the "considerable savings"? A lower number of bytes per second is a bad thing, not a good thing. Can you explain this please?

Finally, what is the "time" axis on all of the charts? What kind of time? The axis is not labeled with any units (minutes, seconds, hours...). Is this the time since the test started, and if so, how is that significant?

Thanks.
Post #908746
Posted Thursday, April 22, 2010 9:34 AM
Old Hand


Group: General Forum Members
Last Login: Thursday, December 18, 2014 1:35 PM
Points: 361, Visits: 1,545
Compression doesn't appear to work on nvarchar(max).
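
A quick way to see how much of a table actually lives on the off-row LOB pages that data compression skips -- a sketch, with a made-up table name:

-- Break down used pages by allocation unit type for one table.
-- LOB_DATA pages are not touched by row/page compression.
SELECT  au.type_desc,                    -- IN_ROW_DATA / LOB_DATA / ROW_OVERFLOW_DATA
        SUM(au.used_pages) AS used_pages
FROM    sys.partitions AS p
JOIN    sys.allocation_units AS au
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
        OR (au.type = 2       AND au.container_id = p.partition_id)
WHERE   p.object_id = OBJECT_ID(N'dbo.BigNtextTable')   -- hypothetical
GROUP BY au.type_desc;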
Post #908751
Posted Thursday, April 22, 2010 10:32 AM


SSC Journeyman


Group: General Forum Members
Last Login: Monday, November 24, 2014 1:14 PM
Points: 78, Visits: 822
Hi,

If a very big table doesn't have a clustered index and I decide to compress it, is it possible that the table becomes fragmented?
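
A sketch of the fragmentation check I'd run before and after (the heap name is made up):

-- Fragmentation and forwarded records for a hypothetical heap.
SELECT  index_id,
        index_type_desc,
        avg_fragmentation_in_percent,
        forwarded_record_count          -- heaps only; NULL in LIMITED mode
FROM    sys.dm_db_index_physical_stats(
            DB_ID(),
            OBJECT_ID(N'dbo.BigHeap'),  -- hypothetical table
            NULL, NULL,
            'SAMPLED');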

calico
Post #908832
Posted Thursday, April 22, 2010 10:35 AM
Ten Centuries


Group: General Forum Members
Last Login: Yesterday @ 9:02 AM
Points: 1,383, Visits: 1,754
David Walker-278941 (4/22/2010)
This is confusing: "As you can see from the disk bytes/sec counter, there is a considerable savings caused by both row and page level compression, with the page level compression yielding the highest improvement."

How is this? The number of bytes per second is much lower with compression on. What is the "considerable savings"? A lower number of bytes per second is a bad thing, not a good thing. Can you explain this please?


We're talking SQL Server performance, not disk performance. This is not how many bytes per second the disk array is capable of; this is how many bytes SQL Server had to read/write to disk during the testing -- and the less disk access needed, the better.

This is also why (I believe) the compression results in faster backup and access times -- the bottleneck in this particular test is the disk array.
Post #908833
Posted Thursday, April 22, 2010 3:45 PM


SSCoach


Group: General Forum Members
Last Login: Today @ 2:39 PM
Points: 18,064, Visits: 16,108
This is a great article. Highly informative, and helpful for a decision about compression that I'm currently looking to make.



Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server, MVP


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Post #909056
Posted Thursday, April 22, 2010 3:57 PM
Valued Member


Group: General Forum Members
Last Login: Wednesday, December 4, 2013 3:54 PM
Points: 67, Visits: 230
sknox (4/22/2010)
David Walker-278941 (4/22/2010)
This is confusing: "As you can see from the disk bytes/sec counter, there is a considerable savings caused by both row and page level compression, with the page level compression yielding the highest improvement."

How is this? The number of bytes per second is much lower with compression on. What is the "considerable savings"? A lower number of bytes per second is a bad thing, not a good thing. Can you explain this please?


We're talking SQL Server performance, not disk performance. This is not how many bytes per second the disk array is capable of; this is how many bytes SQL Server had to read/write to disk during the testing -- and the less disk access needed, the better.

This is also why (I believe) the compression results in faster backup and access times -- the bottleneck in this particular test is the disk array.


Ah, yes, that does make sense. Still, it would be useful to know what the "units" are on the Time scale.

I think that with a lot of SQL Server work, the disk subsystem is the bottleneck... not just in these tests.
Post #909063
Posted Friday, April 23, 2010 6:33 AM
SSCommitted


Group: General Forum Members
Last Login: Today @ 12:10 PM
Points: 1,544, Visits: 2,277
David Walker-278941 (4/22/2010)
sknox (4/22/2010)


Ah, yes, that does make sense. Still, it would be useful to know what the "units" are on the Time scale.

I think that with a lot of SQL Server work, the disk subsystem is the bottleneck... not just in these tests.


Sorry -- just noticed the time question. The tests I did were just short 3-minute tests, and those "units" are just what MS Excel decided I wanted, and I didn't feel like fighting it. They look to be about 15-second intervals.

Before you implement anything, I'd suggest playing on a dev box with a copy of the database you're considering compressing, under a more realistic load, and running the tests for a lot longer.

We are just getting ready to go live with our first 2008 box (all 2005 otherwise) and a brand new app, so for now I'm just doing backup compression and not data compression. From what I can tell, it's just as easy to implement data compression later as it is up front (keeping in mind that a short burst of CPU/disk use will probably be involved).
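
For what it's worth, the backup-compression piece is a one-liner (sketch; database name and path are made up):

-- Compressed backup (Enterprise Edition on 2008; Standard from 2008 R2).
BACKUP DATABASE MyNewApp                      -- hypothetical database
TO DISK = N'X:\Backups\MyNewApp.bak'          -- hypothetical path
WITH COMPRESSION, INIT;

-- Or make compression the server-wide default:
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;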


The Redneck DBA
Post #909385
Posted Friday, April 23, 2010 8:32 AM
Ten Centuries


Group: General Forum Members
Last Login: Yesterday @ 9:02 AM
Points: 1,383, Visits: 1,754
David Walker-278941 (4/22/2010)
sknox (4/22/2010)

This is also why (I believe) the compression results in faster backup and access times -- the bottleneck in this particular test is the disk array.

I think that with a lot of SQL Server work, the disk subsystem is the bottleneck... not just in these tests.


Certainly, but as that's not always the case, and especially as encryption and compression have historically been compute-intensive tasks, I felt it was important to note that fact. As the disk subsystem is the bottleneck here, compression helps performance by decreasing reliance on that subsystem. But if you know that your disk subsystem has better performance characteristics -- or that your CPU has worse -- you should expect less dramatic results -- possibly even decreased performance in extreme cases.
Post #909530
Posted Friday, April 23, 2010 2:04 PM


SSC Journeyman


Group: General Forum Members
Last Login: Tuesday, November 25, 2014 1:49 PM
Points: 81, Visits: 372
Good article and glad to see numbers attached to it.

One question, though: when you apply row or page compression, is the table (and its respective indexes) still accessible, or does the operation hold some sort of lock on it? The reason I ask is that my production OLTP database is in a 24x7 environment and we have no maintenance window.
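
From what I've read, compression is applied through a rebuild, which can apparently be done online on Enterprise Edition -- something like this sketch (table name made up) -- but I'd like confirmation on the locking behavior:

-- Apply page compression with an online rebuild.
-- ONLINE = ON requires Enterprise Edition and (on 2008) no LOB columns.
ALTER TABLE dbo.MyOltpTable                   -- hypothetical table
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);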


Mike Byrd
Post #909805