Experiments with Data Compression in SQL Server 2008

Data compression is an Enterprise Edition-only feature that was added in SQL Server 2008. It allows you to use either page or row compression on the clustered index or on any non-clustered index of a table. Data compression lets SQL Server trade disk space and I/O pressure for some extra CPU pressure. It is best suited to relatively large tables that are relatively static. Small tables that are highly volatile are not usually good candidates for compression (especially page compression).

I thought I would do a simple experiment to measure the effects of page compression on a table with 2.3 million rows of data. I created four identical tables with the schema shown below. The EventLog table has a non-compressed clustered index, the EventLogCompressed table has a compressed clustered index, the EventLogNew table has a non-compressed clustered index and a non-compressed non-clustered index, and the EventLogNewCompressed table has a compressed clustered index and a compressed non-clustered index. Each table has the exact same data. I wrote four identical stored procedures that hit each of the four tables.
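The bodies of the stored procedures are not shown in the article; a minimal sketch of one of them, with the body guessed from the procedure name and the columns of the covering index shown later, might look like this:

```sql
-- Hypothetical sketch: the actual procedure body was not published.
-- Name and parameter value are taken from the test script below.
CREATE PROCEDURE dbo.GetEventLogByUserID
    @UserID int
AS
BEGIN
    SET NOCOUNT ON;

    -- Filter on UserID and Deleted, matching the key columns of the
    -- covering index defined on the EventLogNew tables
    SELECT EventID, GroupID, PostID, FeedID, EventChange,
           EventObject, EventType, EventSource, CreateDate
    FROM dbo.EventLog
    WHERE UserID = @UserID
      AND Deleted = 0
    ORDER BY CreateDate;
END
```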

This stored procedure causes a clustered index scan on the tables that don’t have a non-clustered index (EventLog and EventLogCompressed), and an index seek on the tables that have the non-clustered index (EventLogNew and EventLogNewCompressed). The tables with Compressed in their name have page compression on each index in the table. For this data, page compression compressed the clustered index by about 4.5 to 1.

Page compression reduced the logical reads for the clustered index scan from 23,941 to 5,545 and cut the estimated query cost by 66%. It also reduced the buffer pool memory consumed by the clustered index scan from 186MB to 43MB. In my mind, this means that having a compressed clustered index gives you some extra protection from the effects of an expensive clustered index scan. More details about these results are shown below:
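If you want to gauge the likely payoff before rebuilding anything, SQL Server 2008 provides a system procedure that samples the data and estimates the space savings. A quick check against the EventLog table's clustered index could look like this:

```sql
-- Estimate page compression savings without actually compressing anything
-- (Enterprise Edition; table name matches the schema in this article)
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'EventLog',
    @index_id         = 1,      -- 1 = clustered index; NULL = all indexes
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = 'PAGE';
```

The result set compares the current size with the estimated compressed size, based on a sample loaded into tempdb.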

-- Table with 2.3 million rows
CREATE TABLE [dbo].[EventLog](
    [EventID] [bigint] IDENTITY(1,1) NOT NULL,
    [UserID] [int] NOT NULL,
    [GroupID] [int] NULL,
    [PostID] [bigint] NULL,
    [FeedID] [int] NULL,
    [EventChange] [int] NOT NULL,
    [EventObject] [int] NOT NULL,
    [EventType] [varchar](50) NOT NULL,
    [EventSource] [int] NOT NULL,
    [DataField] [nvarchar](max) NULL,
    [ExtensionData] [xml] NULL,
    [CreateDate] [datetime] NOT NULL,
    [ParentEventID] [bigint] NULL,
    [Deleted] [bit] NOT NULL,
 CONSTRAINT [PK_EventLog] PRIMARY KEY CLUSTERED 
(
    [EventID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, 
 ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]


-- Non-clustered "covering index"
CREATE NONCLUSTERED INDEX [IX_EventLogNew_Cover1] ON [dbo].[EventLogNew] 
(
    [UserID] ASC,
    [Deleted] ASC,
    [CreateDate] ASC
)
INCLUDE ( [EventID],
[GroupID],
[PostID],
[FeedID],
[EventChange],
[EventObject],
[EventType],
[EventSource]) WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, 
SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, 
ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO
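The article does not show how page compression was enabled on the compressed tables. One way to do it, sketched here on the assumption that the tables and indexes were created uncompressed first, is to rebuild the indexes with the DATA_COMPRESSION option:

```sql
-- Page-compress the clustered index (rebuilds the whole table)
ALTER TABLE dbo.EventLogCompressed
REBUILD WITH (DATA_COMPRESSION = PAGE);
GO

-- Page-compress every index on the table, clustered and non-clustered
ALTER INDEX ALL ON dbo.EventLogNewCompressed
REBUILD WITH (DATA_COMPRESSION = PAGE);
GO
```

Rebuilding a 2.3-million-row table this way is an offline, resource-intensive operation unless ONLINE = ON is also specified.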


-- Return IO Statistics
SET STATISTICS IO ON;

-- Clear out the procedure cache for the instance
DBCC FREEPROCCACHE;
    
-- Don't do this on a production server!
DBCC DROPCLEANBUFFERS;

-- Run four identical stored procedures against
-- identical tables (except for index differences)
PRINT 'Query 1'; -- Clustered Index Scan
EXEC dbo.GetEventLogByUserID 137926;
PRINT 'Query 2'; -- Clustered Index Scan (compressed index)
EXEC dbo.GetEventLogCompressedByUserID 137926;
PRINT 'Query 3'; -- Index seek
EXEC dbo.GetEventLogNewByUserID 137926;
PRINT 'Query 4'; -- Index seek (compressed index)
EXEC dbo.GetEventLogNewCompressedByUserID 137926;

-- Statistics IO output
--Query 1 Clustered index scan against EventLog table
--Table 'EventLog'. Scan count 3, logical reads 23941, physical reads 176, 
--read-ahead reads 23858, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

--Query 2 Clustered index scan against EventLogCompressed table
--Table 'EventLogCompressed'. Scan count 3, logical reads 5545, physical reads 45, 
--read-ahead reads 5509, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

--Query 3 Index seek against EventLogNew
--Table 'EventLogNew'. Scan count 1, logical reads 3, physical reads 3, 
--read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

--Query 4 Index seek against EventLogNewCompressed
--Table 'EventLogNewCompressed'. Scan count 1, logical reads 3, physical reads 2, 
--read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


-- Breaks down buffers used by current database 
-- by object (table, index) in the buffer cache
SELECT OBJECT_NAME(p.[object_id]) AS [ObjectName], 
p.index_id, COUNT(*)/128 AS [buffer size(MB)],  
COUNT(*) AS [buffer_count] 
FROM sys.allocation_units AS a
INNER JOIN sys.dm_os_buffer_descriptors AS b
ON a.allocation_unit_id = b.allocation_unit_id
INNER JOIN sys.partitions AS p
ON a.container_id = p.hobt_id
WHERE b.database_id = DB_ID()
AND p.[object_id] > 100
GROUP BY p.[object_id], p.index_id
ORDER BY buffer_count DESC;

-- Buffer space results
--ObjectName                index_id    buffer size(MB)    buffer_count
--EventLog                    1            186                23855
--EventLogCompressed          1             43                 5508
--EventLogNew                 5              0                   24
--EventLogNewCompressed       2              0                   16
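To confirm which indexes actually ended up compressed after an experiment like this, the data_compression_desc column in sys.partitions reports the compression state of each partition:

```sql
-- Show the compression setting for every index on the compressed tables
SELECT OBJECT_NAME(p.[object_id]) AS [ObjectName],
       i.name AS [IndexName],
       p.data_compression_desc      -- NONE, ROW, or PAGE
FROM sys.partitions AS p
INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
   AND p.index_id = i.index_id
WHERE p.[object_id] IN (OBJECT_ID('dbo.EventLogCompressed'),
                        OBJECT_ID('dbo.EventLogNewCompressed'));
```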

Comments

Posted by Steve Jones on 22 February 2010

That's interesting. Did it change query time substantially? Did CPU usage go up?

Posted by talltop on 26 February 2010

Good read. Also, has anyone noticed that TDE db encryption almost nullifies the compression on backups?

Posted by Glenn Berry on 26 February 2010

In these experiments, the query time was reduced in Query 2 (when the clustered index was compressed) compared to Query 1. CPU utilization did not go up by much, but I did not measure it precisely; I was just watching in Task Manager.

I will re-run the experiment and measure it in PerfMon this time.

Posted by Nicholas Cain on 26 February 2010

Talltop: there are algorithms that look for patterns in the data that they can use to perform the compression. When you enable TDE and encrypt the data you eliminate any patterns and so you won't get backup compression.

Posted by mikebullen on 2 December 2010

Is there a way to encrypt Azure SQL data? TDE is not supported there.
