Process to Compress an Existing Table and Its Indexes

  • Hello,
    I have a number of tables in a database, none of which are compressed, and some are very large. What is a good method to implement compression on these tables?
        On Teradata I do this, and it's pretty standard:
        Copy the table to _Backup
        Grab the DDL of the original prod table and name it _TEMP
        Modify in compression on the table's columns accordingly
        Create the _TEMP
        Insert/select the original prod table into _TEMP
        Drop prod
        Rename _TEMP to the original prod name
    What would be the process on SQL Server via SSMS, and how do I also compress the indexes?
    Thanks,
    JPQ
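
    In SQL Server there is no need for the copy/rename dance above - the rebuild happens in place. A minimal sketch of the equivalent statements, assuming a placeholder table dbo.MyTable with a nonclustered index IX_MyTable_Col1 (names are illustrative, not from the original post):

    -- Page-compress the table itself (the heap or clustered index), in place
    ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- Page-compress a single nonclustered index
    ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- Or page-compress every index on the table in one statement
    ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

    If you prefer the GUI, right-clicking a table in SSMS and choosing Storage > Manage Compression opens the Data Compression Wizard, which can script out these same statements.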

    You can script the statements to set up compression like this:


    -- Generate ALTER TABLE statements to page-compress the heap or clustered index
    -- of every table that is currently uncompressed and has more than 100 rows
    SELECT 'ALTER TABLE [' + s.name + '].[' + t.name + '] REBUILD WITH (DATA_COMPRESSION = PAGE)'
      FROM sys.tables t
        INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
        INNER JOIN sys.partitions p ON t.object_id = p.object_id
      WHERE p.index_id IN (0, 1)          -- heap or clustered index only
        AND p.rows > 100
        AND p.data_compression_desc = 'NONE'
      GROUP BY s.name, t.name
      ORDER BY s.name, t.name

    -- Generate ALTER INDEX statements to page-compress every uncompressed
    -- nonclustered index with more than 100 rows (the heap/clustered index
    -- is already handled by the ALTER TABLE script above)
    SELECT 'ALTER INDEX [' + i.name + '] ON [' + s.name + '].[' + t.name + '] REBUILD WITH (DATA_COMPRESSION = PAGE)'
      FROM sys.indexes i
        INNER JOIN sys.tables t ON i.object_id = t.object_id
        INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
        INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
      WHERE i.index_id > 1                -- nonclustered indexes only
        AND p.rows > 100
        AND p.data_compression_desc = 'NONE'
      GROUP BY s.name, t.name, i.name
      ORDER BY s.name, t.name, i.name

    Add WHERE clause conditions for the table name(s) you want, then copy the generated statements into a new query window and run them.
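
    Before running the rebuilds, it may be worth checking what a table would actually gain from compression. A minimal sketch using sp_estimate_data_compression_savings (dbo and MyTable are placeholder names, not from the original post):

    -- Estimate current vs. compressed size for every index on one table
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'MyTable',
         @index_id         = NULL,    -- NULL = all indexes on the table
         @partition_number = NULL,    -- NULL = all partitions
         @data_compression = 'PAGE';

    It returns the current and estimated compressed sizes per index and partition, which makes it easy to skip objects where the saving would be negligible.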

  • Don't blindly compress tables and indexes, especially tables that are write-intensive. It's more suitable for static tables.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle - Wednesday, March 1, 2017 4:10 AM

    Don't blindly compress tables and indexes, especially tables that are write-intensive. It's more suitable for static tables.

    I agree completely that blindly compressing, or blindly leaving uncompressed, or blindly uncompressing is not ideal.

    I disagree that it's more suitable for static tables than for frequently updated tables - that depends very much on how much compression you get per page (put in a few CHAR(1000) columns that are nearly 100% compressible, for example), how much CPU power you have available, how much drive space and RAM you have (since compressed pages stay in the buffer pool as compressed pages), and so on and so forth.

    I consider it very much an "it depends" answer, though I find that in most environments I deal with, which are IO heavy and CPU light, compressing everything first and looking for what to uncompress later has quite a high net benefit.
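
    If you go that route, one rough way to find the "uncompress later" candidates is to look for compressed indexes that take a lot of writes relative to reads. This is a sketch only - the decision is still a judgment call, and these DMV counters reset when the instance restarts:

    -- Compressed indexes in the current database, ordered by leaf-level write activity
    SELECT s.name AS schema_name, t.name AS table_name, i.name AS index_name,
           p.data_compression_desc,
           os.leaf_insert_count + os.leaf_update_count + os.leaf_delete_count AS leaf_writes,
           os.range_scan_count + os.singleton_lookup_count AS leaf_reads
      FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) os
        INNER JOIN sys.indexes i ON os.object_id = i.object_id AND os.index_id = i.index_id
        INNER JOIN sys.tables t ON i.object_id = t.object_id
        INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
        INNER JOIN sys.partitions p ON i.object_id = p.object_id
                                   AND i.index_id = p.index_id
                                   AND os.partition_number = p.partition_number
      WHERE p.data_compression_desc <> 'NONE'
      ORDER BY leaf_writes DESC;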

