SQL Compression

  • I have a server with about 20-30 databases and none have compression.

I want to script out all existing tables and indexes from the source databases, add compression in the script, and then execute the script on the destination (blank) database.  I then want to import (insert/copy) the data into the destination db/tables - something like the sketch below.  I want to run this one db at a time, of course.  I will then delete the source db.  Has anyone done anything like this before?
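Just to illustrate the kind of generated script I mean (a made-up table; [dbo].[Orders] and SourceDb are placeholders, not real objects):

-- on the destination (blank) database: the table recreated with page compression
CREATE TABLE dbo.Orders
(
    OrderID int NOT NULL,
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
) WITH (DATA_COMPRESSION = PAGE);

-- then the import step (assuming source and destination are on the same instance)
INSERT INTO dbo.Orders (OrderID, OrderDate)
SELECT OrderID, OrderDate
FROM SourceDb.dbo.Orders;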

    Thanks!

Depending on the size of those DBs, and also on how many have "MAX" datatypes, the following would be an option:

restore the current db under a new name.

    add a new filegroup

    USE [master]

BACKUP DATABASE [test] TO DISK = N'C:\sql_server_data\MSSQL12.SQL2014\MSSQL\Backup\test1.logx' WITH COPY_ONLY, NOFORMAT, INIT
, NAME = N'test-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
    GO

    USE [master]
    RESTORE DATABASE [test_compress] FROM DISK = N'C:\sql_server_data\MSSQL12.SQL2014\MSSQL\Backup\test1.logx' WITH FILE = 1,
    MOVE N'test' TO N'C:\sql_server_data\MSSQL12.SQL2014\MSSQL\DATA\test_compress.mdf',
    MOVE N'test_log' TO N'C:\sql_server_data\MSSQL12.SQL2014\MSSQL\DATA\test_compress_log.ldf', NOUNLOAD, STATS = 5

    GO


    USE [master]
    GO
    ALTER DATABASE [test_compress] ADD FILEGROUP [compress_fg]
    GO
    ALTER DATABASE [test_compress] ADD FILE ( NAME = N'compressed', FILENAME = N'C:\sql_server_data\MSSQL12.SQL2014\MSSQL\DATA\compressed.ndf'
    , SIZE = 51200KB , FILEGROWTH = 512000KB ) TO FILEGROUP [compress_fg]
    GO

Then execute the following SQL. As I said, it will not work well with MAX datatypes, as the LOB data will be left behind on the original filegroup. There is a way around that, but the scripting is a bit harder: it requires creating partition schemes and moving those tables onto them (see the sketch at the end of this post).

declare @Sql nvarchar(max)

declare fcur cursor for
select case
    -- if heap we need to create a clustered index on it on the desired fg and drop it after its creation
    when ix1.type_desc = 'HEAP'
        then 'create clustered index [xxx_' + ob1.name + '] on '
            + quotename(sc1.name) + '.' + quotename(ob1.name)
            + ' (' + heapcol.columnname + ') with (data_compression = page, sort_in_tempdb = on) on [compress_fg];' -- if tempdb is restricted in space remove the sort_in_tempdb
            + ' drop index [xxx_' + ob1.name + '] on ' + quotename(sc1.name) + '.' + quotename(ob1.name)
    when ix1.type_desc = 'CLUSTERED'
        then 'create '
            + case when ix1.is_unique = 1 then ' unique ' else '' end
            + ' clustered index ' + quotename(ix1.name) + ' on '
            + quotename(sc1.name) + '.' + quotename(ob1.name)
            + ' (' + indexcols.columnname + ') with (data_compression = page, drop_existing = on, sort_in_tempdb = on) on [compress_fg];' -- if tempdb is restricted in space remove the sort_in_tempdb
    else 'create '
        + case when ix1.is_unique = 1 then ' unique ' else '' end
        + ' index ' + quotename(ix1.name) + ' on '
        + quotename(sc1.name) + '.' + quotename(ob1.name)
        + ' (' + indexcols.columnname + ') with (data_compression = page, drop_existing = on, sort_in_tempdb = on) on [compress_fg];' -- if tempdb is restricted in space remove the sort_in_tempdb
    end
from sys.objects ob1
inner join sys.schemas sc1
    on sc1.schema_id = ob1.schema_id
inner join sys.indexes ix1
    on ix1.object_id = ob1.object_id
    and ob1.is_ms_shipped = 0
-- first non-xml column of the table, used to build the throwaway clustered index on heaps
outer apply (select top 1 quotename(co1.name) as columnname
    from sys.all_columns co1
    inner join sys.types ty1
        on ty1.system_type_id = co1.system_type_id
        and ty1.user_type_id = co1.user_type_id
        and ty1.name not in ('xml')
    where co1.object_id = ob1.object_id
    order by co1.column_id
    ) heapcol
-- comma-separated key column list of the index, in key order
outer apply (select stuff(
    (select ',' + quotename(co1.name)
    from sys.all_columns co1
    inner join sys.index_columns ic1
        on ic1.index_id = ix1.index_id
        and ic1.column_id = co1.column_id
        and co1.object_id = ic1.object_id
    where co1.object_id = ob1.object_id
    order by ic1.index_column_id
    for xml path ('')
    ), 1, 1, '') as columnname
    ) indexcols
order by ob1.name
    , ix1.index_id
;
    open fcur

    fetch next from fcur into @Sql

WHILE @@FETCH_STATUS = 0
BEGIN
    print @Sql
    exec(@Sql)

    FETCH NEXT FROM fcur INTO @Sql
END

    close fcur

/* At this point the data is all on the new FG.
   Shrink the original FG data files and then move the data back to it - this recovers the space saved by compressing the objects.
*/
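-- a rough sketch of that shrink (assumptions: the logical data file name 'test' comes from the restore above,
-- and the 1024 MB target size is only an example - size it to fit the compressed data)
DBCC SHRINKFILE (N'test', 1024);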


-- process the cursor again - this time replace the filegroup with PRIMARY
open fcur

fetch next from fcur into @Sql

WHILE @@FETCH_STATUS = 0
BEGIN
    -- swap the target filegroup before printing, so what is printed is what actually runs
    set @Sql = replace(@Sql, '[compress_fg]', '[PRIMARY]')
    print @Sql
    exec(@Sql)

    FETCH NEXT FROM fcur INTO @Sql
END

    close fcur
    deallocate fcur

    -- and finally drop the FG created for this process
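Something like this, once the second pass has emptied the new filegroup (names assumed from the earlier steps; EMPTYFILE pushes any remaining extents off the file first):

USE [test_compress]
GO
DBCC SHRINKFILE (N'compressed', EMPTYFILE);
GO
ALTER DATABASE [test_compress] REMOVE FILE [compressed];
ALTER DATABASE [test_compress] REMOVE FILEGROUP [compress_fg];
GO

And, for completeness, the partition-scheme trick mentioned above for tables with MAX/LOB columns - a rough sketch only, against a made-up table [dbo].[BigTable] clustered on an int column [id]. A single-partition function/scheme mapped to the new filegroup makes the rebuild carry the LOB data across as well:

-- one partition covering everything, mapped to the target filegroup
CREATE PARTITION FUNCTION pf_move_all (int) AS RANGE RIGHT FOR VALUES ();
CREATE PARTITION SCHEME ps_move_all AS PARTITION pf_move_all ALL TO ([compress_fg]);
GO
-- rebuilding the clustered index onto the scheme moves the LOB data too
CREATE UNIQUE CLUSTERED INDEX [PK_BigTable] ON [dbo].[BigTable] ([id])
WITH (DROP_EXISTING = ON, DATA_COMPRESSION = PAGE) ON ps_move_all ([id]);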

     

Sharing here a T-SQL script to copy data from one database to another. It copies data between tables that have the exact same schema in both databases and at least one row in the source.

You can use it, and modify it further if you wish.

     

    /*
    Author : Brahmanand Shukla
    Date : 28-Oct-2019
    Purpose : T-SQL query to copy the rows of all the tables from one database to another database
    only if they have rows in Source DB and exact same Schema in both Source and Destination DB.
    */

    ; WITH cte_All_Tables_With_Foreign_Key
    -- Get all the tables (from Destination DB) having foreign key. Ignore the self-referencing.
    AS
    (
    SELECT PAR_SCH.[name] AS [Parent_Schema_Name]
    , PAR_TAB.[name] AS [Parent_Table_Name]
    , REF_SCH.[name] AS [Referenced_Schema_Name]
    , REF_TAB.[name] AS [Referenced_Table_Name]
    , FK.[parent_object_id] AS [parent_object_id]
    , FK.[referenced_object_id] AS [referenced_object_id]
    FROM destination.sys.foreign_keys FK
    INNER JOIN destination.sys.tables PAR_TAB
    ON PAR_TAB.[object_id] = FK.[parent_object_id]
    INNER JOIN destination.sys.schemas PAR_SCH
    ON PAR_SCH.[schema_id] = PAR_TAB.[schema_id]
    INNER JOIN destination.sys.tables REF_TAB
    ON REF_TAB.[object_id] = FK.[referenced_object_id]
    INNER JOIN destination.sys.schemas REF_SCH
    ON REF_SCH.[schema_id] = REF_TAB.[schema_id]
    WHERE FK.[type] = 'F'
    AND FK.[parent_object_id] <> [referenced_object_id]
    AND PAR_TAB.type = 'U'
    AND REF_TAB.type = 'U'
    )
    , cte_Find_All_Referenced_Tables_In_Sequence
    /*
    Recursive CTE :

    Find the sequence of each referenced table.
    For e.g Table1 is referenced with Table2 and Table2 is referenced with Table3
    then Table3 should be assigned Sequence as 1,
    Table2 should be assigned Sequence as 2
    and Table1 should be assigned Sequence as 3
    */
    AS
    (
    SELECT FK1.[Parent_Schema_Name]
    , FK1.[Parent_Table_Name]
    , FK1.[Referenced_Schema_Name]
    , FK1.[Referenced_Table_Name]
    , FK1.[parent_object_id]
    , FK1.[referenced_object_id]
    , 1 AS [Iteration_Sequence_No]
    FROM cte_All_Tables_With_Foreign_Key FK1
    LEFT JOIN cte_All_Tables_With_Foreign_Key FK2
    ON FK1.[parent_object_id] = FK2.[referenced_object_id]
    WHERE FK2.[parent_object_id] IS NULL
    UNION ALL
    SELECT FK.[Parent_Schema_Name]
    , FK.[Parent_Table_Name]
    , FK.[Referenced_Schema_Name]
    , FK.[Referenced_Table_Name]
    , FK.[parent_object_id]
    , FK.[referenced_object_id]
    , CTE.[Iteration_Sequence_No] + 1 AS [Iteration_Sequence_No]
    FROM cte_All_Tables_With_Foreign_Key FK
    INNER JOIN cte_Find_All_Referenced_Tables_In_Sequence CTE
    ON FK.[parent_object_id] = CTE.[referenced_object_id]
    WHERE FK.[referenced_object_id] <> CTE.[parent_object_id]
    )
    , cte_All_Tables
    /*
    Merge all tables (such as Tables with Foreign Key and Tables without Foreign Key) along with their sort order.
    Tables without Foreign Key and Tables with root level Foreign Key should be given the least sort order
    */
    AS
    (
    SELECT [Referenced_Schema_Name] AS [TABLE_SCHEMA]
    , [Referenced_Table_Name] AS [TABLE_NAME]
    , 1 AS [SORT_ORDER]
    , [Iteration_Sequence_No]
    FROM cte_Find_All_Referenced_Tables_In_Sequence
    UNION
    SELECT STGA.[Parent_Schema_Name] AS [TABLE_SCHEMA]
    , STGA.[Parent_Table_Name] AS [TABLE_NAME]
    , 2 AS [SORT_ORDER]
    , 0 AS [Iteration_Sequence_No]
    FROM cte_Find_All_Referenced_Tables_In_Sequence STGA
    LEFT JOIN cte_Find_All_Referenced_Tables_In_Sequence STGB
    ON STGB.[referenced_object_id] = STGA.[parent_object_id]
    WHERE STGB.[Iteration_Sequence_No] IS NULL
    UNION ALL
    SELECT SCH.[name] AS [TABLE_SCHEMA]
    , TAB.[name] AS [TABLE_NAME]
    , 3 AS [SORT_ORDER]
    , 0 AS [Iteration_Sequence_No]
    FROM destination.sys.tables TAB
    INNER JOIN destination.sys.schemas SCH
    ON SCH.[schema_id] = TAB.[schema_id]
    LEFT JOIN cte_Find_All_Referenced_Tables_In_Sequence STGB
    ON (STGB.[parent_object_id] = TAB.[object_id] OR STGB.[referenced_object_id] = TAB.[object_id])
    WHERE STGB.[Iteration_Sequence_No] IS NULL
    )
    , cte_Source_Table
    /*
    Get all the tables (from Source DB) along with following attributes :
    1) Full Name in the format <database>.<schema>.<tablename>
    2) No of columns
    3) Complete column list to be used in "INSERT INTO Destination... SELECT .... FROM Source"
    4) Checksum of all the important column attributes forming the Table Schema. This will be used to compare the tables between Source DB and Destination DB having exact Schema.

    *** Table having no rows will be filtered out / ignored.
    */
    AS
    (
    SELECT [TAB].[TABLE_CATALOG]
    , [TAB].[TABLE_SCHEMA]
    , [TAB].[TABLE_NAME]
    , QUOTENAME([TAB].[TABLE_CATALOG]) + '.' + QUOTENAME([TAB].[TABLE_SCHEMA]) + '.' + QUOTENAME([TAB].[TABLE_NAME]) AS [TABLE_FULL_NAME]
    , [COL].[NO_OF_COLUMNS]
    ,
    STUFF(
    (
    SELECT ', ' + QUOTENAME(COL.[COLUMN_NAME]) + ' '
    FROM source.INFORMATION_SCHEMA.COLUMNS COL
    WHERE COL.[TABLE_SCHEMA] = TAB.[TABLE_SCHEMA]
    AND COL.[TABLE_NAME] = TAB.[TABLE_NAME]
    ORDER BY COL.[ORDINAL_POSITION] ASC
    FOR XML PATH ('')
    ), 1, 1, '') AS [COLUMN_NAME_LIST]
    , COL.[COLUMNS_CHECKSUM]
    FROM source.INFORMATION_SCHEMA.TABLES [TAB]
    CROSS APPLY
    (
    SELECT COUNT(1) AS [NO_OF_COLUMNS]
    , SUM(CAST(CHECKSUM([COLUMN_DEFAULT], [IS_NULLABLE], [DATA_TYPE], [CHARACTER_MAXIMUM_LENGTH]
    , [CHARACTER_OCTET_LENGTH], [NUMERIC_PRECISION], [NUMERIC_PRECISION_RADIX]
    , [NUMERIC_SCALE], [DATETIME_PRECISION], [CHARACTER_SET_NAME], [COLLATION_NAME])
    AS NUMERIC)) AS [COLUMNS_CHECKSUM]
    FROM source.INFORMATION_SCHEMA.COLUMNS [COL]
    WHERE [COL].[TABLE_SCHEMA] = [TAB].[TABLE_SCHEMA]
    AND [COL].[TABLE_NAME] = [TAB].[TABLE_NAME]
    ) [COL]
    WHERE EXISTS
    (
    SELECT 1
    FROM source.sys.tables T
    INNER JOIN source.sys.schemas S
    ON S.[schema_id] = T.[schema_id]
    INNER JOIN source.sys.partitions P
    ON P.[object_id] = T.[object_id]
    WHERE S.[name] = [TAB].[TABLE_SCHEMA]
    AND T.[name] = [TAB].[TABLE_NAME]
    AND P.[index_id] < 2
    HAVING SUM(P.[rows]) > 0
    )
    )
    , cte_Destination_Table
    /*
    Get all the tables (from Destination DB) along with following attributes :
    1) Full Name in the format <database>.<schema>.<tablename>
    2) No of columns
    3) Checksum of all the important column attributes forming the Table Schema. This will be used to compare the tables between Source DB and Destination DB having exact Schema.
    4) Attributes indicating whether the table has IDENTITY column. This is to manage the IDENTITY_INSERT ON/OFF
    */
    AS
    (
    SELECT [TAB].[TABLE_CATALOG]
    , [TAB].[TABLE_SCHEMA]
    , [TAB].[TABLE_NAME]
    , QUOTENAME([TAB].[TABLE_CATALOG]) + '.' + QUOTENAME([TAB].[TABLE_SCHEMA]) + '.' + QUOTENAME([TAB].[TABLE_NAME]) AS [TABLE_FULL_NAME]
    , [COL].[NO_OF_COLUMNS]
    ,
    CASE
    WHEN EXISTS
    (
    SELECT 1
    FROM destination.sys.tables T
    INNER JOIN destination.sys.schemas S
    ON S.[schema_id] = T.[schema_id]
    INNER JOIN destination.sys.identity_columns I
    ON I.[object_id] = T.[object_id]
    WHERE S.[name] = [TAB].[TABLE_SCHEMA]
    AND T.[name] = [TAB].[TABLE_NAME]
    AND I.[is_identity] = 1
    )
    THEN 1
    ELSE 0
    END AS [TABLE_HAS_IDENTITY_COLUMN]
    ,
    STUFF(
    (
    SELECT ', ' + QUOTENAME(COL.[COLUMN_NAME]) + ' '
    FROM destination.INFORMATION_SCHEMA.COLUMNS COL
    WHERE COL.[TABLE_SCHEMA] = TAB.[TABLE_SCHEMA]
    AND COL.[TABLE_NAME] = TAB.[TABLE_NAME]
    ORDER BY COL.[ORDINAL_POSITION] ASC
    FOR XML PATH ('')
    ), 1, 1, '') AS [COLUMN_NAME_LIST]
    , COL.[COLUMNS_CHECKSUM]
    FROM destination.INFORMATION_SCHEMA.TABLES [TAB]
    CROSS APPLY
    (
    SELECT COUNT(1) AS [NO_OF_COLUMNS]
    , SUM(CAST(CHECKSUM([COLUMN_DEFAULT], [IS_NULLABLE], [DATA_TYPE], [CHARACTER_MAXIMUM_LENGTH]
    , [CHARACTER_OCTET_LENGTH], [NUMERIC_PRECISION], [NUMERIC_PRECISION_RADIX]
    , [NUMERIC_SCALE], [DATETIME_PRECISION], [CHARACTER_SET_NAME], [COLLATION_NAME])
    AS NUMERIC)) AS [COLUMNS_CHECKSUM]
    FROM destination.INFORMATION_SCHEMA.COLUMNS [COL]
    WHERE [COL].[TABLE_SCHEMA] = [TAB].[TABLE_SCHEMA]
    AND [COL].[TABLE_NAME] = [TAB].[TABLE_NAME]
    ) [COL]
    )
/*
The following SELECT statement will compare the tables between the Source DB and the Destination DB and fetch the tables having the exact same Schema.

*** Salient features of the T-SQL "INSERT INTO Destination... SELECT .... FROM Source" generated by the SELECT query below :
1) SET XACT_ABORT ON is used to roll back the transaction in case of error
2) A TRY .. CATCH block is used for each INSERT to catch the errors
3) The IDENTITY_INSERT setting in the generated query is dynamic
4) All the rows copied from the Source DB to the Destination DB will be written in the same Sort Order as defined by the Primary Key in the Source DB
5) The generated T-SQL query will be well formatted, and each INSERT statement shall be properly commented for better readability
6) The generated T-SQL query shall also include statements showing the status of execution of each query, viz. Started, Completed or Failed

*** Disclaimer :
Executing the generated T-SQL query multiple times will cause duplication of rows, hence take the necessary care.
However, this can be made dynamic and rerunnable if all the tables have a Primary Key.
*/
    SELECT ROW_NUMBER() OVER (ORDER BY TBL_SEQ.[Sort_Order] ASC, TBL_SEQ.[Iteration_Sequence_No] DESC)
    AS [TABLE_SEQUENCE_NO]
    , [DEST_TAB].[TABLE_SCHEMA] AS [TABLE_SCHEMA]
    , [DEST_TAB].[TABLE_NAME] AS [TABLE_NAME]
    ,
    (CASE
    WHEN ROW_NUMBER() OVER (ORDER BY TBL_SEQ.[Sort_Order] ASC, TBL_SEQ.[Iteration_Sequence_No] DESC) = 1
    THEN 'SET XACT_ABORT ON;' + CHAR(13) + CHAR(10) + 'SET NOCOUNT ON;'
    ELSE ''
    END + CHAR(13) + CHAR(10) +
    '-- ***************************************** INSERT SCRIPT FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + ' -******* START *****************' + CHAR(13) + CHAR(10) +
    'SELECT ''INSERT SCRIPT EXECUTION STARTED FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + '''' + CHAR(13) + CHAR(10) +
    'BEGIN TRY' + CHAR(13) + CHAR(10) +
    CASE
    WHEN [DEST_TAB].[TABLE_HAS_IDENTITY_COLUMN] = 1
    THEN CHAR(9) + 'SET IDENTITY_INSERT ' + [DEST_TAB].[TABLE_FULL_NAME] + ' ON;' + CHAR(13) + CHAR(10)
    ELSE ''
    END +
    CHAR(9) + 'INSERT INTO ' + [DEST_TAB].[TABLE_FULL_NAME] + '(' + DEST_TAB.[COLUMN_NAME_LIST] + ')'+ CHAR(13) + CHAR(10) +
    CHAR(9) + 'SELECT ' + DEST_TAB.[COLUMN_NAME_LIST] + ' FROM ' + [SRC_TAB].[TABLE_FULL_NAME] + CHAR(13) + CHAR(10) +
    CHAR(9) +
    CASE
    WHEN [SRC_PK].[CONSTRAINT_NAME] IS NOT NULL
    THEN 'ORDER BY'
    + STUFF(
    (
    SELECT ', ' + QUOTENAME(COL.[name]) + ' ' +
    CASE
    WHEN IX_COL.[is_descending_key] = 1
    THEN 'DESC'
    ELSE 'ASC'
    END
FROM source.sys.indexes IX
INNER JOIN source.sys.index_columns IX_COL
ON IX_COL.[object_id] = IX.[object_id]
AND IX_COL.[index_id] = IX.[index_id] -- must also match on index_id, or key columns of every index on the table get picked up
INNER JOIN source.sys.columns COL
ON COL.[object_id] = IX_COL.[object_id]
AND COL.[column_id] = IX_COL.[column_id]
WHERE IX.[name] = [SRC_PK].[CONSTRAINT_NAME]
ORDER BY IX_COL.[key_ordinal] ASC -- keep the generated ORDER BY in primary-key column order
FOR XML PATH ('')
    ), 1, 1, '')
    ELSE ''
    END + CHAR(13) + CHAR(10) +
    CASE
    WHEN [DEST_TAB].[TABLE_HAS_IDENTITY_COLUMN] = 1
    THEN CHAR(9) + 'SET IDENTITY_INSERT ' + [DEST_TAB].[TABLE_FULL_NAME] + ' OFF;' + CHAR(13) + CHAR(10)
    ELSE ''
    END +
    CHAR(9) + 'SELECT ''INSERT SCRIPT EXECUTION COMPLETED FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + '''' + CHAR(13) + CHAR(10) +
    'END TRY' + CHAR(13) + CHAR(10) +
    'BEGIN CATCH' + CHAR(13) + CHAR(10) +
    CHAR(9) + 'SELECT ''INSERT SCRIPT EXECUTION FAILED FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + '''' + CHAR(13) + CHAR(10) +
    'END CATCH' + CHAR(13) + CHAR(10) +
    '-- ***************************************** INSERT SCRIPT FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + ' -******* END *****************' + CHAR(13) + CHAR(10) +
    'GO')
    AS [TSQL_INSERT_SCRIPT]
    FROM cte_Source_Table [SRC_TAB]
    INNER JOIN cte_Destination_Table [DEST_TAB]
    ON [DEST_TAB].[TABLE_SCHEMA] = [SRC_TAB].[TABLE_SCHEMA]
    AND [DEST_TAB].[TABLE_NAME] = [SRC_TAB].[TABLE_NAME]
    AND [DEST_TAB].[NO_OF_COLUMNS] = [SRC_TAB].[NO_OF_COLUMNS]
    INNER JOIN cte_All_Tables TBL_SEQ
    ON [TBL_SEQ].[TABLE_SCHEMA] = [SRC_TAB].[TABLE_SCHEMA]
    AND [TBL_SEQ].[TABLE_NAME] = [SRC_TAB].[TABLE_NAME]
    LEFT JOIN source.INFORMATION_SCHEMA.TABLE_CONSTRAINTS [SRC_PK]
    ON [SRC_PK].[TABLE_SCHEMA] = [SRC_TAB].[TABLE_SCHEMA]
    AND [SRC_PK].[TABLE_NAME] = [SRC_TAB].[TABLE_NAME]
    AND [SRC_PK].[CONSTRAINT_TYPE] = 'PRIMARY KEY'
    WHERE [SRC_TAB].[COLUMNS_CHECKSUM] = [DEST_TAB].[COLUMNS_CHECKSUM]
    OPTION (MAXRECURSION 10)
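A quick note on usage (my reading of the script itself): replace the source. and destination. database-name prefixes with your actual database names, run the query with results output to text, and then execute the generated [TSQL_INSERT_SCRIPT] column output in a new query window against the same server.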
I have a database with no compression on the tables (source).  I want to create a blank database (destination) where I can create the tables with "data_compression = page" and then insert the data from the source database.  I will want to do this to about 20 databases.  Creating a new blank database will do 2 things: compress the tables and get rid of unused space without fragmentation.  Any help is appreciated.

    Thanks,

    Alex

  • GrassHopper wrote:

I have a database with no compression on the tables (source).  I want to create a blank database (destination) where I can create the tables with "data_compression = page" and then insert the data from the source database.  I will want to do this to about 20 databases.  Creating a new blank database will do 2 things: compress the tables and get rid of unused space without fragmentation.  Any help is appreciated.

    Thanks,

    Alex

    To be honest, this is NOT the thing to do.  Page compression is a bad thing when there are a lot of mid-table inserts and updates.  Review the Microsoft recommendations before you compress any table or index.
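If you do go down that road anyway, at least measure first - SQL Server can estimate the savings before you change anything (run it in the database concerned; the schema/table names below are placeholders):

-- estimate how much PAGE compression would actually save for one table
EXEC sys.sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'SomeBigTable',
    @index_id = NULL,          -- all indexes
    @partition_number = NULL,  -- all partitions
    @data_compression = N'PAGE';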

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • GrassHopper wrote:

I have a database with no compression on the tables (source).  I want to create a blank database (destination) where I can create the tables with "data_compression = page" and then insert the data from the source database.  I will want to do this to about 20 databases.  Creating a new blank database will do 2 things: compress the tables and get rid of unused space without fragmentation.  Any help is appreciated.

    Thanks,

    Alex

What is your core intention behind compression? If it's just saving space, then you can do it on Cold Data (tables/partitions not accessed frequently). But if you intend to do it over Warm Data (tables/partitions accessed frequently), then I will also agree with Jeff Moden and recommend you consider the trade-off of implementing compression; you must review the Microsoft recommendations.

Compression, like other things in this world, is not free. We save storage space, but at the cost of additional CPU overhead: SQL Server uses additional CPU for compression and decompression.

You know your workload best. You have to balance everything and then make a cautious decision. A quick way to spot Cold Data is sketched below.
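A sketch only (run it in the database concerned, and note the DMV counters reset on instance restart):

-- last read/write activity per table since the last restart; never-touched tables sort first
SELECT s.[name] AS [schema_name]
    , t.[name] AS [table_name]
    , MAX(u.[last_user_seek]) AS [last_seek]
    , MAX(u.[last_user_scan]) AS [last_scan]
    , MAX(u.[last_user_update]) AS [last_update]
FROM sys.tables t
INNER JOIN sys.schemas s
    ON s.[schema_id] = t.[schema_id]
LEFT JOIN sys.dm_db_index_usage_stats u
    ON u.[object_id] = t.[object_id]
    AND u.[database_id] = DB_ID()
GROUP BY s.[name], t.[name]
ORDER BY [last_update], [last_scan], [last_seek];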

The data will be used by 1 or 2 people for about a month, and then a small dataset will be extracted from it and only that will be used.  This is why I want to compress the data.  I will look into Microsoft's recommendations as you both stated.  Thank you.
