Reduce size of BAK file after deleting database table records?

  • Hi All,

    Firstly I wanted to wish everyone a Happy Christmas, and I hope someone may be able to help me with my issue 🙂 I have a DB with several tables, one of which receives a constant stream of data from meters (1-minute data). The DB and its backups (.bak) are now getting very large. We decided to create a job that deletes several thousand records every hour to gradually reduce the file size and (we thought) the size of the backups (.bak), but unfortunately the backups are still growing :crying:

    I have looked into using SHRINKDATABASE and Optimize Table <table name>, but to be honest I am unsure what they do and whether they will help. If anyone has any ideas, or could post a link to another post I could read, it would be really helpful. Thanks in advance 🙂

    Kind Regards,

    Craig

    Specifications:

    - Windows Server 2008 R2 Standard (SP1)

    - SQL Server 2008 R2

  • Can you post the table definitions for the tables you are deleting from?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Hi Perry,

    Thanks for the quick reply 🙂 I have attached an excel sheet with the table definitions.

    Hope it's OK.

    Database backups are roughly the size of your data, not the size of your database files (assuming no compression). If the .BAK file is still getting bigger, it's either because the amount of data is growing despite the deletes, or possibly (but unlikely) the backup is doing an append. If this is something like an electricity utility getting meter readings every minute, the inserts could be adding more records than your delete process is removing. Consider using partitioned tables to split the data a bit; then you can just drop one of the partitions instead of doing an expensive and slow delete.

    Leo

    Leo
    Nothing in life is ever so complicated that with a little work it can't be made more complicated.
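A rough sketch of the partition approach described above, assuming a monthly partition scheme on the reading timestamp. All table, function and column names here are hypothetical, and note that table partitioning requires Enterprise Edition:

```sql
-- Hypothetical setup: partition the meter-reading table by month on ReadingTime.
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2012-10-01', '2012-11-01', '2012-12-01');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- To age out the oldest month, switch that partition into an empty staging
-- table (a metadata-only operation, so it is near-instant) and drop the
-- staging table, instead of running a slow, heavily logged DELETE.
-- The staging table must have an identical structure and sit on the same
-- filegroup as the partition being switched out.
ALTER TABLE dbo.MeterReadings SWITCH PARTITION 1 TO dbo.MeterReadings_Staging;
DROP TABLE dbo.MeterReadings_Staging;
```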

  • Good Morning Leo,

    Thanks for the reply 🙂 When taking the backups, in the overwrite media section, I have selected to "Back up to the existing media set" and the sub-selection "Append to the existing backup set". Could this be the problem? I was worried that if I chose to "overwrite all existing backup sets" and the latest backup was corrupted then all would be lost :crying:

    If you could give me some advice on what you think an ideal configuration would be I would love to test it on my test environment 🙂 and I really do appreciate the help.

    Thanks again,

    Craig

    The scripts below can help you see and analyse which tables and indexes are taking up the most space, so you can easily check where the space is actually being consumed.

    For tables: http://www.develop-one.net/blog/2011/06/20/SQLServerGettingTableSizeForAllTables.aspx
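    In case that link goes stale, a query along these lines (an untested sketch, built on the same DMV as the index query below) reports used space per table:

    ```sql
    -- Used space and row count per table, largest first.
    -- index_id 0 = heap, 1 = clustered index; only those carry the row count.
    SELECT
        OBJECT_SCHEMA_NAME(s.[object_id]) AS SchemaName,
        OBJECT_NAME(s.[object_id])        AS TableName,
        SUM(s.used_page_count) * 8        AS UsedSpaceKB,
        SUM(CASE WHEN s.index_id IN (0, 1) THEN s.row_count ELSE 0 END) AS [RowCount]
    FROM sys.dm_db_partition_stats AS s
    GROUP BY s.[object_id]
    ORDER BY UsedSpaceKB DESC;
    ```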

    And for indexes:

    SELECT
        i.name AS IndexName,
        SUM(s.used_page_count) * 8 AS IndexSizeKB
    FROM sys.dm_db_partition_stats AS s
    JOIN sys.indexes AS i
        ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
    WHERE s.[object_id] = OBJECT_ID('dbo.TableName')
    GROUP BY i.name
    ORDER BY i.name;

    SELECT
        i.name AS IndexName,
        SUM(s.page_count * 8) AS IndexSizeKB
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.TableName'), NULL, NULL, 'DETAILED') AS s
    JOIN sys.indexes AS i
        ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
    GROUP BY i.name
    ORDER BY i.name;

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

  • craig.dixon (12/5/2012)


    Good Morning Leo,

    Thanks for the reply 🙂 When taking the backups, in the overwrite media section, I have selected to "Back up to the existing media set" and the sub-selection "Append to the existing backup set". Could this be the problem? I was worried that if I chose to "overwrite all existing backup sets" and the latest backup was corrupted then all would be lost :crying:

    If you could give me some advice on what you think an ideal configuration would be I would love to test it on my test environment 🙂 and I really do appreciate the help.

    Thanks again,

    Craig

    If you're constantly appending to the backups then that is why the file continuously grows. Try implementing an intelligent backup script that creates each backup as a new file, building the name from details such as server name, database name, backup type and a date-time stamp. So for instance:

    Mysqlserver_Mydb_Full_20121205_113000.bak

    Mysqlserver_Mydb_Tlog_20121205_020000.trn
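    A rough sketch of such a script (database name and backup path are placeholders); it builds the file name from the pieces listed above and writes each full backup to a fresh file:

    ```sql
    DECLARE @path  nvarchar(260),
            @stamp char(15);

    -- Build a stamp like 20121205_113000 from the current date and time.
    SET @stamp = REPLACE(REPLACE(REPLACE(CONVERT(varchar(19), GETDATE(), 120),
                         '-', ''), ':', ''), ' ', '_');

    -- Note: @@SERVERNAME contains a backslash for named instances, which
    -- would need replacing before use in a file name.
    SET @path = N'D:\Backups\' + @@SERVERNAME + N'_MyDb_Full_' + @stamp + N'.bak';

    -- WITH INIT overwrites only this newly named file, so every backup stands
    -- alone and old files can be deleted on their own retention schedule.
    BACKUP DATABASE MyDb TO DISK = @path WITH INIT, CHECKSUM;
    ```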

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉
