Log file is growing to 120+ GB for a database of size 20 GB

  • Hi All,

    We have a SQL Server database (size 20 GB) used by a 3rd-party application called SurfControl. We need to purge data from this database, keeping 3 months of historic data.

    Now when we start the purging task, the log file grows to occupy the complete disk space (disk size is 125 GB), as DELETE is a logged command.

    The recovery model of this database is simple.

    Initially we tried with 48 GB of disk, but when the log file grew to 48 GB the purge failed, so we increased the space, thinking that space was the issue.

    But now I realised that there is some other problem. How can deleting from 20 GB of data generate 120+ GB of log and still keep growing?

    I am unable to figure out the root cause of this problem or a way to fix it.

    It would be a great help if someone could throw some light on this.

  • What error are you getting?

    There can be many errors and reasons. Check this out: http://support.microsoft.com/kb/317375

  • Do not run this as a single DELETE. If you do, it is one transaction and one commit, and the whole thing stays in the transaction log until it finishes. You want to delete slices of data, maybe 1,000 rows at a time. That way, your simple recovery model will let the log truncate every few minutes.

  • If you post your current DELETE code, I am sure there are individuals out here who will help you modify it to delete small batches and thereby control the size of your transaction log.

    DELETE TOP (1000) ... (read BOL for more info) is where you may want to look.
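
    A minimal sketch of what that might look like, assuming the history sits in a single table with a datetime column (the table name SurfControlHistory and column name LogDate are placeholders, not the real schema):

    -- Delete in 1,000-row batches so each transaction stays small and,
    -- under the simple recovery model, the log space can be reused between batches.
    DECLARE @rows INT;
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (1000)
        FROM dbo.SurfControlHistory                      -- placeholder table
        WHERE LogDate < DATEADD(month, -3, GETDATE());   -- keep 3 months

        SET @rows = @@ROWCOUNT;   -- 0 once nothing is left to delete

        CHECKPOINT;               -- lets simple recovery truncate the log
    END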

    😎

    The best I would do is select the rows that are to be kept into a new table, rename the old one to something, rename the new one to the original name, and then drop the old table.

    This should be faster and should take less log space.
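
    A rough sketch of that approach (the table name SurfControlHistory and column LogDate are placeholders; note that SELECT ... INTO does not copy indexes, constraints, or triggers):

    -- Copy only the rows to keep; under the simple recovery model,
    -- SELECT ... INTO is minimally logged.
    SELECT *
    INTO dbo.SurfControlHistory_new
    FROM dbo.SurfControlHistory
    WHERE LogDate >= DATEADD(month, -3, GETDATE());

    -- Swap the tables, then drop the old one.
    EXEC sp_rename 'dbo.SurfControlHistory', 'SurfControlHistory_old';
    EXEC sp_rename 'dbo.SurfControlHistory_new', 'SurfControlHistory';

    DROP TABLE dbo.SurfControlHistory_old;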

  • If you are going to create a new table and then rename it, make sure you have the table script of the existing table with all of its objects (indexes, constraints, triggers, permissions).

  • Mkumari (7/8/2008)


    The best I would do is select the rows that are to be kept into a new table, rename the old one to something, rename the new one to the original name, and then drop the old table.

    This should be faster and should take less log space.

    Just remember that to do this, there must be a time when no one is accessing the database. If not, you may lose data that you needed to keep, because the original table could be updated before you can swap the original and the new tables.

    Also, you didn't indicate whether you are deleting data from a single table or multiple tables. That will add to the complexity of the project.

    😎

  • Hi

    You could put your DB into the simple recovery model and then delete small "portions" of data.

    "Keep Trying"

  • Set the recovery model to "bulk logged" during the purge and change it back to simple after you have finished deleting data.
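
    For reference, the switch being described looks like this (YourDatabase is a placeholder name):

    ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED;
    -- ... run the purge here ...
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;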

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉
