June 4, 2012 at 9:52 am
I see that this topic has generated a lot of discussion. I am also in a situation where I want to delete records without the transaction log growing to the point where I run out of disk space. Here is the real-world situation.
I have a db that contains about 350 million records and about 6 years' worth of history. We back up the db nightly, so I have a copy to "roll back" to if I run into any issues. I want to delete about 3 years' worth of data from the db (about 200 million records). I want to save off a copy of the backup first (since, as someone said earlier, storage is cheap). However, as many db admins know, asking IT for increased storage to handle log files can be a chore and, if the work is outsourced, can take several days.
So, in essence, I want to get rid of the records without a lot of internal headache. I have no problem deleting in chunks, but I do not know how much log space the deletes will use, and I face the risk of running out of disk space. Can I temporarily cap the log size without interrupting the deletion process?
I do not have a good explanation as to why the db got so big, other than a bad maintenance plan. However, once I get the db down to a manageable size, an archiving/deletion plan will be put in place.
Any suggestions/comments/thoughts would be appreciated.
June 4, 2012 at 10:03 am
There is no way to delete without logging.
You can set the log to a fixed size, but if it fills, every query that tries to modify the database will fail, which would probably be a bad thing.
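For example (a minimal sketch; MyDb and MyDb_log are hypothetical names):

    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, MAXSIZE = 10GB);

Once the log hits that cap and no space inside it can be reused, modifications fail with error 9002 until the log is backed up or the cap is raised.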
What recovery model? How much do you need to keep vs what you're deleting? Do you have a test DB you can try stuff out on?
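To check the recovery model (the database name here is hypothetical):

    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'MyDb';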
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 4, 2012 at 10:11 am
I would think about setting up a process that deletes a chunk, then takes a log backup. Monitor the size of the backups. If it's low enough, raise the size of the chunk (rough sketch at the end of this post).
Note that your log backups in aggregate will still be large. You can't get away from that, so storage might be an issue somewhere. You could plan this to run right after a full backup, so that if something goes wrong, you can quickly get back to the full backup. And you can whack those log backups if you don't need them.
Or schedule a full backup before and after the process, so that if you need to delete the log backups from the process, you can.
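A rough sketch of that chunked-delete-plus-log-backup loop, assuming FULL recovery; the table, column, database name, and backup path are all hypothetical placeholders:

    DECLARE @batch int = 50000;   -- tune based on the log backup sizes you observe
    DECLARE @cutoff datetime = DATEADD(YEAR, -3, GETDATE());

    WHILE 1 = 1
    BEGIN
        -- Delete one chunk of old rows (dbo.History / EventDate are made-up names).
        DELETE TOP (@batch)
        FROM dbo.History
        WHERE EventDate < @cutoff;

        IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete

        -- Back up the log so the space used by this chunk can be reused.
        -- Repeated backups to the same file append to the backup set by default.
        BACKUP LOG MyDb TO DISK = 'X:\Backups\MyDb_purge.trn';
    END

If each log backup comes back small, raise @batch. Under SIMPLE recovery you'd drop the BACKUP LOG step entirely; checkpoints truncate the log between batches instead.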