halifaxdal (10/1/2013) Jeff Moden (10/1/2013)
I did the following to empty the ldf: 1. Set the recovery model to SIMPLE; 2. dbcc shrinkdatabase('database_name')
This shrank the ldf from 99 GB down to 1 MB.
I then tried to delete redundant records from one of the problematic tables, which has 25,877,065 records. Apparently my command is wrong:
delete table where id > 20877065
The ldf file immediately and dramatically grew to the GB level, and the query is still running. I don't know whether it will fail again by running out of space. Should I keep waiting, or should I terminate it?
And how to remove those redundant records?
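A common way to keep the log small during a large delete, with the database in SIMPLE recovery, is to delete in batches so that each transaction stays small. A minimal sketch, assuming a placeholder table `dbo.YourTable` and the cutoff value from the post above:

```sql
-- Delete in batches so each transaction (and its log usage) stays small.
-- In SIMPLE recovery, log space is reused once each batch commits.
DECLARE @BatchSize INT = 10000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.YourTable        -- placeholder table name
    WHERE id > 20877065;      -- cutoff from the original post

    IF @@ROWCOUNT < @BatchSize BREAK;  -- last (possibly partial) batch done

    CHECKPOINT;  -- encourages log-space reuse between batches
END;
```

The single-statement DELETE in the post logs all ~5 million row deletions in one transaction, which is why the log balloons; batching bounds the active transaction size.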
Heh... you move too fast. Shrinking the database just fragmented every index you have to 99%. If you have Point-in-time backups running, you also just broke the log chain by putting the DB into the SIMPLE recovery mode. If you don't have Point-in-Time backups running, you should seriously consider it even if it's "just" a Dev database.
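To undo the fragmentation the shrink caused, the indexes need to be rebuilt afterwards. A sketch, again with a placeholder table name; the DMV query is optional if you'd rather rebuild selectively:

```sql
-- Rebuild all indexes on one table after a shrink has fragmented them.
ALTER INDEX ALL ON dbo.YourTable REBUILD;

-- Or check fragmentation first and rebuild only what needs it:
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                    AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;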
If you're doing deletes and you didn't take a backup just prior to doing the deletes, you may hate yourself in the morning when someone says you deleted too much or you deleted the wrong data.
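Taking that backup is a one-liner; a sketch with placeholder database and file names:

```sql
-- Full backup immediately before the delete, so the data can be restored
-- if the wrong rows get removed.
BACKUP DATABASE YourDatabase                          -- placeholder name
TO DISK = N'D:\Backups\YourDatabase_PreDelete.bak'    -- placeholder path
WITH INIT, COMPRESSION, CHECKSUM;
```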
Thanks for the advice. I'm comfortable with how I repaired the DB, as it is not an important DB and is used by only a handful of users.
Heh... I've gotta disagree... if it's worth having, it's worth having right. All databases are important or they can be detached and dropped. ;-)
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
Although they tell us that they want it real bad, our primary goal is to ensure that we don't actually give it to them that way.
Although change is inevitable, change for the better is not.
Just because you can do something in PowerShell doesn't mean you should.
Helpful Links:
How to post code problems
How to post performance problems
Forum FAQs