November 21, 2015 at 7:05 pm
Hello folks,
I have a 600 GB drive holding a 570 GB mdf file and a 10 GB ldf file. The DB has varbinary, xml and varchar(max) fields. It holds around 10 years' worth of data, and we were told to delete 5 years' worth (which we did) and then reclaim the space. After the deletion I'm left with only the shrink option, as the business doesn't intend to add more storage since we'll never keep more than 5 years of data.
I did a shrink file of the mdf using the GUI, but after 4 hours I had to terminate it as it didn't seem to be getting anywhere. On top of that, the ldf started to grow during the shrink and would have run the hard drive out of space. I did look at the link below but I'm not sure how to proceed.
http://www.sqlskills.com/blogs/paul/why-lob-data-makes-shrink-run-slooooowly-t-sql-tuesday-006/
I can get a temporary external drive, change the transaction log location to it in the interim, and revert it back after the shrink.
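Something like this is roughly what I had in mind (MyDb, the logical log name and the path are just placeholders):

-- point the log at the external drive (takes effect when the DB comes back online)
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, FILENAME = 'E:\TempLog\MyDb_log.ldf');
ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- copy the .ldf to E:\TempLog\ while the DB is offline, then:
ALTER DATABASE MyDb SET ONLINE;
-- after the shrink, repeat the same steps pointing back at the original location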
I am not a top-notch DBA by any stretch of the imagination, but the role demands that I do the task. Any advice on how to accomplish a shrink of a DB with LOB data?
November 21, 2015 at 9:20 pm
One thing I tried: changing the recovery model to Simple. I stopped mirroring and took a transaction log backup, then restarted the shrink process.
To check the progress, I ran the queries below.
SELECT percent_complete, start_time, status, command,
       estimated_completion_time, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests;

SELECT CASE WHEN ((estimated_completion_time/1000)/3600) < 10
            THEN '0' + CONVERT(VARCHAR(10), (estimated_completion_time/1000)/3600)
            ELSE CONVERT(VARCHAR(10), (estimated_completion_time/1000)/3600)
       END + ':' +
       CASE WHEN ((estimated_completion_time/1000)%3600/60) < 10
            THEN '0' + CONVERT(VARCHAR(10), (estimated_completion_time/1000)%3600/60)
            ELSE CONVERT(VARCHAR(10), (estimated_completion_time/1000)%3600/60)
       END + ':' +
       CASE WHEN ((estimated_completion_time/1000)%60) < 10
            THEN '0' + CONVERT(VARCHAR(10), (estimated_completion_time/1000)%60)
            ELSE CONVERT(VARCHAR(10), (estimated_completion_time/1000)%60)
       END AS [Time Remaining],
       percent_complete,
       *
FROM sys.dm_exec_requests
WHERE percent_complete > 0;
But weirdly, the "Time Remaining" showed 00:10:10, so I was expecting the shrink to be done in 10 minutes and 10 seconds. After 5 minutes I reran the query, and the Time Remaining, which should have dropped to around 5 minutes, now showed 17 minutes!!! Not sure what is going on.
And percent_complete is stuck at 83%.
November 23, 2015 at 6:36 am
My advice is two-fold.
First, don't try to shrink while other people are using the database. Find a maintenance window. Change the db to Single User if you have to. Keep people off or they will cause the shrink process to take forever while they are busy filling up all that empty space you just got in the db.
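If you do end up forcing single-user, it's just something along these lines (YourDb is a placeholder):

-- kick everyone off and keep them off for the maintenance window
ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... do the shrink work ...
-- put it back when the window is over
ALTER DATABASE YourDb SET MULTI_USER;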
Second, do it in small increments. I usually run a manual CHECKPOINT against the database, then shrink larger databases in increments of 1-5% at a time. It goes faster than trying to shrink it all at once. Even though you have to keep doing shrinks (which can be a little annoying), it gives you an idea of how much progress you're making. Plus, with smaller increments, you don't have to cancel your shrink in the middle of it. You can just let it finish, put the db back to multi-user when the maintenance window is up, then pick up where you left off during the next window.
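A rough sketch of what one increment looks like (YourDb, YourDb_Data and the target size are placeholders; the target is in MB):

USE YourDb;
CHECKPOINT;                              -- flush dirty pages before the shrink
DBCC SHRINKFILE (YourDb_Data, 578560);   -- e.g. take the data file down ~5 GB from its current size
-- check how long that took, then repeat with a slightly smaller target on the next pass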
EDIT: I do NOT recommend changing the recovery mode of the database so you can do a shrink. That just screws with your backup chain and doesn't really buy you a whole lot. Even in SIMPLE mode, the transaction log is used. The only difference is you can't recover that data with restores if you leave it that way.
November 23, 2015 at 8:56 am
Brandie,
I would be able to accomplish the first part, which is to put it in single-user mode and have no inbound data during the shrink.
For the second part, I think you are saying: if I originally intended to shrink the DB by 50 GB, I should first shrink it by, say, 3 GB, then once that's done shrink again by 5 GB, and so on in small increments.
The current DB is 600 GB and the shrink is expected to save 60 GB, so something like (target sizes in MB, against the mdf's logical file name):
DBCC SHRINKFILE (mdf_logical_name, 609280)
DBCC SHRINKFILE (mdf_logical_name, 604160)
Wouldn't these multiple shrinks mess up the fragmentation, and isn't there any other way to resolve this when LOB data exists?
November 23, 2015 at 9:02 am
Any shrink is going to fragment indexes. Your issue seems to be that you want to see results sooner rather than later. Also, mini-shrinks do work on larger databases without binding things up for long periods of time.
Shrink in smaller increments; then, after you're done with all the shrinking, update statistics and rebuild the indexes. Both of these will increase the size of the database again, but you shouldn't see it get as large as it currently is.
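In T-SQL terms, that last bit is roughly this (the table name is a placeholder; you'd repeat it per table or script it out for the whole database):

-- after the final shrink, rebuild indexes table by table
ALTER INDEX ALL ON dbo.YourBigTable REBUILD;
-- then refresh statistics database-wide
EXEC sp_updatestats;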
November 23, 2015 at 9:02 am
One shrink, multiple shrinks, both going to cause index fragmentation. Shrink it in small chunks, that's the usual advice for large shrink operations, LOB or no LOB.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 23, 2015 at 9:53 am
One twist in the complexity is that this DB is mirrored. If I were to do the shrink in increments, will the mirrored DB also be shrunk, or do I have to perform the shrink on that DB as well? And being mirrored, would that also increase the time for even the incremental shrinks?
November 23, 2015 at 10:19 am
My best advice is to suspend the mirror while doing the shrinks on the primary. There's not really anything you can do for the secondary, unless you want to break the mirror and recreate it from the smaller primary db.
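Suspending and resuming the mirror is just this (YourDb is a placeholder):

-- on the principal, before the shrink window
ALTER DATABASE YourDb SET PARTNER SUSPEND;
-- ... run the incremental shrinks ...
ALTER DATABASE YourDb SET PARTNER RESUME;

Just remember the log can't clear while the mirror is suspended, so don't leave it suspended any longer than you have to.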
November 23, 2015 at 11:14 am
Shrink commands executed on the primary will get run on the mirror as well, so you shouldn't have to do anything special for the mirrored DB.
Cheers!