What is the best way to shrink/truncate a data file that has over 460GB of unused space?

  • Hi All,

    I am talking here about a DB with a 550GB data file in which around 460GB is unused. I want to shrink that data file to a level where only around 10% of the space is unused; currently over 85% is unused. Below are the options I have tried:

    1) Using DBCC SHRINKFILE with a target size. The plan was to shrink the data file in chunks of 50GB, 7-8 times. -- Didn't work (no error, but it just never executed to completion).

    2) Using Import/Export Data after creating a blank DB, so that only the data would be transferred and not the unused space -- Didn't work (only the tables were imported; other objects such as functions and procedures were not).

    3) Using the Copy Database Wizard, creating the destination database on the fly -- Didn't work (the unused space in the source database was also copied into the new destination DB).

    4) Using DBCC SHRINKFILE with the EMPTYFILE option -- Didn't work (even though I was able to transfer the data, I could not remove the source file because it is part of the primary filegroup).

    5) Using DBCC SHRINKFILE with TRUNCATEONLY -- This option seems to be working. Is there any precaution you would advise (other than a full backup) to make sure that nothing goes wrong on a production server?

    6) The last option left, I guess, is scripting out all the objects of the whole database, which to me is a very expensive option.
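
    Before picking between these, it can help to confirm how much of the file is genuinely free and what option 5 would do. A minimal sketch, where MyBigDB and the logical file name MyBigDB_Data are placeholders to substitute:

    ```sql
    -- Check how much of each data file is actually used.
    USE MyBigDB;

    SELECT name,
           size / 128 AS size_mb,   -- size is stored in 8KB pages
           CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS used_mb,
           (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128 AS free_mb
    FROM sys.database_files;

    -- Option 5: release only the free space at the end of the file,
    -- without moving any pages.
    DBCC SHRINKFILE (MyBigDB_Data, TRUNCATEONLY);
    ```

    One caveat: TRUNCATEONLY only releases space beyond the last allocated extent, so if used pages sit near the end of the file it may reclaim far less than the free space reported above.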

    I request you all to review my options and let me know the best one available. In case anybody knows of any more options, please let me know.

    Thanks in ADVANCE!!

    -- Jaideep

    SQL Server Developer DBA

  • If possible, first of all run DBCC CHECKDB to make sure there are no errors in the database.

    Take a backup, as you have mentioned.

    Perform the shrink in small chunks, i.e. 500 MB to 1 GB first, and then see if it's working.... Also be patient; sometimes it may take longer than you expect. Preferably do it when the server is idle or has very little activity.

    Because sometimes, if you shrink 10-15 GB at a time, it may create problems.
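
    The steps above (integrity check, backup, small shrink) can be sketched as follows; the database name, file name, and backup path are placeholders:

    ```sql
    -- 1) Check integrity first.
    DBCC CHECKDB ('MyBigDB') WITH NO_INFOMSGS;

    -- 2) Take a full backup.
    BACKUP DATABASE MyBigDB TO DISK = N'D:\Backups\MyBigDB_PreShrink.bak';

    -- 3) Shrink by a small amount: the second argument is the TARGET size
    --    in MB, so to remove ~1GB from a 550000MB file, target 549000.
    USE MyBigDB;
    DBCC SHRINKFILE (MyBigDB_Data, 549000);
    ```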

    "There are only 10 types of people in the world:
    Those who understand binary, and those who don't."

  • Is there any time constraint on this operation? Can you keep the users away from the database for an extended period of time? If so, the below list of commands will work; we did that for an 800GB database recently, and fortunately we did not have any corruption either before or after the shrink. But it took about 22 hours to complete.

    DBCC TRACEON(3604)

    -Hope is a heuristic search :smooooth: ~Hemanth
  • Have you done a load of deletes to suddenly free up all this space, or was it simply created oversized to start with? I'd want to know why there's 400GB to chop off the file before I started.

    Anyway, if you want to go ahead, I'd use a script that loops around to take it in small chunks. That way you can take off 10, 100 or 1000MB at a time, thereby controlling how long it's in shrink mode. I've got an example somewhere of a script I use (stolen from some website a while back). If I could work out how to add it to this post, I'd attach it....
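
    The script referred to here was never attached; a minimal sketch of such a chunked-shrink loop, with placeholder database name, file name, and sizes to adapt:

    ```sql
    USE MyBigDB;

    DECLARE @target_mb int, @chunk_mb int, @size_mb int;
    SET @target_mb = 90000;   -- stop once the file reaches this size
    SET @chunk_mb  = 1000;    -- shrink this much per iteration

    SELECT @size_mb = size / 128   -- size is in 8KB pages
    FROM sys.database_files
    WHERE name = N'MyBigDB_Data';

    WHILE @size_mb - @chunk_mb > @target_mb
    BEGIN
        SET @size_mb = @size_mb - @chunk_mb;
        DBCC SHRINKFILE (MyBigDB_Data, @size_mb);
    END;

    -- Final pass down to the exact target.
    DBCC SHRINKFILE (MyBigDB_Data, @target_mb);
    ```

    Keeping the chunk small means each DBCC SHRINKFILE call finishes (and releases its locks) quickly, so you can stop the loop at any point without losing the progress already made.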



    Oh and make sure you have a backup. A working backup that you can restore from before undertaking anything too drastic.

  • Don't forget to rebuild all the indexes afterwards, too.
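
    Since a shrink moves pages from the end of the file into earlier free space, it heavily fragments indexes. A rebuild of everything might look like this (MyBigDB is a placeholder; this builds one ALTER INDEX statement per table):

    ```sql
    USE MyBigDB;

    DECLARE @sql nvarchar(max);
    SET @sql = N'';

    SELECT @sql = @sql + N'ALTER INDEX ALL ON '
                + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                + N' REBUILD;' + NCHAR(10)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

    EXEC sys.sp_executesql @sql;
    ```

    Bear in mind that each rebuild needs free space roughly the size of the index being rebuilt, so the file will grow back somewhat; leaving around 10% free, as the original poster intends, is sensible.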


    "Ya can't make an omelette without breaking just a few eggs" 😉
