Faster way to release unused space back to disk

  • Hi,

    I have a few databases, each with 8 data files allocated at 300 GB but using only 110-120 GB. I want to release the unused space back to disk so that other databases can use it.

    I'm using the DBCC SHRINKFILE option, and it's taking 7-8 hours per data file.

    Is there any other faster way to release the space back to the disk?

    Please suggest.

  • Shrink in small chunks, but then also remember to perform index and statistics maintenance, as index fragmentation will be shot to pieces.
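
    A minimal sketch of the chunked approach, assuming a hypothetical database MyDB with a logical data file named MyDB_Data1 (adjust names and sizes for your environment):

    -- Shrink a data file in 5 GB steps instead of one huge operation
    USE MyDB;

    DECLARE @TargetSizeMB  int = 150000;  -- final size, ~150 GB
    DECLARE @ChunkMB       int = 5000;    -- shrink 5 GB at a time
    DECLARE @CurrentSizeMB int;

    -- sys.database_files reports size in 8 KB pages; divide by 128 for MB
    SELECT @CurrentSizeMB = size / 128
    FROM sys.database_files
    WHERE name = N'MyDB_Data1';

    WHILE @CurrentSizeMB > @TargetSizeMB
    BEGIN
        -- step down by one chunk, but never below the target
        SET @CurrentSizeMB = CASE
            WHEN @CurrentSizeMB - @ChunkMB < @TargetSizeMB THEN @TargetSizeMB
            ELSE @CurrentSizeMB - @ChunkMB
        END;

        DBCC SHRINKFILE (N'MyDB_Data1', @CurrentSizeMB);
    END;

    Afterwards, reorganize or rebuild the affected indexes. Note that a full rebuild needs free space roughly the size of the index and can grow the file right back, so ALTER INDEX ... REORGANIZE is often the safer follow-up to a shrink.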

  • DBCC SHRINKFILE is the only way to release unused space back to the OS.

    Let me guess, either lots of heaps or lots of LOB columns?
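
    If you're not sure which, a quick sketch to check for heaps (index_id = 0 in sys.indexes means the table has no clustered index):

    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
           OBJECT_NAME(i.object_id)        AS table_name
    FROM sys.indexes AS i
    JOIN sys.tables  AS tb ON tb.object_id = i.object_id
    WHERE i.index_id = 0;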

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Just modify the initial size and the auto-growth option.

    Right-click your database, select Properties, then go to Files. You will see the initial size of each data file. Set the initial size of the data or log file to the actual used space plus 20-30 GB of headroom. Do it one file at a time, not all files at once (the ideal time is right after a full backup finishes).

    Then change auto-growth to a fixed size in MB instead of a percentage. I would advise reviewing your database's growth pattern and setting a sufficiently large auto-growth increment, because each auto-growth event causes I/O and can hurt performance when luck is not favoring you. So if you see too many auto-growth events, configure a larger increment (1 GB or 2 GB).
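
    The same changes expressed in T-SQL, as a rough sketch (MyDB and MyDB_Data1 are placeholder names; note that ALTER DATABASE ... MODIFY FILE can only increase SIZE, so reducing a file still goes through DBCC SHRINKFILE):

    -- Reduce the file to actual used space + headroom (target in MB)
    USE MyDB;
    DBCC SHRINKFILE (N'MyDB_Data1', 150000);

    -- Switch auto-growth from a percentage to a fixed 1 GB increment
    USE master;
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = N'MyDB_Data1', FILEGROWTH = 1024MB);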

    ----------
    Ashish

  • It's the LOBs: we have a huge column on a few tables. Is there a better way to maintain LOBs?

    No, it's just that LOB columns massively slow down shrinks because of the way their pages link together.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • I mean, this is the 2nd time I'm doing this. It's a pain to keep checking the SHRINKFILE SPIDs for blocking (see the query sketch below). It takes 6-7 hours for one data file and I have 8 files to go on one database, and I have 6 databases like that on different servers, all in the same condition.

    In your experience, how should I maintain these huge databases that get fragmented so often?
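
    In the meantime, rather than babysitting the shrink SPIDs by hand, progress and blocking can be watched through a DMV. A minimal sketch (shrink operations show up with the command name DbccFilesCompact):

    SELECT session_id,
           command,
           percent_complete,
           estimated_completion_time / 60000 AS est_minutes_left,
           blocking_session_id
    FROM sys.dm_exec_requests
    WHERE command = 'DbccFilesCompact';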

  • Why are you ending up with so much free space that's not going to be reused?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Actually, we allocated extra space while deploying the databases and implementing the application, and now we need that space for new deployments on that server. So, long story short, we need the space that is not being used.

    Also, on a general note, what size determines that a column is a LOB? Can you possibly give an approximate number?

    What are the preferred database settings for these huge databases?

  • Mac1986 (12/18/2012)


    Also, on a general note, what size determines that a column is a LOB? Can you possibly give an approximate number?

    Huh?

    LOB = Large Object: the nvarchar(max), varchar(max), and varbinary(max) data types, plus XML and the deprecated text, ntext, and image types. It's determined by the data type, not by a size threshold.
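
    To see where the LOB columns live, a rough sketch (run it in the database in question; the (max) types report max_length = -1):

    SELECT OBJECT_NAME(c.object_id) AS table_name,
           c.name                   AS column_name,
           t.name                   AS type_name
    FROM sys.columns AS c
    JOIN sys.tables  AS tb ON tb.object_id = c.object_id
    JOIN sys.types   AS t  ON t.user_type_id = c.user_type_id
    WHERE c.max_length = -1
       OR t.name IN ('text', 'ntext', 'image', 'xml');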

    What are the preferred database settings for these huge databases?

    The defaults work most of the time, unless you have a good reason to change them.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • This is the second time? If you shrink a database at all, it should be a one-off job following a one-off large data deletion. If it grows "too large" again, then the DB needs that extra space for its operations.

    One thing to look at is whether the autogrowth settings are too large. If you have a 5 GB DB set to grow 5 GB at a time and one growth leaves it with 48% free space, then you might want to shrink again (to a size that includes a more reasonable amount of free space) and then set the autogrowth increment smaller, to a setting that allows for growth but makes growth events unlikely to happen very often.
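
    To see how much of each file is actually free before picking a shrink target, a quick check (a sketch; run it in the database in question, where size and FILEPROPERTY report 8 KB pages, so divide by 128 for MB):

    SELECT name,
           size / 128                                     AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM sys.database_files;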

  • I have done the things you suggested, but on the next ETL run the data file size ballooned again while the space actually used by the file is far less.

    Can you please suggest what can be done now?

  • Why don't you change your recovery model to "bulk-logged" while your ETLs run?

    ----------
    Ashish

  • Mac1986 (12/21/2012)


    I have done the things you suggested, but on the next ETL run the data file size ballooned again while the space actually used by the file is far less.

    Can you please suggest what can be done now?

    If the space in the data file is being used and the file grows during the ETL, then leave the data file alone; it needs to be the size it is.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • crazy4sql (12/21/2012)


    Why don't you change your recovery model to "bulk-logged" while your ETLs run?

    Not going to affect the size or usage of the data file in any way. The recovery model only changes what gets written to the transaction log, not the data files.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
