Ye Old Shrinkfile

  • First post! Let me start by saying I am a Google-taught DBA only... 🙂

    I am currently running SQL Server 2012 Business Edition on a server with a RAID 5 array for the data partition (850 GB total disk space). The logs partition is a separate mirrored RAID pair. We are currently in the process of buying a new HP server that will have a 3 TB ioDrive2 PCI card in it for storage.

    I run the well-known script every night that reorganizes/rebuilds the indexes on all tables. We use the simple recovery model, with weekly full backups and nightly differential backups.

    So here is the problem.... All the drive bays on the server are full, and the data partition is down to 20 GB free. I had originally planned to be off this server by now, but procuring the new server has taken longer than planned. I put in some nightly scripts to delete old data from the version1 database. I have finished a big code redesign and have been migrating tables and stored procs into the version2 database. DBv2 is now starting to grow on its own and is slowly eating up the 20 GB of free space left on the physical disk. DBv1 has about 300 GB free internally after I truncated and dropped some unneeded tables, so I need to get some space back from DBv1 so that DBv2 can have it.

    Each DB is just one file on the data partition and one file on the log partition. I ran a full backup and then tried running SHRINKFILE with the TRUNCATEONLY option, but recovered no space. The only info I can seem to find on Google about using SHRINKFILE is "DON'T DO IT!", so I am a little freaked out about using it. Most of those pages seem to be referring to the 2008 version.

    My plan is to shut down my major apps that insert a ton of data into the two DBs, perform another full backup, run shrinkfile on DBv1 using the "keep lowering the size in small chunks" method until I get about 200g free on the physical drive again. Then go through all the tables and manually run a reorg on all the indexes. Then full backup again, and turn everything back on...
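
    Concretely, here is roughly what I have in mind. This is only a sketch: the logical file name DBv1_Data and all of the MB numbers below are placeholders that I will swap for whatever sys.database_files reports on my box.

    USE DBv1;
    GO
    -- logical file names and current sizes (size is reported in 8 KB pages)
    SELECT name, size / 128 AS size_mb FROM sys.database_files;
    GO
    -- shrink in small steps instead of one giant request
    DECLARE @target_mb int = 790000;       -- first target, just under the current file size
    WHILE @target_mb >= 550000             -- final target, still leaving free space inside the file
    BEGIN
        DBCC SHRINKFILE (N'DBv1_Data', @target_mb);   -- target size is in MB
        SET @target_mb = @target_mb - 10000;          -- step down roughly 10 GB at a time
    END;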

    What I want is someone more experienced than me to: 1) tell me this is going to work and 2) offer any advice on things I may have missed. I have been monitoring the file growth and unfortunately the server has about 10 days of disk space left and the new server is supposedly two weeks away, but I obviously can't risk it.

    Thanks in advance!

  • Steven W (5/28/2015)


    My plan is to shut down my major apps that insert a ton of data into the two DBs, perform another full backup, run shrinkfile on DBv1 using the "keep lowering the size in small chunks" method until I get about 200g free on the physical drive again.

    Should work, may take many hours.

    Don't shrink the file to the minimum, leave some free space or the first thing SQL will do is grow the file again as data changes.
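
    For example, if the file has roughly 300 GB free inside it, stop at a target that still leaves a decent chunk of that as headroom (the file name and the number here are purely illustrative):

    -- target_size is in MB; pick one comfortably above the used space in the file
    DBCC SHRINKFILE (N'DBv1_Data', 550000);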

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Quick question, how many tables are in the database?

    My normal approach to this kind of problem is to add a filegroup / data file, move the data to the new filegroup and then run DBCC SHRINKFILE (<logical data file name>, TRUNCATEONLY)
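
    Something along these lines; the filegroup name, file path, sizes and the table/index names are just placeholders for whatever fits your setup:

    -- 1) add a new filegroup and a data file for it
    ALTER DATABASE DBv1 ADD FILEGROUP FG2;
    ALTER DATABASE DBv1 ADD FILE
        (NAME = N'DBv1_Data2', FILENAME = N'D:\SQLData\DBv1_Data2.ndf', SIZE = 10GB)
        TO FILEGROUP FG2;

    -- 2) move a table by rebuilding its clustered index onto the new filegroup
    CREATE UNIQUE CLUSTERED INDEX PK_SomeTable
        ON dbo.SomeTable (SomeTableID)
        WITH (DROP_EXISTING = ON)
        ON FG2;

    -- 3) reclaim the space left behind in the original file
    DBCC SHRINKFILE (N'DBv1_Data', TRUNCATEONLY);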

    😎

  • 152 tables....

    But with only 20g free... I don't think I would have room to move the other tables over... The next smallest table is 40g....

  • Steven W (5/29/2015)


    152 tables....

    But with only 20g free... I don't think I would have room to move the other tables over... The next smallest table is 40g....

    Then the approach has to be a mixture of the two: shrink - move - shrink - move, etc. The reason for moving the tables to a new filegroup/file is that after the shrink they are like frogs in a blender; moving them to another file/filegroup at least will "unblend" them, so to speak.

    😎

  • Is it just the indexes that get fragmented? If so, can I just drop and recreate them all?

  • Steven W (5/29/2015)


    Is it just the indexes that get fragmented? If so, can I just drop and recreate them all?

    For clustered indexes (non-heap tables), no; for non-clustered indexes, yes; for heaps, no. The shrink reads the file from the end and moves the "last" pages into "free" space nearer the front (slightly simplified), so it is a kind of inside-out operation, much like a sock turned the wrong way around.
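
    To see what the shrink did and fix it afterwards, something like this should do it (the table and index names are placeholders, and sys.dm_db_index_physical_stats can take a while on big tables):

    -- check fragmentation after the shrink
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(N'DBv1'), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- clustered index: rebuild it in place rather than dropping it
    ALTER INDEX PK_SomeTable ON dbo.SomeTable REBUILD;

    -- non-clustered index: a rebuild achieves the same thing as drop/recreate
    ALTER INDEX IX_SomeTable_SomeColumn ON dbo.SomeTable REBUILD;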

    😎

  • I don't use any clustered indexes actually... all of mine are non-clustered...

  • Steven W (5/29/2015)


    I don't use any clustered indexes actually... all of mine are non-clustered...

    That's not making things easier. May I ask why not?

    😎

  • I guess because I had never used SQL Server before and they sounded odd to me, so I've just always used non-clustered. Sounds like I should reconsider?

    Check out the uptime my server had.... 🙂

    Finally finished installing all security updates and am now attempting to shrink the file... Wish me luck!

  • I started the shrink file last night but I forgot to script it and instead just hit the "OK" button. I see the task running in activity monitor. If I wanted to stop it, could I just kill the process there? I think I set it to shrink too much space at once and was thinking about setting the amount to shrink a little lower.

  • Steven W (5/30/2015)


    I started the shrink file last night but I forgot to script it and instead just hit the "OK" button. I see the task running in activity monitor. If I wanted to stop it, could I just kill the process there? I think I set it to shrink too much space at once and was thinking about setting the amount to shrink a little lower.

    You can safely kill the process to stop it; that will not cause any problems. I suggest you run DBCC SHRINKFILE with the TRUNCATEONLY parameter immediately after killing the job, to reclaim the space already freed up within the file.
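
    Roughly like this; the session id is only an example, DBv1_Data stands in for your logical data file name, and the shrink normally shows up in sys.dm_exec_requests with a command starting with "Dbcc" (typically DbccFilesCompact):

    -- find the session running the shrink
    SELECT session_id, command, percent_complete
    FROM sys.dm_exec_requests
    WHERE command LIKE N'Dbcc%';

    -- stop it (replace 53 with the session_id returned above)
    KILL 53;

    -- then release the space the shrink has already consolidated at the end of the file
    DBCC SHRINKFILE (N'DBv1_Data', TRUNCATEONLY);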

    😎

  • I ran the following query to see the progress of the command...

    select T.text, R.Status, R.Command, DatabaseName = db_name(R.database_id)
         , R.cpu_time, R.total_elapsed_time, R.percent_complete
    from sys.dm_exec_requests R
    cross apply sys.dm_exec_sql_text(R.sql_handle) T

    It says 86.43314 percent complete now. But about two hours ago it was at 86.34299 percent complete. Since I just started the command yesterday evening, it seems like it must slow down near the end or something...

  • Steven W (5/29/2015)


    I don't use any clustered indexes actually... all of mine are non-clustered...

    Errr.. wha?

    That's a really odd decision seeing as the SQL storage engine is designed with the idea that most tables would have clustered indexes.
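
    When you get some breathing room (ideally on the new server), turning a heap into a clustered table is usually a single statement per table; the names here are purely illustrative:

    -- give the heap a clustered index on its most commonly used key
    CREATE UNIQUE CLUSTERED INDEX PK_Orders
        ON dbo.Orders (OrderID);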

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
