Need to get some disk space back from a table, need to shrink...

  • OK, I know all the reasons I shouldn't be shrinking a database, but there's really no other option. Also, this is a one-time only shrink! Recently I had to add a clustered index to a Heap of about 1/2 billion rows. Problem being, there wasn't enough disk space to just add the CI. BUT there was enough to "duplicate" the table into a new table with the CI already defined. Once this process completed, we truncated and dropped the old table.

    This freed up about 80GB within the DB file. The problem is, we now only have about 8GB free on the disk, and the SAN admin was complaining the other day about the storage being used by the various servers. So I can't request more storage for this VM...

    I've read the articles on why shrinking is a bad idea (the fragmentation that *will* result), and I want to minimize both the index rebuild time and the log usage.

    So my thinking is to shrink the file by a few GB at a time (maybe 10GB), then do an index rebuild, then repeat. If the shrink at that size doesn't take too long, I may do several passes in a row. The DB is in Simple recovery, so this should also keep the log from exploding. I can work on this over the weekend without worrying about the users.

    I'm going to leave some free space in the file to avoid growth later, and the DB doesn't grow all that much (so far it seems they add X rows and delete Y rows, and X almost equals Y).

    Does this sound like it should work?
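
    Roughly what I have in mind is below (the logical file name, table name, and sizes are just placeholders, not the real values):

    -- Shrink the data file ~10GB at a time, then rebuild at the end.
    -- Get the real logical name from sys.database_files first.
    DECLARE @TargetMB  int = 150000;  -- size I want the file to end up at (MB)
    DECLARE @StepMB    int = 10240;   -- ~10GB per shrink pass
    DECLARE @CurrentMB int;

    SELECT @CurrentMB = size / 128 FROM sys.database_files WHERE name = N'MyDB_Data';

    WHILE @CurrentMB - @StepMB > @TargetMB
    BEGIN
        SET @CurrentMB = @CurrentMB - @StepMB;
        DBCC SHRINKFILE (N'MyDB_Data', @CurrentMB);
        -- could also rebuild between passes here, if fragmentation gets bad enough to matter
    END;

    DBCC SHRINKFILE (N'MyDB_Data', @TargetMB);

    -- Clean up the fragmentation the shrinks cause (the big table with the new CI)
    ALTER INDEX ALL ON dbo.MyBigTable REBUILD;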

    Thanks,

    Jason

  • It would be best to shrink the database file straight to the target size you want, and then run a reindex or reorganize on all tables.

    For the larger tables, you will probably want to run an index reorganize to prevent a reindex from expanding the data file again.

    You can use this script to shrink your database data file:

    Shrink DB File by Increment to Target Free Space

    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=80355
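
    Stripped down, the approach looks something like this (the logical file name, target size, and table names are placeholders):

    -- Shrink straight to the target size, in MB
    DBCC SHRINKFILE (N'MyDB_Data', 150000);

    -- Big table: REORGANIZE works in place, so it doesn't need the extra free
    -- space a full REBUILD does and shouldn't grow the file back out.
    ALTER INDEX ALL ON dbo.MyBigTable REORGANIZE;

    -- Smaller tables can just be rebuilt.
    ALTER INDEX ALL ON dbo.SmallTable REBUILD;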

  • Another option is to find the largest table in the database and multiply its size by 1.5. Once you have that number, shrink the file so that at least that much free space is left in the data file.
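
    Something along these lines will give you that number (standard catalog views; the 1.5 multiplier is the only assumption):

    -- Largest table in the database and 1.5x its size, in MB
    SELECT TOP (1)
           OBJECT_SCHEMA_NAME(p.object_id) + '.' + OBJECT_NAME(p.object_id) AS table_name,
           SUM(a.total_pages) * 8 / 1024.0       AS table_mb,
           SUM(a.total_pages) * 8 / 1024.0 * 1.5 AS free_space_to_leave_mb
    FROM sys.partitions AS p
    JOIN sys.allocation_units AS a
      ON a.container_id = p.partition_id
    GROUP BY p.object_id
    ORDER BY SUM(a.total_pages) DESC;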

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • SQLRNNR (5/7/2013)


    Another option is to find the largest table in the database and multiply its size by 1.5. Once you have that number, shrink the file so that at least that much free space is left in the data file.

    Which sounds like a nice round number to me. It doesn't free up quite as much as I'd like, but it still frees up about 30GB.

  • jasona.work (5/7/2013)


    SQLRNNR (5/7/2013)


    Another option is to find the largest table in the database and multiply its size by 1.5. Once you have that number, shrink the file so that at least that much free space is left in the data file.

    Which sounds like a nice round number to me. It doesn't free up quite as much as I'd like, but it still frees up about 30GB.

    I should probably have included why to do that in my post.

    The reason is to avoid the repeated I/O penalty of growing and shrinking the file. If you leave that much free space, the reindex operation can (hopefully) run without causing the data file to grow again.
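
    You can sanity-check the headroom in the file before and after the reindex with the standard file DMV:

    -- Free space inside each file of the current database
    SELECT name,
           size / 128.0 AS file_size_mb,
           size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS free_space_mb
    FROM sys.database_files;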

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • So just one big shrink operation, rebuild the CI and NCIs, and done, correct?

  • That would be correct. After the shrink, you will need to rebuild due to the fragmentation the shrink will cause.
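
    A quick way to see how bad the fragmentation got after the shrink (the table name is a placeholder):

    -- Fragmentation on the table's indexes after the shrink
    SELECT i.name AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyBigTable'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id;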

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Gotcha.

    If it weren't for the disk space, I wouldn't be bothering with this, but...

    Come to find out, part of the reason this wasn't an issue with the previous table is that it was page-compressed...

    I'm kicking around the idea of doing this to the new table, but my concern, after reading up on it on MSDN, is that it will essentially (again) duplicate the table while it compresses it, which would put me right back where I started...

    But, sp_estimate_data_compression_savings estimates it will cut the size down to 1/3 of what it is now, so even if it only halves it, that's still a lot of disk back...
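
    For reference, the estimate call, and what the actual compression step would look like if I go ahead with it (the table name is a placeholder; the rebuild does need workspace for the new copy while it runs):

    -- Estimate how much smaller PAGE compression would make the table
    EXEC sys.sp_estimate_data_compression_savings
         @schema_name      = N'dbo',
         @object_name      = N'MyBigTable',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = N'PAGE';

    -- The actual compression, if the disk can take the temporary hit:
    -- rebuilds the clustered index with PAGE compression (needs free space for
    -- the new, smaller copy while the old one still exists).
    ALTER TABLE dbo.MyBigTable REBUILD WITH (DATA_COMPRESSION = PAGE);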

    Thanks again!

    Jason

  • Well, thanks for the suggestions, but after some continued research it turns out the other DBs on the server live on another drive entirely.

    Plus I freed up ~40GB by deleting my backup file from the weekend when I started trying to add the Clustered Index.

    So, no shrinking, no compression for me!

  • That's always a nice surprise

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • SQLRNNR (5/7/2013)


    That's always a nice surprise

    The benefits of inheriting a SQL infrastructure and having to hit the ground running at full speed...

    You occasionally get nice surprises from things you didn't check out as completely as you could've...

    🙂
