OK, I know all the reasons I shouldn't be shrinking a database, but there's really no other option. Also, this is a one-time-only shrink! Recently I had to add a clustered index to a heap of about half a billion rows. The problem was that there wasn't enough disk space to just add the CI, but there *was* enough to "duplicate" the table into a new table with the CI already defined. Once this process completed, we truncated and dropped the old table.
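For reference, the duplicate-and-swap looked roughly like this (the table, column, and index names here are made-up placeholders, simplified down to two columns):

```sql
-- Sketch only: dbo.BigTable / dbo.BigTable_New and the columns are placeholders.
CREATE TABLE dbo.BigTable_New
(
    Id      bigint       NOT NULL,
    Payload varchar(100) NULL,
    CONSTRAINT PK_BigTable_New PRIMARY KEY CLUSTERED (Id)
);

-- Under SIMPLE recovery, TABLOCK can get this insert minimally logged
-- (by default on SQL Server 2016+; older versions may need trace flag 610).
INSERT INTO dbo.BigTable_New WITH (TABLOCK) (Id, Payload)
SELECT Id, Payload
FROM dbo.BigTable;   -- the original heap

-- After verifying the copy:
TRUNCATE TABLE dbo.BigTable;   -- empty it cheaply first
DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
```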
This freed up about 80GB within the DB file. The problem is, we now only have about 8GB free on the disk, and the SAN admin was complaining the other day about the storage being used by the various servers. So I can't request more storage for this VM...
I've read the articles on why shrinking is a bad idea (the fragmentation that *will* result), and I want to try to minimize both the index rebuild work and the log usage.
So my thinking is to shrink the file a few GB at a time (maybe 10GB), then do an index rebuild, then repeat. If a shrink of that size doesn't take too long, I may do several in a row before rebuilding. The DB is in SIMPLE recovery, so this should also keep the log from exploding. I can work on this over the weekend without worrying about the users.
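In other words, each pass would look something like this (file name, target size, and index name are placeholders carried over from the sketch above):

```sql
USE MyDb;

-- Step 1: shrink the data file by ~10GB; the second argument is the
-- target size of the file in MB after the shrink.
DBCC SHRINKFILE (N'MyDb_Data', 250000);   -- e.g. shrink down to ~244GB

-- Step 2: rebuild the clustered index the shrink just fragmented.
ALTER INDEX PK_BigTable_New ON dbo.BigTable REBUILD;
-- or, to hit every index on the table:
-- ALTER INDEX ALL ON dbo.BigTable REBUILD;
```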
I'm going to leave some free space in the file to avoid regrowth later, and the DB doesn't grow all that much (so far it seems they add X rows and delete Y rows, where X is almost equal to Y).
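To pick sensible target sizes, I'll first check how much free space is actually inside the data file with the standard catalog view (nothing here is specific to my setup):

```sql
-- size and FILEPROPERTY(..., 'SpaceUsed') are both in 8KB pages; /128 => MB.
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';   -- data files only
```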
Does this sound like it should work?