Interesting technique, this might prove useful thanks.
I feel that the number of situations in which this will help is quite limited, though... How many people have 120GB databases that really only need 20GB of space?
In many cases, databases have auto-grown to such sizes, and analysis of that growth is essential. If I may be so bold, you would do well to highlight the need for an in-depth analysis of the database's size before recommending people jump in and shrink stuff, as this is, in my experience, only occasionally the best course.
If the database was over-provisioned, it would be using the provisioned space until it needed to grow, so sysadmins would not have noticed space 'running out' because it wasn't growing (unless it was a shared server and other databases' growth was causing the problem - again, in that case, picking a database at random to shrink as a sort of sacrificial lamb is a dangerous proposition; the developers/vendors may well have had a damned good reason to ask for it to be provisioned at that size).
So the question people should be asking - before they dive in and shrink what they think is an over-provisioned database - is: why has a database file with only a few gig in it grown to 125,682MB? (OK, I understand that in this case, if the DBA literally specified "128GB", it might have created a weird-sized file like that when displayed in MB.)
If this file has actually grown to this size despite only containing a small amount of data, we need to ask whether it is still growing and, if it is, why.
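A quick way to sanity-check this before touching anything is to compare allocated size against actually used space per file. This is just a sketch using the standard `sys.database_files` catalog view and `FILEPROPERTY` (both report in 8KB pages, hence the division by 128 to get MB):

```sql
-- Allocated vs used space for each file in the current database.
-- 'size' and 'SpaceUsed' are in 8KB pages, so /128 converts to MB.
SELECT
    name,
    size / 128                                      AS allocated_mb,
    FILEPROPERTY(name, 'SpaceUsed') / 128           AS used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) / 128  AS free_mb
FROM sys.database_files;
```

If `free_mb` is large and stays large across the reporting cycle, shrinking may be reasonable; if it gets eaten and regrows, you've found your answer.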
In my experience, stuff like this is often caused by something like analytics stored procedures that create local tables instead of using tempdb when generating reports and metrics. As the end-of-day, end-of-week or end-of-month reports run, loads of new tables (and probably indexes) are created in the PRIMARY filegroup by some sprocs, data is mashed, reports are created, and then the new tables are 'tidied' and deleted afterwards.
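To illustrate the pattern I mean (table and column names here are hypothetical), the fix is usually as simple as switching the work table to a local temporary table so the churn lands in tempdb rather than the user database's PRIMARY filegroup:

```sql
-- Growth-causing pattern: a permanent work table created in the
-- user database, dropped after the report runs. The space it
-- allocated stays in the data file.
CREATE TABLE dbo.MonthEndWork (CustomerID int, Total money);
-- ... mash data, build report ...
DROP TABLE dbo.MonthEndWork;

-- Same logic with a local temp table: the work happens in tempdb
-- instead, so the user database file doesn't need to auto-grow.
CREATE TABLE #MonthEndWork (CustomerID int, Total money);
-- ... mash data, build report ...
DROP TABLE #MonthEndWork;
```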
So some DBAs will go to all this effort to shrink the database, then on the following day, week or month the database will grow right back up to 125,000+ MB, and they will be back to square one, having wasted some time and slowed down the reports because the database had to auto-grow again to accommodate them.
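You can catch this grow-back-after-shrink cycle in the act by checking the default trace for auto-grow events. A sketch, assuming the default trace is still enabled (EventClass 92 and 93 are the documented Data File Auto Grow and Log File Auto Grow events):

```sql
-- Recent auto-grow events from the default trace.
DECLARE @path nvarchar(260) =
    (SELECT path FROM sys.traces WHERE is_default = 1);

SELECT StartTime, DatabaseName, FileName, EventClass
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass IN (92, 93)  -- data / log file auto grow
ORDER BY StartTime DESC;
```

A cluster of these events right after the month-end reports run tells you the shrink was pointless.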
I apologise if this seems overly critical; that is not the intent. It's an interesting technique for shrinking a database file and reducing the impact this has on a production environment - one I may well use in future. I'm just concerned that a database or file shrink is not very often an advisable practice!
^ That's me!
01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010