Jesus, this article is downright dangerous. Follow it at your own peril, and only after making sure you have an updated resume.
Starting off by claiming that SHRINKFILE magically causes contention correlated with how many pages are in your database is ridiculous; remember, the article quotes 50,000 pages or 400MB. SHRINKFILE only locks one page at a time, while that page is being moved, not permanently, and I've seen it run on extremely busy databases for hours at a time with zero impact. That's not to say there isn't some workload out there that won't tolerate it, but for our enterprise we consider it a safe operation when required.
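For what it's worth, the way we run it is in small increments, so each pass only has to move a little data before releasing the file space. A sketch of that approach (the database name, file name, and target sizes here are placeholders, not from the article):

```sql
-- Sketch only: shrink a data file in small steps rather than one big pass.
-- 'MyDatabase' / 'MyDatabase_Data' and the sizes are illustrative placeholders.
USE MyDatabase;

-- Check current file sizes first (size is reported in 8KB pages).
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Step the file down toward the target, e.g. 500MB at a time.
-- Each call moves pages one at a time, locking each only while it is moved.
DBCC SHRINKFILE (N'MyDatabase_Data', 119500);  -- target size in MB
DBCC SHRINKFILE (N'MyDatabase_Data', 119000);
-- ...repeat, stepping down to the desired size.
```

Small steps also mean you can stop at any point without losing the space already reclaimed.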
However, it's the proposed solution that gets me.
The listed procedure will take out shared locks on an entire table for hours while selecting the data into another table; kill any running processes during the switchover (stored procedures with recompiled portions, and any updates in snapshot isolation); lose data updated and inserted between the two steps; and rely on further scripting to make sure no permissions are lost. That's a real liberal definition of "no business impact"!
Pretty sure I know which option the CFO is going to go for.
I feel that the number of situations in which this will help is quite limited though... How many people have 120GB databases that really only need 20GB of space?
I can answer this. An enterprise solution which does constant ETL and stores some blobs in a heap table. It was soon consuming 150GB. It took me a long while to work out what was going on (there were a lot of red herrings) and notify the vendor, who had no idea it could happen and, frankly, wasn't very interested in fixing it. It now has a maintenance window to add/remove a clustered index once a week. Shrinks down to 20GB.
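For anyone stuck with a similar heap, the weekly job is essentially just the following. This is a sketch under stated assumptions: `dbo.BlobHeap`, the `Id` column, the index name, and the file name are all hypothetical placeholders for whatever your vendor's schema actually uses.

```sql
-- Sketch: building a clustered index on a heap rewrites the table compactly,
-- reclaiming the space the heap never gives back; dropping the index
-- returns the table to a heap for the vendor's ETL to carry on with.
CREATE CLUSTERED INDEX CIX_BlobHeap_Weekly
    ON dbo.BlobHeap (Id);

DROP INDEX CIX_BlobHeap_Weekly
    ON dbo.BlobHeap;

-- Then shrink the data file to release the reclaimed space to the OS.
DBCC SHRINKFILE (N'MyDatabase_Data', 20480);  -- target in MB (~20GB)
```

Run it in a maintenance window: the index build needs free space roughly the size of the table and blocks writes while it runs.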