I have a similar problem. Our largest database recently went on a diet. I had been urging the business to archive the really old data in the biggest tables. I got permission and created an archive version of the database with the key tables that were going to be archived. All of that went smoothly, but then we ran into space issues on the drive because the primary database didn't release any space.
I attempted to run a DBCC SHRINKFILE with TRUNCATEONLY, but that didn't reclaim any space (it did run really quickly, though!). As I understand it (this is from my earlier days as a mainframe DBA), you can think of your .mdf file as a big bucket filled with odd-sized objects. If your bucket's "high used mark" is near the top (i.e., the end of the file in SQL Server), releasing unused space won't do very much. But if you remove a lot of those odd-sized objects and the high used mark drops much lower, you can lower the size of your bucket (i.e., reclaim space from your .mdf file). In my case, I had to try something different from the TRUNCATEONLY option because there were data pages toward the end of the file.
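For reference, the check-then-truncate sequence looks something like this (the logical file name `MyBigDB_Data` is just a placeholder for your own):

```sql
-- How high is the "high used mark"? FILEPROPERTY('SpaceUsed') and size are
-- both in 8 KB pages, so divide by 128 to get MB.
SELECT name,
       size / 128                                     AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files;

-- Release only the free space *after* the last used page. Fast, but a
-- near no-op if data pages sit close to the end of the file.
DBCC SHRINKFILE (N'MyBigDB_Data', TRUNCATEONLY);
```

If `free_mb` is large but the truncate reclaims almost nothing, that's the tell-tale sign that data pages are sitting near the end of the file.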
I then attempted the ShrinkFile like you did, and after two days I killed it. I do believe it was legitimately defragging the file so it could be shrunk, but as another responder on this thread mentioned, there is a lot of overhead when you are defragging a data file block by block. Sometimes it has to move another block just to get the target block into its slot. I found better results when I dropped down to 2-3 GB reductions per ShrinkFile. That took about 45 minutes per GB, so if I wanted to keep going until I reclaimed all the unused space, I would be screwing with this thing for days.
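What worked for me, roughly, was a stepped shrink instead of one giant target. A sketch of the idea (file name, step size, and target are placeholders to adjust for your own database):

```sql
-- Shrink in ~2 GB passes instead of one huge operation, so each pass
-- finishes (and can be stopped) in a reasonable time.
DECLARE @target_mb  int = 500000;   -- where we ultimately want to end up
DECLARE @step_mb    int = 2048;     -- ~2 GB per pass
DECLARE @current_mb int;

SELECT @current_mb = size / 128     -- size is in 8 KB pages
FROM sys.database_files
WHERE name = N'MyBigDB_Data';

WHILE @current_mb > @target_mb
BEGIN
    -- step down, but never below the final target
    SET @current_mb = CASE WHEN @current_mb - @step_mb > @target_mb
                           THEN @current_mb - @step_mb
                           ELSE @target_mb END;
    DBCC SHRINKFILE (N'MyBigDB_Data', @current_mb);  -- target size in MB
END;
```

The nice side effect of small steps is that if you kill it partway through, you keep whatever space the completed passes already released.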
So you can have an enormous amount of work being done and see very little result. So much so that there almost doesn't seem to be enough reason to do it. This is especially true when you find out that all the reorganizing of the pages is actually ***causing new fragmentation***. You see, you might have a lot of wasted space, but the data can still be laid out fairly sequentially. The ShrinkFile takes pages at the logical and physical end of that sequence and moves them into open spaces at the physical beginning of the file. Your users will be pleased that you saved space, but mad as hell that you messed with their performance. You should run an index rebuild afterwards to fix this problem, but unless your SQL Server edition is Enterprise, an index rebuild has to be done with the index offline. So in order to regain the performance you lost, you will have a period of badly performing queries if they are attempting to use the indexes that are currently being rebuilt.
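To see how badly the shrink fragmented things, and then put the worst indexes back together, something along these lines works (the table and index names are placeholders, and you'd drop `ONLINE = ON` on anything below Enterprise edition):

```sql
-- Cheap fragmentation check after the shrink: 'LIMITED' mode only scans
-- the upper index levels, so it is fast even on big tables.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- common rebuild threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- ONLINE = ON requires Enterprise edition; on Standard the index is
-- offline for the duration of the rebuild.
ALTER INDEX IX_BigTable_Key ON dbo.BigTable
    REBUILD WITH (ONLINE = ON);
```

One caveat: a rebuild needs room to build a new copy of the index, so it can grow the file right back and hand part of your reclaimed space over again. That's one more entry on the "is it worth it" ledger.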
So you really have to ask yourself whether it is worth it. Are you doing it just to save a little space (even two TB today isn't all that expensive)? Then let your manager know what the cost will be in the form of your time and downtime. You might find that it is better to keep things as they are. Also, keep in mind that if your data will **ever** grow back that big, leaving the file alone means you avoid the overhead of growing and formatting new data extents later.
Personally, I think this might be an opportunity for the third-party tool vendors to come along and create a better utility to accomplish this, since Microsoft probably never will.