Shrink never ends

  • Hello, sorry for my English.

    I am aware of the problems and of why we should never perform a shrink; however, in this case it is necessary.

    I have a database where the MDF file is about 1.6 TB, but the database only uses 800 GB, and I need to reduce it.

    I tried to shrink it by only 10 GB, and after 8 hours I had to cancel the process.

    Does anyone have any idea how to solve this?

    SQL Server 2016 Standard, SP2-CU17.

  • It's going to take the amount of time it takes. Rewriting almost a terabyte of data comes down to having enough disk space and disk speed. You're not going to be able to make it go faster.

    However, what you could do is take it in smaller pieces. Instead of one giant shrink, do three or four smaller shrinks. Overall, you'll spend more time. However, each one will complete.
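    The "smaller pieces" approach above can be sketched as a loop. This is a hypothetical sketch, not a tested script: the file_id of 1, the 850,000 MB target, and the 50,000 MB step size are all assumptions you would adjust for your own database.

    ```sql
    -- Hypothetical sketch: shrink in ~50 GB steps instead of one giant shrink.
    -- file_id 1 and the 850 GB target are assumptions; adjust for your db.
    DECLARE @TargetMB  int = 850000,  -- final target size for the data file
            @StepMB    int = 50000,   -- shrink this much per pass
            @CurrentMB int;

    SELECT @CurrentMB = size / 128    -- size is in 8 KB pages; /128 gives MB
    FROM sys.database_files
    WHERE file_id = 1;

    WHILE @CurrentMB > @TargetMB
    BEGIN
        SET @CurrentMB = CASE WHEN @CurrentMB - @StepMB < @TargetMB
                              THEN @TargetMB
                              ELSE @CurrentMB - @StepMB END;
        DBCC SHRINKFILE(1, @CurrentMB);  -- each pass is small enough to finish
    END;
    ```

    The advantage is that each pass can be cancelled or scheduled into a maintenance window without losing all the progress of a single monster shrink.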

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • It partially depends on WHY you have so much free space in your database.

    If a lot of old data has been deleted and, like most people, you end up doing just REORGANIZE on your indexes (especially true on larger indexes, because people make the mistake of following "Best Practices" that actually aren't and were never meant to be), then the logical beginning of your database might be nearly empty while the end, the part that has to shrink, is chock-a-block full, and it WILL take forever to move all that. At the very least, your MDF file could look like Swiss cheese.

    My recommendation is heterodox compared to most other folks' recommendations, but it has worked very well for me.

    That recommendation is to first REBUILD ALL of your indexes, starting from the smallest and going to the largest.  You might even be able to get away with a seriously good chunk of shrinkage with a "Truncate Only" shrink and, if you "get lucky", the MDF file may shrink itself a bit because of end-of-file page de-allocations.  I've seen that happen just a couple of times and cannot explain it.
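    The rebuild-smallest-first approach above can be sketched roughly like this. This is a hypothetical sketch only: it lists the indexes in size order so you can generate the REBUILDs yourself, and the file_id of 1 in the final shrink is an assumption.

    ```sql
    -- Hypothetical sketch: list indexes smallest-first so you can REBUILD
    -- them in that order, then reclaim free space at the end of the file.
    SELECT OBJECT_NAME(ps.object_id)      AS table_name,
           i.name                          AS index_name,
           ps.used_page_count / 128        AS size_mb   -- 8 KB pages to MB
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id  = ps.index_id
    WHERE i.index_id > 0                   -- skip heaps
    ORDER BY ps.used_page_count;           -- smallest first

    -- For each index, smallest to largest:
    -- ALTER INDEX [index_name] ON [table_name] REBUILD;

    -- Then release only the empty space at the end of the file,
    -- without moving any pages (this is the "Truncate Only" shrink):
    DBCC SHRINKFILE(1, TRUNCATEONLY);
    ```

    The point of TRUNCATEONLY is that it never moves pages, so it is fast; it can only give back space that is already free past the last allocated page, which is why rebuilding first matters.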

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Yep, there's not much you can do to speed it up.  However, you can make sure to shrink only the specific file(s) you need to shrink, not the entire db.  In fact, NEVER shrink a db, only individual files, for example:

    DBCC SHRINKFILE(1, 1048576) /*shrink primary data file to 1TB; leave ~200GB for future growth and to avoid trying to shrink too much*/

    I've found that often it can take just as long to shrink by 10GB as by 500GB.  I can't explain why, but I have seen it several times.  So, I suggest do the full shrink and just wait for it to finish.
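    To decide which file(s) actually need shrinking, you can first check the free space in each file. A minimal sketch, using only standard catalog views:

    ```sql
    -- Hypothetical sketch: show size, used, and free space per file
    -- so you shrink only the file(s) that need it.
    SELECT file_id,
           name,
           size / 128                                        AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128             AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128    AS free_mb
    FROM sys.database_files;   -- sizes are in 8 KB pages; /128 gives MB
    ```

    The file_id column from this query is the first argument to DBCC SHRINKFILE, as in the example above.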

     

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • As Scott mentioned, only shrink files, one at a time. This often seems to work more quickly for me as well, when a database shrink seems to take forever.

  • ScottPletcher wrote:

    Yep, there's not much you can do to speed it up.  However, you can make sure to shrink only the specific file(s) you need to shrink, not the entire db.  In fact, NEVER shrink a db, only individual files, for example:

    DBCC SHRINKFILE(1, 1048576) /*shrink primary data file to 1TB; leave ~200GB for future growth and to avoid trying to shrink too much*/

    I've found that often it can take just as long to shrink by 10GB as by 500GB.  I can't explain why, but I have seen it several times.  So, I suggest do the full shrink and just wait for it to finish.

    True Dat!!!

    --Jeff Moden
