Files not shrinking with DBCC SHRINKFILE after DBCC INDEXDEFRAG frees pages

  • I ran DBCC INDEXDEFRAG on an index last night, and after 11.5 hours, it made it 23% of the way through the index and had freed up 4,863,604 pages (43% of the original allocation).

    According to my calculation, that's 4,863,604 pages × 8 KB ≈ 37.1 GB of disk space.

    Although I had to cancel the INDEXDEFRAG before it finished (it was causing too much slowdown for daytime response-time requirements), sysindexes still shows the updated index to be much smaller than before.

    I then ran DBCC SHRINKFILE(file_where_index_lives), and although it shrank a little, after two complete passes the file that started at 180 GB is only down 5 GB.

    DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
    ----  ------  -----------  -----------  ---------  --------------
    5     10      21901528     128000       21723008   21723008

    My understanding is that "EstimatedPages" is the smallest the file could possibly be, and according to this output it's already that small. But if I just freed up 4.8 million pages, I don't understand why the file can't shrink more.

    BOL says that if you don't specify a target size for DBCC SHRINKFILE, it will use a "default size." I'm not sure what default it's talking about -- the "MinimumSize"?
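
    For reference, one way to see allocated vs. used space for the file (a minimal sketch; the database name is a placeholder, and sysfiles reports sizes in 8 KB pages):

        USE MyDatabase  -- placeholder; run in the database that owns the file
        SELECT fileid,
               name,
               size / 128.0 AS size_mb,
               FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
               (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
        FROM sysfiles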

    Thanks,

    Mitch

    ETA: Server Info:

    Microsoft SQL Server 2000 - 8.00.2055 (Intel X86) Dec 16 2008 19:46:53 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)


    --Mitch

  • Try executing this command: DBCC SHRINKFILE(fileid, target_size)

    The target size (in MB) should be greater than the used size of the file.

    For example, if the used file size is 90,000 MB against a total file size of 150,000 MB, then to reclaim around 50,000 MB of free space, execute:

        USE your_database  -- generic placeholder; run in the database that owns the file
        DBCC SHRINKFILE(fileid, 100000)
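
    Alternatively, if you just want to release free space from the end of the file without SQL Server moving any pages, TRUNCATEONLY is an option (a sketch; fileid 10 is taken from the output you posted):

        DBCC SHRINKFILE(10, TRUNCATEONLY)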

  • Thanks. I'll give that a shot now.


    --Mitch

  • Shrinking data files fragments the objects inside the data file as SQL Server rearranges pages, which will effectively negate the index defrag you have just done unless you defrag again after the shrink.
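
    If you do shrink, a sketch of the follow-up (table, index, and database names are placeholders):

        -- Check logical fragmentation after the shrink (SQL 2000 syntax)
        DBCC SHOWCONTIG (MyTable, MyIndex) WITH FAST

        -- Defragment again if Logical Scan Fragmentation is high
        DBCC INDEXDEFRAG (MyDatabase, MyTable, MyIndex)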

  • Despite having shrunk the index from 11,335,830 pages to 387,849 pages (a reduction of 10,947,981 8 KB pages, about 83.5 GB), I've only been able to squeeze out a few GB of disk space (about 27 GB total, including the reduction in a separate filegroup from rebuilds of other indices).

    So, still doesn't quite make sense to me.

    I see now that some of these indexes should have had a fill factor set (this is a project from six years ago), and I may try to find some outage time where I could completely recreate them with an appropriate fill factor, roughly along the lines sketched below.
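
    A sketch of that rebuild (SQL 2000 syntax; the table name, index name, and fill factor are placeholders):

        -- DBCC DBREINDEX rebuilds the index and applies the new fill factor
        DBCC DBREINDEX ('MyTable', MyIndex, 80)

    Unlike INDEXDEFRAG, DBREINDEX is an offline rebuild, so it needs that outage window, but it recreates the index compactly in one pass.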

    Thanks for all the suggestions.



    --Mitch

  • Take note of what Edogg says. Shrinking a database file will disorganise the data within the file and cause NTFS fragmentation. Both of these will harm the performance of SQL Server.

    You should only shrink a database file if you do not expect it to grow again within 3 months. If it is likely to grow back to near its original size, then you need more disk space, not more time to run SHRINKFILE.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • The large tables in this project don't grow, so long as our purging jobs complete in a timely manner. We keep about 500,000 Windows computer inventories in this database. Old inventories are deleted as they age out of the project's retention specs. Thing is, that's about 20,000 inventories per day, or about 4%. Deleting them leaves huge holes and heavily fragmented indexes.

    Because pages don't always get entirely freed up by the deletion process (some inventories are kept permanently, and others longer than usual, but not permanently), the number of pages in a table tends to grow faster than the actual number of inventory records retained.

    It was our goal to compact these indexes to provide more free pages in the actual db file, and ideally, even reclaim some of this space back to the OS for other use.
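
    A sketch of how we might quantify those holes (the table name is a placeholder; the AveragePageDensity column shows how full each page really is):

        -- SQL 2000: report density and fragmentation for every index on the table
        DBCC SHOWCONTIG ('MyInventoryTable') WITH TABLERESULTS, ALL_INDEXES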


    P.S. "Disk is cheap" only works in a non-enterprise environment where you can actually go buy a TB for a hundred bucks. Unfortunately, in our environment, that same hundred dollar bill only gets you about 2Gb on the SAN. 🙁


    --Mitch

  • How often do you do the shrink? If you do it daily, then stop it running for a week and see what impact this has on your overall file sizes. If you do it weekly, then stop it running for a month.

    You should keep doing the purge and reindex on the normal schedule, just stop the shrink running.

    If at the end of this time there is next to no impact on file size and total free space, you have saved yourself a maintenance operation. All you then need to do is defragment the disks at the NTFS level to regain some of the performance lost to fragmented disk files.
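
    A minimal sketch of how you might track the trend while the shrink is suspended (the logging table name is a placeholder; sysfiles reports sizes in 8 KB pages):

        -- Log file sizes on a schedule so you can see whether the files regrow
        CREATE TABLE dbo.FileSizeHistory (
            log_date  DATETIME DEFAULT GETDATE(),
            file_name sysname,
            size_mb   DECIMAL(12, 2),
            used_mb   DECIMAL(12, 2)
        )

        INSERT INTO dbo.FileSizeHistory (file_name, size_mb, used_mb)
        SELECT name, size / 128.0, FILEPROPERTY(name, 'SpaceUsed') / 128.0
        FROM sysfiles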

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
