Unable to shrink an almost empty ndf file

  • Hello,

    I am running SQL Server 2008 R2 SP1, Cumulative Update 4.

    I copied a 300GB table out to a new filegroup (because it was too large to shrink after mass deletes).

    It ended up being only 75GB in the new filegroup location.

    I dropped the original table.

    The ndf file where the old 300GB table was will not shrink down below 52GB even though there is only 2GB of data left in the filegroup.

    I rebuilt every table in the filegroup.

    DBCC Shrinkfile (N'FileGroupName',1024) does not reclaim any space below 52GB

    The database will not shrink below the initial file size it was created with. If the database is created with 200 GB initially and later on you have 150 GB of free space, this space will not be released to the OS and cannot be shrunk.

    --- babu

  • Maybe I'll try a shot in the dark then and:

    Create a 2nd file (FileGroupFile2) in the same file group and then

    DBCC SHRINKFILE (N'FileGroupFile1' , EMPTYFILE).

    Then ALTER DATABASE MyDatabase REMOVE FILE FileGroupFile1.

    Then DBCC SHRINKFILE (N'FileGroupFile2',1024) --Should let me do this now!

    (Jumping through a bunch of hoops to manage space)

    :w00t:
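
    A minimal sketch of that sequence, assuming a hypothetical filegroup name (FileGroup1), file path, and initial size; adjust the names and FILENAME for your environment:

    --Add a second file to the same filegroup
    ALTER DATABASE MyDatabase
        ADD FILE ( NAME = FileGroupFile2,
                   FILENAME = 'D:\SQLData\FileGroupFile2.ndf',
                   SIZE = 4GB )
        TO FILEGROUP [FileGroup1];
    GO
    --Move all pages out of the original file into the new one
    DBCC SHRINKFILE (N'FileGroupFile1', EMPTYFILE);
    GO
    --Drop the now-empty original file
    ALTER DATABASE MyDatabase REMOVE FILE FileGroupFile1;
    GO
    --Shrink the new file to roughly 1GB (the target size is in MB)
    DBCC SHRINKFILE (N'FileGroupFile2', 1024);
    GO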

  • baabhu (3/16/2012)


    The database will not shrink below the initial file size it was created with. If the database is created with 200 GB initially and later on you have 150 GB of free space, this space will not be released to the OS and cannot be shrunk.

    --- babu

    This is not true. When you use DBCC SHRINKFILE you can shrink the file to a size below the initial file size. Check out this code, which shows it:

    --create the database with data file that is 1GB and log

    --file that is 300mb

    create database DemoDB

    ON

    ( NAME = DemoData,

    FILENAME = 'C:\DemoData.mdf',

    SIZE = 1GB)

    LOG ON

    ( NAME = Demolog,

    FILENAME = 'C:\DemoData.ldf',

    SIZE = 300MB) ;

    GO

    use DemoDb

    go

    --Check the DB size

    exec sp_spaceused

    go

    --Shrinking the data and the log file

    --and make them smaller then initial size

    dbcc shrinkfile(DemoData, 100)

    go

    dbcc shrinkfile (DemoLog, 50)

    go

    --check the size again

    exec sp_spaceused

    go

    --drop the database

    use master

    go

    drop database DemoDb

    Adi

    --------------------------------------------------------------
    To know how to ask questions and increase the chances of getting answers:
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

    According to:

    http://msdn.microsoft.com/en-us/library/ms189493.aspx

    Shrinking below initial size will only work if the database file is empty.

    My guess is that as soon as you add data (before shrink), you will be unable to shrink below initial size.

  • sbaker-757360 (3/16/2012)


    According to:

    http://msdn.microsoft.com/en-us/library/ms189493.aspx

    Shrinking below initial size will only work if the database file is empty.

    My guess is that as soon as you add data (before shrink), you will be unable to shrink below initial size.

    You misread that. If you do not specify a target size for SHRINKFILE, it will only shrink to the initial size. If you specify a target size smaller than the actual data, it will only shrink the file to the size of the data; otherwise it will shrink to the target size.

    Personally I prefer to use SHRINKDATABASE. Here is the code for it. Make sure to run this from the database you are trying to shrink.

    USE dbName

    GO

    DBCC SHRINKDATABASE (dbName,10) -- 10 is the percentage of free space to leave.


    For faster help in answering any problems Please read How to post data/code on a forum to get the best help - Jeff Moden for the best way to ask your question.

    For performance Issues see how we like them posted here: How to Post Performance Problems - Gail Shaw

    Need to Split some strings? Jeff Moden's DelimitedSplit8K
    Jeff Moden's Cross tab and Pivots Part 1
    Jeff Moden's Cross tab and Pivots Part 2

    I am specifying the size to shrink it to (1024)... and it does not shrink below 52GB.

    Rebuilds were done on all tables in the file group

    The total actual data size for the file group is about 2 GB

    No reason for the shrink not to work except for the initial size issue.

    I do not want to use shrink database because it would unnecessarily run on a very large 125GB table in a different filegroup.

    I am 50% into EMPTYFILE according to:

    select percent_complete, estimated_completion_time, cpu_time, total_elapsed_time, * from sys.dm_exec_requests

    Did you delete lots of table data in some cases rather than dropping tables?

    At any rate, just to be sure, I would run sys.dm_db_index_physical_stats() on the remaining tables to check for ghost records.

    If you do find ghost records, just do a full scan of the table; that will force the ghosts to be fully deleted.

    I'm assuming no replication on any of these tables, as that might also cause issues getting rows fully removed.
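
    A minimal sketch of that check, run from the database in question (DETAILED mode is required because LIMITED returns NULL for the ghost counts, and it can be slow on big tables):

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           ips.index_id,
           ips.ghost_record_count,
           ips.version_ghost_record_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
    WHERE ips.ghost_record_count > 0;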

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • I dropped a 300GB Table.

    All the other tables are very small in this filegroup, and I ran rebuilds on all the remaining tables in the entire database.

    EMPTYFILE is 77% complete. Luckily I have the disk space to do this, because it is doubling the space used in this filegroup from 52GB to 104GB.

    Just hope I can get the 104GB down to 4GB

  • sbaker-757360 (3/16/2012)


    The ndf file where the old 300GB table was will not shrink down below 52GB even though there is only 2GB of data left in the filegroup.

    Questions:

    How many files are in the filegroup with the problem file?

    How many objects still exist in the filegroup?

    sbaker-757360 (3/16/2012)


    DBCC Shrinkfile (N'FileGroupName',1024) does not reclaim any space below 52GB

    The command above takes a logical file name as its input, not a filegroup name!
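
    To see the logical file names and which filegroup each belongs to, a quick sketch (size is stored in 8KB pages, converted to MB here; the log file shows a NULL filegroup):

    SELECT fg.name AS filegroup_name,
           df.name AS logical_file_name,
           df.physical_name,
           df.size * 8 / 1024 AS size_mb
    FROM sys.database_files AS df
    LEFT JOIN sys.filegroups AS fg
        ON fg.data_space_id = df.data_space_id;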

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry,

    I was not clear in my original post. I tried to clear it up a bit in following posts.

    The 300GB Table that had mass deletes performed was copied to a different filegroup as a new 75GB table after I found it very difficult to rebuild and then shrink. (the rebuild was not that bad, but the shrink was painful on a test site, and would have taken days on a production system) A copy to a new file group with final sync was sooo much better. The original 300GB table was then dropped. I was then able to shrink the original ndf file down to 52GB using shrinkfile, which is where I am at now...stuck with only about 2GB of actual data in the original ndf file, but the ndf file is 52GB in size

  • sbaker-757360 (3/16/2012)


    Perry,

    I was not clear in my original post. I tried to clear it up a bit in following posts.

    The 300GB Table that had mass deletes performed was copied to a different filegroup as a new 75GB table after I found it very difficult to rebuild and then shrink. (the rebuild was not that bad, but the shrink was painful on a test site, and would have taken days on a production system) A copy to a new file group with final sync was sooo much better. The original 300GB table was then dropped. I was then able to shrink the original ndf file down to 52GB using shrinkfile, which is where I am at now...stuck with only about 2GB of actual data in the original ndf file, but the ndf file is 52GB in size

    Yes, I understand that; my last post was not clear, so I have modified it. Please re-read it and post the answers to my questions, and I'll see if I can help.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

    The problem filegroup (set as default), which I will call FileGroup1, only had one ndf file in it. So all 50 table objects in the database are in FileGroup1File1, minus the big table that I moved to a new filegroup. Today, after rebuilding all 49 tables and running DBCC SHRINKFILE (N'FileGroup1File1', 1024) a couple of times with no errors but no shrink beyond 52GB, I created a new file in FileGroup1 to perform an EMPTYFILE, which is now 92% complete.

  • If you can, for now I'd just move the remaining tables temporarily to another filegroup, completely emptying the original fg, then shrink it. Finally move the remaining tables back to their original fg.
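
    A sketch of one way to do that move, assuming hypothetical names (a table dbo.SmallTable that already has a clustered index named IX_SmallTable, and an existing target filegroup TempFG); rebuilding the clustered index onto the other filegroup moves the table's data with it:

    --Rebuild the existing clustered index on the target filegroup, which relocates the table's data
    CREATE CLUSTERED INDEX IX_SmallTable
        ON dbo.SmallTable (SmallTableID)
        WITH (DROP_EXISTING = ON)
        ON [TempFG];

    Nonclustered indexes stay put unless rebuilt the same way, and a heap has to be moved differently (for example by creating and then dropping a clustered index on the target filegroup), so it is a bit of legwork per table.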

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

    Ugh. I was hoping I would not have to do that (move all the tables to a new filegroup).

    The following finished:

    DBCC SHRINKFILE (N'FileGroup1File1' , EMPTYFILE)

    I then ran the following

    ALTER DATABASE MyDatabase REMOVE FILE FileGroup1File1;

    DBCC SHRINKFILE (N'FileGroup1File2',1024);

    DBCC SHRINKFILE (N'FileGroup1File2',1024);

    Unfortunately, I am still stuck with a 52GB ndf file.

    I checked for materialized views and did not find any...

    (My check consisted of looking for indexes on views.)
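
    For reference, a sketch of that kind of check (any view that has rows in sys.indexes is an indexed/materialized view):

    SELECT OBJECT_SCHEMA_NAME(v.object_id) AS schema_name,
           v.name AS view_name,
           i.name AS index_name
    FROM sys.views AS v
    JOIN sys.indexes AS i
        ON i.object_id = v.object_id
    WHERE i.index_id > 0;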

    Just wondering if there is anything else in the database that could be spread across many pages.

    Anyone know how to check for this?
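
    One way to check, a sketch that lists which objects are actually holding pages in each filegroup (LOB allocation units map to partitions through hobt_id rather than partition_id, which the join accounts for):

    SELECT fg.name AS filegroup_name,
           OBJECT_NAME(p.object_id) AS object_name,
           i.name AS index_name,
           au.type_desc,
           au.total_pages * 8 / 1024 AS total_mb
    FROM sys.allocation_units AS au
    JOIN sys.partitions AS p
        ON (au.type IN (1, 3) AND au.container_id = p.partition_id)   --IN_ROW_DATA, ROW_OVERFLOW_DATA
        OR (au.type = 2 AND au.container_id = p.hobt_id)              --LOB_DATA
    JOIN sys.indexes AS i
        ON i.object_id = p.object_id
       AND i.index_id = p.index_id
    JOIN sys.filegroups AS fg
        ON fg.data_space_id = au.data_space_id
    ORDER BY au.total_pages DESC;

    Whatever shows a large total_mb against the problem filegroup is what is keeping the pages allocated.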
