shrinkfile option not working

  • Total size of tables = 500 GB

    Storage for the DB shows 2.6 TB. Microsoft recommends running DBCC SHRINKFILE with low priority, but it is not doing anything. Any ideas?

    This is a prod DB, running 24x7, on Azure MI Business Critical.

  • What exact command are you using for it?

  • Also what is the "log_reuse_wait_desc" from sys.databases for the database in question?
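
    For reference, it can be read with something like this ('YourDb' is just a placeholder for the database name):

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDb';   -- placeholder: the database in question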

  • Ant-Green wrote:

    Also what is the "log_reuse_wait_desc" from sys.databases for the database in question?

    I may be wrong, but this would only prevent a LOG file shrink, not a data file shrink (which I assume is what the OP is trying to do).

  • Correct, the original question is a bit open to interpretation as to what is going on.

    500 GB in tables but the DB is 2.6 TB; is that a 600 GB MDF and a 2 TB LDF? More information is needed, ideally.

  • Data file = 3 TB

    Log file = 200 MB

    But the size of all tables = 500 GB. What is taking 3 TB of space? I need to free up the storage to save $$$ (storage on Azure MI).

    DBCC SHRINKFILE WITH LOW PRIORITY just keeps thinking for 2-3 hours and does not do anything!

  • It can take days to shrink depending on how busy the DB is.

    And I would expand that command - I prefer to be specific about which files get shrunk and to what size.

    The following command should give you the name of the file to shrink - you may have more than one, and if so, run the shrink for each file that needs it.

    SELECT file_id,
           name,
           CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
           CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
           CAST(max_size AS bigint) * 8 / 1024. AS max_file_size_mb
    FROM sys.database_files
    WHERE type_desc IN (
          'ROWS'
          --,'LOG' -- excluding it as your size is rather small
          );

    The output of the above will give you a name and a space_used_mb.

    Use these in the following:

    -- shrink_to_value below should be at least space_used_mb * 1.2 (i.e. add 20% to the existing used space)

    DBCC SHRINKFILE ('enter_name_from_above', shrink_to_value) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);

    But it will take a long time to finish.
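
    If it looks like it is doing nothing, you can check from another session whether the shrink is actually progressing - percent_complete in sys.dm_exec_requests gets populated for DBCC SHRINKFILE. A quick sketch:

    -- run from a separate session while the shrink is executing
    SELECT session_id, command, status, wait_type, percent_complete, estimated_completion_time
    FROM sys.dm_exec_requests
    WHERE command LIKE 'Dbcc%';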

     

  • It may be quicker to create a new database with the correct structure and size, copy the data from the large DB to the correctly sized DB, and then rename the two databases. You would need a maintenance window to do a delta load and switch the names around, but generally, when talking about large shrinks, a new DB and copy can actually be a lot quicker to perform.
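
    A rough sketch of the name swap at cutover, assuming the old database is called BigDb and the right-sized copy is BigDb_new (both names are placeholders):

    USE master;
    -- block new activity on the old database during the cutover
    ALTER DATABASE BigDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    -- swap the names
    ALTER DATABASE BigDb MODIFY NAME = BigDb_old;
    ALTER DATABASE BigDb_new MODIFY NAME = BigDb;
    -- leave the old copy accessible in case you need to fall back
    ALTER DATABASE BigDb_old SET MULTI_USER;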

  • Thank you! This is the only solution, and the best one, I think!

    Can it be done in batches, say every weekend for a month?

  • Creating a new DB won't work, as there are hundreds of tables and millions of rows, and it's a live DB being used.

    I would say it's a Microsoft defect to expect us to run DBCC SHRINKFILE to release space to the OS, and then have it sit in a suspended state.

    And this is on Azure MI Business Critical, the most expensive and latest/greatest version of SQL Server!

  • You can try shrinking 100 or 200 GB at a time - not advisable, but it would work.

    The TRUNCATEONLY is there in case you get lucky and there is an empty block at the end of the file - if so, it will "remove" that space faster than a normal shrink. The sized shrinks that follow it may fail if the size specified is greater than the current file size, but in that case they finish faster too.

    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);

    DBCC SHRINKFILE ('enter_name_from_above', 2800000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);
    DBCC SHRINKFILE ('enter_name_from_above', 2600000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);
    DBCC SHRINKFILE ('enter_name_from_above', 2400000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);
    DBCC SHRINKFILE ('enter_name_from_above', 2200000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);


    And so on, until the size is acceptable.
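
    If you would rather not run those one by one, a rough sketch of the same idea in a loop (the file name, step size, and floor are placeholders - test it before using it on prod):

    DECLARE @file  sysname = N'enter_name_from_above';   -- placeholder: data file name
    DECLARE @step  int = 200000;                          -- shrink 200 GB (in MB) per pass
    DECLARE @floor int = 600000;                          -- stop around 600 GB (in MB)
    DECLARE @target int = (SELECT CAST(CAST(size AS bigint) * 8 / 1024 AS int)
                           FROM sys.database_files
                           WHERE name = @file) - @step;

    WHILE @target >= @floor
    BEGIN
        DBCC SHRINKFILE (@file, @target) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
        DBCC SHRINKFILE (@file, TRUNCATEONLY);
        SET @target = @target - @step;
    END;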

     

  • Be aware that, depending on which MI SKU you have, shrinking the database may reduce the disk performance available to the instance. I suspect the database is the size it is for a reason.

  • You can just shrink it a bit at a time. So instead of trying to shrink it from 2.6 TB to 500 GB in one go, just try making it 50 GB smaller than its current size. If that works in a reasonable amount of time, then try shrinking by a bit more.
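
    Something like this gives you the "current size minus 50 GB" target for the data file in MB (adjust if you have more than one ROWS file):

    SELECT name,
           CAST(size AS bigint) * 8 / 1024 AS current_size_mb,
           CAST(size AS bigint) * 8 / 1024 - 51200 AS shrink_target_mb   -- 50 GB = 51,200 MB
    FROM sys.database_files
    WHERE type_desc = 'ROWS';

    -- then shrink to that target, e.g.
    -- DBCC SHRINKFILE ('enter_name_from_above', <shrink_target_mb from above>) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);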

  • Just wanted to add that this isn't a "Microsoft defect", as it is the job of the DBA to make sure things are properly sized. I wouldn't blame Dell if the laptop I bought had a 1 TB SSD and I filled it up. That's my fault for not buying enough disk OR for downloading too much junk. If the DBA setting this up thought 2.6 TB was how much disk you needed and you only needed 500 GB, that is a mistake by the DBA, not a mistake on Microsoft's part.

    The above is all just my opinion on what you should do.
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • It's managed SQL Server, Business Critical, on the latest version, with MS-automated management of the log file.

    The issue here is that purging and archiving of data creates lots of free space within the files, but it needs a manual cleanup by running DBCC SHRINKFILE, which is irksome!

    If they automated Always On etc. in Business Critical, this should be a piece of cake!
