shrinkfile option not working

  • what exact command are you using for it?

  • Also what is the "log_reuse_wait_desc" from sys.databases for the database in question?
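
    For example, a quick check could look like this (substitute the actual database name):

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabaseName';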

  • Ant-Green wrote:

    Also what is the "log_reuse_wait_desc" from sys.databases for the database in question?

    I may be wrong but this would only prevent LOG file shrink - not data file (which I assumed is what the OP is trying to do)

  • Correct, original question is a bit open to interpretation as to what is going on.

    500 GB in tables but the DB is 2.6 TB - is that a 600 GB MDF and a 2 TB LDF? More information would help.

    datafile = 3 TB

    logfile = 200 MB

    But the size of all tables = 500 GB. What is taking 3 TB of space? I need to free up the storage to save $$$ (storage on Azure MI).

    DBCC SHRINKFILE WITH LOW PRIORITY just keeps thinking for 2-3 hours and does not do anything!

  • it can take days to shrink depending on how busy the db is.

    and I would expand that command - I prefer to be specific about which files get truncated and to what size.

    The following command should give you the name of the file to shrink - you may have more than one, and if so, do the shrink for each file that needs it.

    SELECT file_id,
           name,
           CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
           CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
           CAST(max_size AS bigint) * 8 / 1024. AS max_file_size_mb
    FROM sys.database_files
    WHERE type_desc IN (
        'ROWS'
        --,'LOG' -- excluding it as your size is rather small
    );

    The output of the above will give you a name and a space_used_mb.

    Use these in the following:

    -- shrink_to_value below should be at least space_used_mb * 1.2 (i.e. add 20% to the existing used space)

    DBCC SHRINKFILE ('enter_name_from_above', shrink_to_value) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);

    but it will take a long time to finish
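
    For example, if the query above returned space_used_mb = 512000 for the data file, a target of roughly 512000 * 1.2 = 614400 MB would satisfy that rule (the numbers here are only illustrative):

    DBCC SHRINKFILE ('enter_name_from_above', 614400) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);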

     

  • It may be quicker to create a new database with the correct structure and size, copy the data from the large DB into the correctly sized DB, and then rename the two databases. You would need a maintenance window to do a delta load and switch the names around, but when talking about large shrinks, a new DB and copy can often be a lot quicker to perform.
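
    If you go that route, the copy itself could be done table by table in keyed batches so the live DB isn't hammered. This is only a rough sketch - the database, table, and column names are made up for illustration, and it assumes an ever-increasing key such as Id:

    DECLARE @last_id bigint = 0,
            @rows    int    = 1;

    WHILE @rows > 0
    BEGIN
        -- copy the next batch of 50,000 rows above the last copied key
        INSERT INTO NewDb.dbo.BigTable1 (Id, Col1, Col2)
        SELECT TOP (50000) s.Id, s.Col1, s.Col2
        FROM OldDb.dbo.BigTable1 AS s
        WHERE s.Id > @last_id
        ORDER BY s.Id;

        SET @rows = @@ROWCOUNT;

        IF @rows > 0
            SELECT @last_id = MAX(Id) FROM NewDb.dbo.BigTable1;
    END;

    -- the final delta load and the rename of the two databases happen later, in the maintenance window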

  • Thank you! I think this is the only and best solution!

    Can it be done in batches, say every weekend for a month?

  • Creating a new DB won't work, as there are hundreds of tables and millions of rows, and it's a live DB being used.

    I would say it's a Microsoft defect, expecting us to run DBCC SHRINKFILE to release space to the OS, and on top of that it sits in a suspended state.

    And this is on Azure MI Business Critical, the most expensive and latest/greatest version of SQL Server!

  • You can try shrinking 100 or 200 GB at a time - not advisable, but it would work.

    The TRUNCATEONLY is here in case you get lucky and there is an empty block at the end of the file - if that happens it will "remove" it faster than the normal shrink. The shrinks following it may fail if the size specified is greater than the current size, but they will be faster in that case.

    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);

    DBCC SHRINKFILE ('enter_name_from_above', 2800000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);
    DBCC SHRINKFILE ('enter_name_from_above', 2600000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);
    DBCC SHRINKFILE ('enter_name_from_above', 2400000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);
    DBCC SHRINKFILE ('enter_name_from_above', 2200000) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
    DBCC SHRINKFILE ('enter_name_from_above', TRUNCATEONLY);


    and so on until size is acceptable.
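
    If you would rather not paste each step by hand, the same step-down idea can be scripted as a loop. This is only a sketch - the target and step sizes are assumptions you would adjust, and 'enter_name_from_above' is the data file name from the earlier query:

    DECLARE @file_name  sysname = N'enter_name_from_above',
            @target_mb  bigint  = 650000,   -- final target, roughly space_used_mb + 20% (assumption)
            @step_mb    bigint  = 200000,   -- shrink ~200 GB per pass
            @current_mb bigint,
            @prev_mb    bigint,
            @next_mb    bigint,
            @sql        nvarchar(max);

    SELECT @current_mb = CAST(size AS bigint) * 8 / 1024
    FROM sys.database_files
    WHERE name = @file_name;

    WHILE @current_mb > @target_mb
    BEGIN
        SET @prev_mb = @current_mb;
        SET @next_mb = CASE WHEN @current_mb - @step_mb > @target_mb
                            THEN @current_mb - @step_mb
                            ELSE @target_mb END;

        SET @sql = N'DBCC SHRINKFILE (' + QUOTENAME(@file_name) + N', ' + CAST(@next_mb AS nvarchar(20))
                 + N') WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF); '
                 + N'DBCC SHRINKFILE (' + QUOTENAME(@file_name) + N', TRUNCATEONLY);';
        EXEC sys.sp_executesql @sql;

        SELECT @current_mb = CAST(size AS bigint) * 8 / 1024
        FROM sys.database_files
        WHERE name = @file_name;

        -- if a pass made no progress (e.g. it was aborted at low priority), stop rather than loop forever
        IF @current_mb >= @prev_mb BREAK;
    END;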

     

  • Be aware that, depending on which MI SKU you have, shrinking the database may reduce the disk performance available to the instance. I suspect the database is the size that it is for a reason.

  • You can just shrink it a bit at a time. So instead of trying to shrink it from 2.6 TB to 500 GB in one go, just try making it 50 GB smaller than its current size. If that works in a reasonable amount of time, then try shrinking by a bit more.
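
    As a rough illustration, if the data file is currently about 3,000,000 MB allocated, knocking roughly 50 GB (51,200 MB) off it would look like this (the file name is the one from the earlier sys.database_files query):

    DBCC SHRINKFILE ('enter_name_from_above', 2948800);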

  • Just wanted to add that this isn't a "Microsoft defect", as it is the job of the DBA to make sure things are properly sized. I wouldn't blame Dell if the laptop I bought had a 1 TB SSD and I filled it up. That's my fault for not buying enough disk OR for downloading too much junk. If the DBA setting this up thought that 2.6 TB was how much disk you needed and you only needed 500 GB, that is a mistake of the DBA, not a mistake on Microsoft's part.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • It's managed SQL Server Business Critical, with the latest version, and MS-automated management of the log file.

    The issue here is that purge and archival of data creates lots of free space within the files, but it needs a manual cleanup by running DBCC SHRINKFILE, which is irksome!

    If they automated Always On etc. in Business Critical, this should be a piece of cake!

  • sqlguy80 wrote:

    It's managed SQL Server Business Critical, with the latest version, and MS-automated management of the log file.

    The issue here is that purge and archival of data creates lots of free space within the files, but it needs a manual cleanup by running DBCC SHRINKFILE, which is irksome!

    If they automated Always On etc. in Business Critical, this should be a piece of cake!

    The first thing I'd do, in that case, is drop the non-clustered indexes on your 3 or 4 largest tables, rebuild the clustered index on those to move them to a new filegroup/file (1 for each table for future planning on space maintenance), and then try a shrink with the truncate option.  Don't forget to rebuild your non-clustered indexes for each table on the same filegroup as the table they belong to.  An exception there would be any indexes that don't fragment over time.
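
    On a regular SQL Server instance, that rebuild-onto-a-new-filegroup step might look roughly like the sketch below - the filegroup, file, table, and index names are all made up for illustration, and whether this can even be done on Azure MI is the open question mentioned further down:

    ALTER DATABASE MyDb ADD FILEGROUP FG_BigTable1;
    ALTER DATABASE MyDb ADD FILE
        (NAME = N'BigTable1_Data', FILENAME = N'F:\Data\BigTable1_Data.ndf', SIZE = 100GB)
        TO FILEGROUP FG_BigTable1;

    -- rebuilding the clustered index onto the new filegroup moves the table's data there
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable1
        ON dbo.BigTable1 (Id)
        WITH (DROP_EXISTING = ON)
        ON FG_BigTable1;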

    A lot of this will go to hell in a handbasket if the table contains LOBs that went out of row or any row overflows.
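
    A quick way to check for that is the allocation unit catalog views - a sketch, with nothing database-specific assumed:

    SELECT OBJECT_NAME(p.object_id) AS table_name,
           au.type_desc,
           SUM(au.total_pages) * 8 / 1024. AS allocated_mb
    FROM sys.allocation_units AS au
    JOIN sys.partitions AS p
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)   -- in-row and row-overflow data
        OR (au.type = 2 AND au.container_id = p.partition_id)    -- LOB data
    WHERE au.type_desc IN (N'LOB_DATA', N'ROW_OVERFLOW_DATA')
    GROUP BY p.object_id, au.type_desc
    ORDER BY allocated_mb DESC;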

    I've also not done any work in Azure regardless of type so I don't know if you actually CAN even add file groups.

    The other thing is that none of the DMVs or other reporting sources are updated until the DBCC SHRINKFILE (or SHRINKDATABASE) actually completes.  If you interrupt it, that won't fix that problem.  That's why you need to shrink in smaller chunks... so that you can actually see some progress after each DBCC SHRINKxxxxx completes.

    And I never use the shrink-database option because it offers a whole lot less control over things and works by percentage rather than a specified size.

    As for having a bit of a fit over the "if they yada, yada, yada", it's a complete waste of time and not worth the rise in blood pressure or having a stroke over.  😀

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

