DBCC Shrinkfile on data file with Availability Group - Is it safe?

  • I need to shrink a data file on the primary replica of an availability group.  The limitations surrounding AGs and shrinking log files, as well as the workarounds, are well documented - hence my concern.  I'm not finding reports of any similar issues when it comes to shrinking data files, though.

    Is it safe to shrink the data file for an availability group without any additional steps, such as those that are required when shrinking log files in an AG?

    My current SQL script incrementally shrinks the data file until it reaches the target size, so I shouldn't need to worry about blowing up the disk, or at least I can monitor it closely and kill the script if need be.  I tried it on a small, idle test database in an AG and it worked fine.  I haven't tried it on my 2.5TB production database, however, or anything even close to that size.
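    Something along these lines (a minimal sketch - the database name, file name, and sizes are placeholders, and DBCC SHRINKFILE may release less than the requested step if pages can't be moved):

    USE MyDatabase;

    DECLARE @TargetSizeMB int = 2000 * 1024;  -- final target size for the file
    DECLARE @StepMB       int = 10 * 1024;    -- shrink this much per pass
    DECLARE @CurrentMB    int;

    -- size in sys.database_files is in 8KB pages, so /128 converts to MB
    SELECT @CurrentMB = size / 128
    FROM sys.database_files
    WHERE name = N'MyDatabase_Data';

    WHILE @CurrentMB > @TargetSizeMB
    BEGIN
        -- step down toward the target without overshooting it
        SET @CurrentMB = CASE WHEN @CurrentMB - @StepMB < @TargetSizeMB
                              THEN @TargetSizeMB
                              ELSE @CurrentMB - @StepMB
                         END;
        DBCC SHRINKFILE (N'MyDatabase_Data', @CurrentMB);
    END;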

    Thanks.

  • Nothing particular with Always On - it will generate a lot of IO, can cause blocking, and will result in fragmentation whether the database is in an availability group or not. The secondary could fall behind, and that's the only thing I can think of that would be specific to availability groups. If you have a script for shrinking data files, you may be shrinking too often. Maybe not - you could have business practices that lead to it - but it's not something you want to do regularly, if ever.
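    If you do run it, you can watch how far the secondaries fall behind while it runs with something like this on the primary (a sketch using the HADR DMVs; queue sizes are in KB):

    SELECT ar.replica_server_name,
           DB_NAME(drs.database_id)  AS database_name,
           drs.log_send_queue_size,  -- log not yet sent to the secondary (KB)
           drs.redo_queue_size       -- log not yet redone on the secondary (KB)
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar
         ON ar.replica_id = drs.replica_id
    WHERE drs.is_primary_replica = 0;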

    Sue

  • I appreciate your response, Sue.

    I am a little perplexed by "it's not something you want to do regularly, if ever".  Postgres uses VACUUM to reclaim the space held by dead tuples created from UPDATEs and DELETEs.  It improves performance and frees up space.

    This is the first time I will be shrinking the data file in SQL Server.  I am estimating it will free up ~500GB.  Seems kind of useful, and much needed in this particular case.

    Thanks.

  • The two database platforms work very differently. I've seen VACUUM compared to shrinkfile, to defragging indexes, and to ghost cleanup. All three are very different things, and I think they all come up in the comparison because there is no equivalent process in SQL Server. It's just not the same; how things work internally on the two platforms is very different. Neither one is better than the other, just different.

    When you shrink data files in SQL Server, it usually results in fragmentation. Clean up the fragmentation (by rebuilding indexes) and the file grows, so you have free space again. Clean up the free space and you have the fragmentation again... so you can get stuck in that cycle. One of the more important pieces is understanding why you end up with a lot of free space in a data file in the first place. It could be that a monthly archiving or purging process runs; in that case the file likely grows back during the month before the next cleanup. If that space isn't already available, the file has to physically grow again, which is a waste of resources if you get in the habit of shrinking and growing every month. In that scenario, it's generally better to just leave the file with the space it needs throughout the month.

    But you will want to check and address index fragmentation after doing the shrink in SQL Server.
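    Something like this will show what needs attention afterwards (a sketch - SAMPLED mode keeps the scan cheaper on a large database, and the thresholds are just common starting points):

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
         ON i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30  -- rebuild/reorg candidates
      AND ips.page_count > 1000                  -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;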

    Sue

  • Well, this certainly conflicts with my original plan of 1) rebuild fragmented indexes, 2) shrink the log file, 3) shrink the data file.

    Thanks for the detailed explanation.

  • You're welcome. FYI - this article is a good read on the subject and gets into that shrink, defrag, shrink cycle I mentioned:

    Shrinking databases and rebuilding indexes is a vicious cycle.

    Sue

  • I am wondering why you think freeing up 500GB of a 2.5TB database is actually useful.  That extra space is going to be utilized by SQL Server for various operations.

    For example - if you rebuild an index online, you need at least the size of that index available as free space for the rebuild to be performed.  If you rebuild the clustered index on a table that consumes 250GB of space but there is only 100GB of free space in the database, the data file has to grow to accommodate that operation.

    Future growth is also a concern - you want at least 6 months of growth headroom (depending on how long it takes to purchase more storage), counting both free drive space and free space within the files.  If your system is growing at 5GB per day, then 500GB isn't going to be enough space.  However, if it's only growing at 1GB per day, then 500GB covers more than a year of growth and is plenty of available space.
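    One rough way to measure that growth rate is the full backup history in msdb (a sketch - it assumes regular full backups, and the database name is a placeholder):

    SELECT database_name,
           backup_start_date,
           backup_size / 1024 / 1024 / 1024 AS backup_size_gb
    FROM msdb.dbo.backupset
    WHERE database_name = N'MyDatabase'
      AND type = 'D'  -- full backups only
    ORDER BY backup_start_date;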

    500GB is less than 20% of the total size of the database.  I usually start looking to extend my data files when free space gets down to 15%, so that isn't much more than my threshold and is a reasonable amount of available space.  It depends on the database and its usage, of course, but that's a generalized starting point for my systems.
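    A quick way to see where you stand per file (a sketch; run it in the database in question):

    SELECT name,
           size / 128 AS file_size_mb,  -- size is in 8KB pages
           size / 128 - FILEPROPERTY(name, 'SpaceUsed') / 128 AS free_space_mb,
           100.0 * (size - FILEPROPERTY(name, 'SpaceUsed')) / size AS free_pct
    FROM sys.database_files
    WHERE type_desc = 'ROWS';  -- data files only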

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs
