Why SHRINKFILE is a very bad thing, and what to do about it.

  • Just want to add a comment on the sizing for Log files. For an 8000MB initial allocation (or growth increment) of the log file, you are going to get 16 VLFs of 500MB each for every allocation (initial or growth), and this is going to cause uneven performance of T-Log backups. For a better discussion of VLFs and log files, I'm going to point you to Kimberly Tripp's blog; she does an awesome job of explaining Log Files, and I recommend you read the whole series: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx.
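
    If you want to see the VLF layout for yourself, DBCC LOGINFO (undocumented, but widely used) returns one row per VLF in the current database - a quick sketch, with a placeholder database name:

    USE MyDatabase;
    DBCC LOGINFO;    -- row count = number of VLFs; FileSize shows each VLF's size in bytes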

    As a side note, the article was written to focus on Data Files, not Log files (though I really didn't explain that in the article - my mistake on that one). Log files are a very different animal from Data files in how they are initialized, accessed, and written to. The upshot is that Log Files can be shrunk without the performance penalty incurred by shrinking Data files. Of course, shrinking files should only be done when absolutely necessary, and you should be aware of the consequences.

  • Simon Facer (12/16/2011)


    Just want to add a comment on the sizing for Log files. For an 8000MB initial allocation (or growth increment) of the log file, you are going to get 16 VLFs of 500MB each for every allocation (initial or growth), and this is going to cause uneven performance of T-Log backups. For a better discussion of VLFs and log files, I'm going to point you to Kimberly Tripp's blog; she does an awesome job of explaining Log Files, and I recommend you read the whole series: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx.

    According to the link you provided, she also recommended exactly the same size: 8,000 MB. That is ideal for transaction logs on large databases. The performance is fine at that size; I would not describe it as uneven. Many different SQL Server sources have confirmed the 8,000 MB size for log files.

    Quoted from your link:

    To have a more ideally sized VLF, consider creating the transaction log in 8GB chunks (8GB, then extend it to 16GB, then extend it to 24GB and so forth) so that the number (and size) of your VLFs is more reasonable (in this case 512MB).

    I also agree that the article was well written in regards to data files (MDFs). But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons. When I first saw the title of the article in the newsletter today, I was all ready to jump into the debate and take the opposite side. Then I read all of the comments and noticed that most of what I would have written was already covered last year when this came out.

  • JamesMorrison (12/16/2011)


    But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons.

    It's not perfectly appropriate. The log file shouldn't be shrunk unless it definitely, absolutely does not need to be the size it is. Far too many people shrink for no good reason and do so on a regular basis and, especially with inappropriate autogrow settings, that is harmful.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • James,

    the quote from Kimberley's blog

    In the end, the best amount is to allocate with a value of 8000MB instead of 8GB.

    was actually related to the bug in SQL Server for Log file growth increments of 4GB (i.e. exactly 4096MB), not a specific recommendation for Log File sizing. As with many settings in SQL Server, the only answer for Log File sizing is 'it depends'; you need to do due diligence in file sizing. If you have a 100MB database, an 8000MB Log file is inappropriate; likewise, if you have a 100GB database that has update activity in the 10MB range per day with hourly T-Log backups, an 8000MB log file could be argued as inappropriate.

    Set the Log file size based on projected update activity, or pick a percentage of the database size if you don't have the stats on update activity. In my last job, we used to set the log to 25% of the data size for the databases where we didn't have the stats.
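
    For what it's worth, if you do decide a bigger log is justified, grow it in 8000MB steps so the VLF count stays reasonable - a sketch with placeholder names:

    ALTER DATABASE MyDatabase MODIFY FILE ( NAME = MyDatabase_log, SIZE = 8000MB );
    -- later, if more space is justified, extend in further 8000MB steps:
    ALTER DATABASE MyDatabase MODIFY FILE ( NAME = MyDatabase_log, SIZE = 16000MB );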

  • GilaMonster (12/16/2011)


    JamesMorrison (12/16/2011)


    But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons.

    It's not perfectly appropriate. The log file shouldn't be shrunk unless it definitely, absolutely does not need to be the size it is. Far too many people shrink for no good reason and do so on a regular basis and, especially with inappropriate autogrow settings, that is harmful.

    Christ, don't you read what other people write before you mouth off?

    You seem more interested in just arguing instead of sharing info. It seems to be a prevailing trend in your responses on every topic.

    Just accept that you are wrong; I don't think it is worth the effort to cite examples for why, because you are not open to listening on the topic.

  • Simon Facer (12/16/2011)


    As with many settings in SQL Server, the only answer for Log File sizing is 'it depends'; you need to do due diligence in file sizing. If you have a 100MB database, an 8000MB Log file is inappropriate; likewise, if you have a 100GB database that has update activity in the 10MB range per day with hourly T-Log backups, an 8000MB log file could be argued as inappropriate.

    That is all very correct. It depends. And in these types of discussions I am sure we can all find the example to cite in opposition to any advice that anyone else could provide.

    My comments on the recommended 8,000 MB size were aimed more at the larger databases in the 100+ GB range, where performance issues start happening that require these micro-level adjustments. In smaller databases, decisions on the correct size of the log file in relation to the DB are far less critical. The hardware these days just makes those issues so minor as to be a waste of time to discuss. On a 1,000 MB database I don't waste any time thinking about those issues. I give it a 1,000 MB log file and forget about it. 1 GB of Netapp SAN space costs something like $12. We buy SAN space 5 terabytes at a time. It is barely worth debating performance on log files for databases that small. It is a rounding error. 😛

  • JamesMorrison (12/16/2011)


    GilaMonster (12/16/2011)


    JamesMorrison (12/16/2011)


    But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons.

    It's not perfectly appropriate. The log file shouldn't be shrunk unless it definitely, absolutely does not need to be the size it is. Far too many people shrink for no good reason and do so on a regular basis and, especially with inappropriate autogrow settings, that is harmful.

    Christ, don't you read what other people write before you mouth off?

    I did read what you said.

    Yes, shrinking a log is far less harmful than shrinking data files. Yes, it is safe to do in Simple recovery (and in fact in all recovery models), in that it won't cause any problems and won't directly harm performance. Yes, if you have appropriate autogrow settings (which you have), it's even safer to do, because then any further autogrows won't cause log fragmentation.
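
    For example (placeholder names), shrink the log and then make sure the autogrow is a sensible fixed size rather than a percentage:

    USE MyDatabase;
    DBCC SHRINKFILE (MyDatabase_log, 1024);    -- shrink the log file to 1024 MB
    ALTER DATABASE MyDatabase
        MODIFY FILE ( NAME = MyDatabase_log, FILEGROWTH = 512MB );    -- fixed growth, not a percentage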

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • As others have said, this is a really good discussion. I'm glad this article got reposted. A couple of months ago we inherited a couple of 2005 servers, one of which was really tight on space. The storage arrays for this reporting server have maxed-out drive bays, so 1.5 TB is the limit.

    The previous DBA put together what I think is a pretty ingenious script that shrinks a given DB down to 10% free in 1 GB increments.

    RAISERROR(N'STARTING: USE [dbInetLog]; DBCC SHRINKFILE(dbInetLog_Secondary, 68362); ', 1, 1) WITH nowait;

    USE [dbInetLog]; DBCC SHRINKFILE('dbInetLog_Secondary', 68362);

    GO

    RAISERROR(N'STARTING: USE [dbInetLog]; DBCC SHRINKFILE(dbInetLog_Secondary, 67362); ', 1, 1) WITH nowait;

    USE [dbInetLog]; DBCC SHRINKFILE('dbInetLog_Secondary', 67362);

    GO

    RAISERROR(N'STARTING: USE [dbInetLog]; DBCC SHRINKFILE(dbInetLog_Secondary, 66362); ', 1, 1) WITH nowait;

    USE [dbInetLog]; DBCC SHRINKFILE('dbInetLog_Secondary', 66362);

    GO

    etc... down to 10% free

    I'm sure he did it this way so it would yield every so often in case user reports were being run against it.
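
    The same idea could be driven by a loop instead of generated statements - a sketch against the same file, with the starting and target sizes made up for illustration:

    USE [dbInetLog];

    DECLARE @size int, @target int;
    SET @size = 68362;      -- current file size in MB (illustrative)
    SET @target = 58362;    -- stop at this size (illustrative)

    WHILE @size > @target
    BEGIN
        SET @size = @size - 1000;                   -- step down roughly 1 GB at a time
        IF @size < @target SET @size = @target;
        RAISERROR(N'Shrinking dbInetLog_Secondary to %d MB', 1, 1, @size) WITH NOWAIT;
        DBCC SHRINKFILE (dbInetLog_Secondary, @size);
    END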

    Would you say this is better or worse than just taking it down to the target free space all at once?

    And I'm not thrilled to be doing this at all after reading all this about fragmentation :ermm: I had no choice but to do this each morning, because another day's worth of data into this DW meant an autogrowth here or there, and we were back to square one with no free space.

    I finally did get some space freed up, so hopefully I don't have to keep doing this.

    Thanks,

    Ken

  • The article contradicts the title.

    SHRINKFILE is not "a very bad thing" and the article confirms that. It is no different than any other tool in the arsenal of a DBA or developer. Use with caution.

    If the title was meant to compel one to read the article, it worked. I did, for that very reason, because I use SHRINKFILE extensively without any of the problems that the author mentions (not suggesting they don't exist, however). I was curious to see if I had missed something that, over the course of thousands of SHRINKFILE applications, just hasn't reared its ugly head.

    I use SHRINKFILE with 1 MB as the desired size, which means that SQL Server will remove all unused space. I understand that on the operational side a penalty is due when the server has to increase the size of the file again, so caution is a good playmate here.

    Where SHRINKFILE becomes unusable, purely for practical reasons, is with very large files. In my experience, the alternative is a script that renames the table and the objects related to it, such as indexes and constraints, SELECTs the entire table INTO a new one with the original name, recreates the attributes that SELECT INTO loses (creating the clustered index first, then the other indexes), and drops the original (or keeps it until the operation is proven successful), and I'm done. On a 100 GB file I abandoned a straight SHRINKFILE operation after 36 hours, while I could successfully do the entire operation with the above approach in about 45 minutes.
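
    In outline, with made-up names (SELECT...INTO does not carry over indexes, constraints, or defaults, so those are recreated afterwards):

    EXEC sp_rename 'dbo.BigTable', 'BigTable_old';                     -- move the original aside

    SELECT * INTO dbo.BigTable FROM dbo.BigTable_old;                  -- compact copy under the original name

    CREATE CLUSTERED INDEX CIX_BigTable ON dbo.BigTable (KeyCol);      -- clustered index first
    CREATE NONCLUSTERED INDEX IX_BigTable_Other ON dbo.BigTable (OtherCol);

    -- keep dbo.BigTable_old until the result is verified, then:
    -- DROP TABLE dbo.BigTable_old;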

    One could question the wisdom of creating a new filegroup, as the author suggested, because databases with many filegroups must conform to a specific naming convention to allow automated scripts to deal with the large workload (sliding windows, backups, and so on). Creating a new filegroup would require a new, out-of-band naming convention and break the other processes that depend on the convention.

    Just my thoughts.

  • I agree fully. This article should be called STEPS TO TAKE TO PROPERLY USE SHRINKFILE. I have a database that we pre-size to a large value, in case large amounts of data come in, to avoid auto-growth. Once the database has been used for its "event", we make a copy of the database and save it for viewing as an archive. No longer an active database, it has no use for the extra space in its data filegroup files or in the trans log, as few transactions act on this database.

    So I clone the database, then switch the cloned copy to the simple recovery model, then shrink all files to the smallest they can go (DBCC SHRINKFILE(file_name, 1)). Then I set autogrowth for all files to a reasonably low level (ALTER DATABASE db_name MODIFY FILE ( NAME = N'CacheData_1', FILEGROWTH = 25600KB )). Then I rebuild all indexes, which defragments everything the shrinkfile command fragmented, and update stats at 100%, so the indexes are at prime readiness; the whole sequence is sketched below. The database I start with is often 30 GB, and will shrink to 5 GB, so the disk savings (and therefore performance on the clone database) are significant. If you know what you are doing, and all the effects of SHRINKFILE, then the command is a useful and appropriate tool in the toolkit of a SQL Server DBA.
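
    In script form, it looks roughly like this (database and table names are placeholders; CacheData_1 as above):

    ALTER DATABASE ArchiveCopy SET RECOVERY SIMPLE;

    USE ArchiveCopy;
    DBCC SHRINKFILE (CacheData_1, 1);                                -- repeat for each file

    ALTER DATABASE ArchiveCopy
        MODIFY FILE ( NAME = N'CacheData_1', FILEGROWTH = 25600KB );

    ALTER INDEX ALL ON dbo.SomeTable REBUILD;                        -- repeat per table
    UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;                   -- stats at 100%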

    Brian

  • You must mean GilaMonster.

  • Ken -

    I would say that either method of shrinking the file (in 1GB increments or all in one go) will have just as negative an impact on data fragmentation; but, as you said, your script does cause the Shrink process to yield and allow other processes to run, so it is arguably the lesser of the two evils. It seems like you are in a tough situation with your space. Of course I don't know your situation in detail, but a couple of Data Warehouse items spring to mind: (1) can you purge older data? (2) can you use compression to help free space? Compression is much better in 2008 than 2005, but it could still be an option for you; see KT's blog: http://www.sqlskills.com/BLOGS/KIMBERLY/post/SQL-Server-2005-and-2008-Compression.aspx, an older post, but still relevant. A Bing search on 'data compression sql server 2005' will give you other resources.
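
    On 2008, for example, you can estimate the saving and then apply page compression (a sketch - the table name is made up, and this exact syntax is 2008+ only; 2005 offers just vardecimal storage):

    EXEC sp_estimate_data_compression_savings 'dbo', 'FactSales', NULL, NULL, 'PAGE';
    ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);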

    Nico,

    Yes, the title was intended to be a teaser to read the article, and I'm glad you read it because of the title. But the title is correct: Shrinkfile IS a very bad thing to do to a database IF you don't understand the consequences and how to mitigate the performance impact. I would be very surprised if you haven't incurred performance impacts from shrinking your database files. Renaming your tables and recreating the objects seems like a lot of work to me, but if it works for you, then it's a viable solution. The FileGroup option was just that - an option; if it doesn't fit with your environment for whatever reason, then it's not a workable solution.

    Brian,

    You do seem to have a reasonable process worked out for your situation. And, you are right, ShrinkFile is a valid tool in the DBA toolbox, if you understand and handle the consequences, just don't abuse it or it will bite you.

    Here we go again in 2012 - double-cooked pork. Paul said it would cause fragmentation a long time ago. I said thou should defrag a long time ago.

    The title should be "DBCC Shrinkfile Best Practice".

    1. index defrag, file defrag

    2. size it correctly (to the best you can)

    3. partition and archive

    ....

    DBCC Shrinkfile is a double-edged sword. It exists for a reason, just like your congress exists or political parties exist.

    Jason
    http://dbace.us
    😛

  • Hi All,

    I've got a question regarding shrink file and index rebuild/reorganize. We have inherited a HUGE database with one file (ca. 1TB). The indexes are massively fragmented, and, as you can guess, after running a rebuild against this db we end up with a 1.5TB monster. Is there any way to reduce the size of this database? I was thinking about adding a new file, then running an index rebuild on the database that is already well defragmented (the 1.5TB one). How will the new file react - will it grow a lot? Could we shrink the old file without causing index fragmentation after such an action?

    Thanks for all the answers!! 🙂

  • Piotr, for future reference: you will get better, more focused responses if you start your own new thread.

    A shrink of a data file will always fragment that file, so running one after an index rebuild is self-defeating. The fact that the file is now larger after the rebuild has no effect on performance; if you have the space for that file, leave it as it is.

    Another rebuild won't make the file grow again, as it now has the space within the file to accommodate the sorting. This is another reason to leave the file as it is; if you shrink it, it will just have to grow again when you rebuild.
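
    If you want to see how much headroom the file actually has, something like this (run inside the database in question) shows used vs. free space per file:

    SELECT name,
           size / 128 AS size_mb,
           size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
    FROM sys.database_files;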

    ---------------------------------------------------------------------
