
Why SHRINKFILE is a very bad thing, and what to do about it.
Posted Friday, December 16, 2011 10:49 AM


SSCommitted

Just want to add a comment on the sizing for Log files. For an 8000MB initial allocation (or growth increment) of the log file, you are going to have 16 * 500MB VLFs for each allocation (initial or growth), and this is going to cause uneven performance of T-Log backups. For a better discussion of VLFs and log files, I'll point you to Kimberly Tripp's blog; she does an awesome job of explaining Log Files, and I recommend you read the whole series: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx.
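For reference, an easy way to see how an allocation was carved into VLFs is DBCC LOGINFO, which returns one row per VLF. A minimal sketch, with YourDatabase as a placeholder name (the column layout varies by version):

USE [YourDatabase];  -- placeholder database name
GO
-- One row per VLF; the row count is the number of VLFs in the current log file.
DBCC LOGINFO;
GO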

As a side note, the article was written to focus on Data Files, not Log files (though I really didn't explain that in the article - my mistake on that one). Log files are a very different animal than Data files, in how they are initialized, accessed, written to, etc. The upshot is that Log Files can be shrunk without the performance penalty incurred by shrinking Data files. Of course, shrinking files should only be done when absolutely necessary, and you should be aware of the consequences.



Post #1223236
Posted Friday, December 16, 2011 11:07 AM
SSC-Enthusiastic

Simon Facer (12/16/2011)
Just want to add a comment on the sizing for Log files. For an 8000MB initial allocation (or growth increment) of the log file, you are going to have 16 * 500MB VLFs for each allocation (initial or growth), and this is going to cause uneven performance of T-Log backups. For a better discussion of VLFs and log files, I'll point you to Kimberly Tripp's blog; she does an awesome job of explaining Log Files, and I recommend you read the whole series: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx.



According to the link you provided, she also recommended exactly the same size: 8,000 MB. That is ideal for transaction logs on large databases. The performance is fine at that size. I would not describe it as uneven. Many different SQL Server sources have confirmed the 8,000 MB size for log files.

Quoted from your link:
To have a more ideally sized VLF, consider creating the transaction log in 8GB chunks (8GB, then extend it to 16GB, then extend it to 24GB and so forth) so that the number (and size) of your VLFs is more reasonable (in this case 512MB).
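As a rough illustration of that quote (the database and logical log file names below are placeholders), growing a log toward 24 GB in 8,000 MB steps would look something like this:

-- Grow the log in ~8000MB chunks; each chunk is split into 16 VLFs of roughly 500MB.
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8000MB);
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 16000MB);
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 24000MB);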


I also agree that the article was well written with regard to data files (MDFs). But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons. When I first saw the title of the article in the newsletter today, I was all ready to jump into the debate and take the opposite side. Then I read all of the comments and noticed that most of what I would have written had already been said last year when the article first came out.
Post #1223247
Posted Friday, December 16, 2011 11:14 AM


SSC-Forever

JamesMorrison (12/16/2011)
But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons.


It's not perfectly appropriate. The log file shouldn't be shrunk unless it definitely, absolutely does not need to be the size it is. Far too many people shrink for no good reason and do so on a regular basis and, especially with inappropriate autogrow settings, that is harmful.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1223248
Posted Friday, December 16, 2011 11:25 AM


SSCommitted

James,

The quote from Kimberly's blog
In the end, the best amount is to allocate with a value of 8000MB instead of 8GB.
was actually related to the bug in SQL Server for Log file growth increments of 4GB (i.e. exactly 4096MB), not a specific recommendation for Log File sizing. As with many settings in SQL Server, the only answer for Log File sizing is 'it depends'; you need to do due diligence in file sizing. If you have a 100MB database, an 8000MB Log file is inappropriate; likewise, if you have a 100GB database that has update activity in the 10MB range per day with hourly T-Log backups, an 8000MB log file could be argued to be inappropriate.
Set the Log file size based on projected update activity, or pick a percentage of the database size if you don't have the stats on update activity. In my last job, we used to set the log to 25% of the data size for the databases where we didn't have the stats.
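If you want to sanity-check existing databases against a rule of thumb like that 25%, something along these lines works (a rough sketch; sys.master_files reports sizes in 8 KB pages):

-- Compare total log size to 25% of total data size, per database (pages converted to MB).
SELECT  DB_NAME(database_id)                                                   AS DatabaseName,
        SUM(CASE WHEN type_desc = 'ROWS' THEN size ELSE 0 END) * 8 / 1024     AS DataMB,
        SUM(CASE WHEN type_desc = 'LOG'  THEN size ELSE 0 END) * 8 / 1024     AS LogMB,
        SUM(CASE WHEN type_desc = 'ROWS' THEN size ELSE 0 END) * 8 / 1024 / 4 AS LogAt25PctMB
FROM    sys.master_files
GROUP BY database_id;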



Post #1223256
Posted Friday, December 16, 2011 11:46 AM
SSC-Enthusiastic

GilaMonster (12/16/2011)
JamesMorrison (12/16/2011)
But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons.


It's not perfectly appropriate. The log file shouldn't be shrunk unless it definitely, absolutely does not need to be the size it is. Far too many people shrink for no good reason and do so on a regular basis and, especially with inappropriate autogrow settings, that is harmful.


Christ, don't you read what other people write before you mouth off?
You seem more interested in just arguing instead of sharing info. It seems to be a prevailing trend in your responses on every topic.
Just accept that you are wrong; I don't think it is worth the effort to cite examples of why, as you are not open to listening on the topic.
Post #1223271
Posted Friday, December 16, 2011 11:57 AM
SSC-Enthusiastic

Simon Facer (12/16/2011)


As with many settings in SQL Server, the only answer for Log File sizing is 'it depends'; you need to do due diligence in file sizing. If you have a 100MB database, an 8000MB Log file is inappropriate; likewise, if you have a 100GB database that has update activity in the 10MB range per day with hourly T-Log backups, an 8000MB log file could be argued to be inappropriate.


That is all very correct. It depends. And in these types of discussions I am sure we can all find an example to cite in opposition to any advice that anyone else could provide.

My comments on the recommended 8,000 MB size were more aimed at the larger databases, in the 100+ GB range, where performance issues start happening that require these micro-level adjustments. For smaller databases, these decisions on the correct size of the log file in relation to the DB are far less critical. The hardware these days just makes those issues so minor as to be a waste of time to discuss. On a 1,000 MB database I don't waste any time thinking about those issues. I give it a 1,000 MB log file and forget about it. 1 GB of NetApp SAN space costs something like $12. We buy SAN space 5 terabytes at a time. It is barely worth debating performance on log files for databases that small. It is a rounding error.
Post #1223275
Posted Friday, December 16, 2011 12:07 PM


SSC-Forever

JamesMorrison (12/16/2011)
GilaMonster (12/16/2011)
JamesMorrison (12/16/2011)
But the article really should have clarified that SHRINKFILE is perfectly appropriate to use on Log files for a variety of reasons.


It's not perfectly appropriate. The log file shouldn't be shrunk unless it definitely, absolutely does not need to be the size it is. Far too many people shrink for no good reason and do so on a regular basis and, especially with inappropriate autogrow settings, that is harmful.


Christ, don't you read what other people write before you mouth off?


I did read what you said.
Yes, shrinking a log is far less harmful than shrinking data files. Yes, it is safe to do in Simple recovery (and in fact in all recovery models), in that it won't cause any problems and won't directly harm performance. Yes, if you have appropriate autogrow settings (which you have), it's even safer to do, because then any further autogrows won't cause log fragmentation.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1223282
Posted Friday, December 16, 2011 3:31 PM
Old Hand

As others have said, this is a really good discussion. I'm glad this article got reposted. A couple of months ago we inherited a couple of SQL 2005 servers, one of which was really tight on space. The storage arrays for this reporting server have maxed-out drive bays, so 1.5 TB is the limit.

The previous DBA put together what I think is a pretty ingenious script that shrinks a given DB down to 10% free in 1 GB increments.

RAISERROR(N'STARTING: USE [dbInetLog]; DBCC SHRINKFILE(dbInetLog_Secondary, 68362); ', 1, 1) WITH nowait;
USE [dbInetLog]; DBCC SHRINKFILE('dbInetLog_Secondary', 68362);
GO
RAISERROR(N'STARTING: USE [dbInetLog]; DBCC SHRINKFILE(dbInetLog_Secondary, 67362); ', 1, 1) WITH nowait;
USE [dbInetLog]; DBCC SHRINKFILE('dbInetLog_Secondary', 67362);
GO
RAISERROR(N'STARTING: USE [dbInetLog]; DBCC SHRINKFILE(dbInetLog_Secondary, 66362); ', 1, 1) WITH nowait;
USE [dbInetLog]; DBCC SHRINKFILE('dbInetLog_Secondary', 66362);
GO

etc... down to 10% free

I'm sure he did it this way so it would yield every so often in case user reports were being run against it.
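For what it's worth, the same incremental approach can be written as a loop instead of a generated batch. A rough sketch, reusing the file name from the script above and a made-up final target (DBCC SHRINKFILE takes its target size in MB):

USE [dbInetLog];
GO
DECLARE @current INT, @target INT;

-- Current file size in MB (sys.database_files stores size in 8KB pages).
SELECT @current = size * 8 / 1024
FROM sys.database_files
WHERE name = N'dbInetLog_Secondary';

SET @target = 60000;  -- placeholder: the size that leaves about 10% free

WHILE @current > @target
BEGIN
    -- Step down 1000 MB (~1 GB) at a time, never past the target.
    SET @current = CASE WHEN @current - 1000 < @target THEN @target ELSE @current - 1000 END;
    RAISERROR(N'Shrinking to %d MB', 1, 1, @current) WITH NOWAIT;
    DBCC SHRINKFILE (N'dbInetLog_Secondary', @current);
END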

Would you say this is better or worse than just taking it down to the target free space all at once?

And while I'm not thrilled to be doing this at all after reading all this about fragmentation, I had no choice but to do it each morning, because another day's worth of data into this DW meant an autogrowth here or there and we were back to square one, with no free space.

I finally did get some space freed up, so hopefully I won't have to keep doing this.

Thanks,
Ken
Post #1223379
Posted Saturday, December 17, 2011 2:07 AM
SSC Rookie

The article contradicts the title.

SHRINKFILE is not "a very bad thing" and the article confirms that. It is no different than any other tool in the arsenal of a DBA or developer. Use with caution.

If the title was meant to compel one to read the article, it worked. I did for that very reason, because I use SHRINKFILE extensively without any of the problems that the author mentions (not suggesting they don't exist, however). I was curious to see if I had missed something that, over the course of thousands of SHRINKFILE applications, just hasn't reared its ugly head.

I use SHRINKFILE with 1 MB as the desired size, which means that SQL Server will remove all the unused space it can. I understand that on the operational side a penalty is due when the server has to increase the size of the file again, so caution is a good playmate here.
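In other words, something like this, with a placeholder logical file name; SHRINKFILE won't go below the space actually in use, so a 1 MB target simply means "release everything unused":

DBCC SHRINKFILE (N'YourDataFile', 1);  -- 1 MB target: release as much unused space as possible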

Where SHRINKFILE becomes unusable, purely for practical reasons, is with very large files. In my experience, I instead write a script that renames the table and the other objects related to it, such as indexes and constraints, SELECTs the entire table INTO a new table with the original name, recreates those attributes that SELECT INTO loses (creating the clustered index first, and then the other indexes on the new table), and drops the original table (or keeps it until one is sure that the operation was successful), and I'm done. On a 100 GB file I abandoned a straight SHRINKFILE operation after 36 hours, while I could successfully do the entire operation with the above approach in about 45 minutes.
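A bare-bones sketch of that rename-and-copy approach, against a hypothetical dbo.BigTable with a key column Id (constraints, permissions and non-clustered indexes would still need to be scripted back in):

-- Move the existing table aside; sp_rename only changes metadata.
EXEC sp_rename 'dbo.BigTable', 'BigTable_old';
GO
-- Copy the data into a fresh table that takes the original name.
SELECT * INTO dbo.BigTable FROM dbo.BigTable_old;
GO
-- Recreate the clustered index first, then the remaining indexes and constraints.
CREATE CLUSTERED INDEX CIX_BigTable ON dbo.BigTable (Id);
GO
-- Keep the old copy until the new one is verified, then drop it.
DROP TABLE dbo.BigTable_old;
GO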

One could question the wisdom of creating a new filegroup, as the author suggested, because databases with many filegroups must conform to a specific naming convention to allow automated scripts to deal with the large workload, such as sliding windows, backups, and so on. Creating a new filegroup would require a new, out-of-band naming convention and would break the other processes that depend on the convention.

Just my thoughts.
Post #1223437
Posted Saturday, December 17, 2011 9:18 AM
Forum Newbie

I agree fully. This article should be called STEPS TO TAKE TO PROPERLY USE SHRINKFILE. I have a database that we pre-size to a large value, in case large amounts of data come in, to avoid auto-growth. Once the database has been used for its "event", we make a copy of the database and save it for viewing as an archive. No longer an active database, it has no use for the extra space in its data filegroup files or in the trans log, as few transactions act on this database.

So I clone the database, then switch the cloned copy to the SIMPLE recovery model, then shrink all files to the smallest they can go (DBCC SHRINKFILE (logical_file_name, 1) for each file). Then I set autogrowth for all files to a reasonably low level (ALTER DATABASE db_name MODIFY FILE ( NAME = N'CacheData_1', FILEGROWTH = 25600KB )). Then I rebuild all indexes, which defragments all the indexes that were fragmented by the shrink, and updates stats with a 100% sample, so indexes are at prime readiness. The database I start with is often 30 GB in size, and will shrink to 5 GB, so the disk savings (and therefore performance on the clone database) are significant. If you know what you are doing, and all the effects of SHRINKFILE, then the command is a useful and appropriate tool in the toolkit of a SQL Server DBA.
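Roughly sketched against a hypothetical archive copy named CacheDB with one data file and one archived table (all names are placeholders), that workflow looks like:

-- The archive copy only needs minimal logging.
ALTER DATABASE [CacheDB] SET RECOVERY SIMPLE;
GO
USE [CacheDB];
GO
-- Shrink each file to the smallest size it can reach (repeat per data and log file).
DBCC SHRINKFILE (N'CacheData_1', 1);
GO
-- Keep future growth modest on the archived copy.
ALTER DATABASE [CacheDB] MODIFY FILE (NAME = N'CacheData_1', FILEGROWTH = 25600KB);
GO
-- Rebuild indexes to undo the fragmentation the shrink introduced; a rebuild also refreshes index statistics with a full scan.
ALTER INDEX ALL ON dbo.SomeArchivedTable REBUILD;
GO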

Brian
Post #1223484