Files in SQL Server need to grow as the database grows, and in very specific circumstances they need to be shrunk as well, with additional maintenance. Log files and data files are quite different in this respect, so I created a separate post for Shrinking Database Data Files.
When should you shrink log files?
First, if a large process that will never happen again caused your log file to grow to a ridiculous size, then it's reasonable to shrink the file down to about 150% of the most space you expect to use in the next year. Cleaning this up will also cut down on restore times.
Second, if your log file grew in small increments then you're left with a large number of VLFs, as I talked about in my post on file growths. The only way to get rid of these is to shrink your log file down, then have it grow in larger increments. When I do this, I also grow the file manually immediately afterwards in the same script.
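If you want to see how bad the VLF situation is before deciding, a quick sketch is to count the rows DBCC LOGINFO returns, since it emits one row per VLF for the current database:

```sql
-- Run in the context of the database you're checking;
-- each row in the output is one VLF, so a huge row count
-- means the log grew in lots of small increments.
DBCC LOGINFO;
```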
What happens when you shrink log files?
The log file will find a group of empty VLFs and delete them. The end, your file is shrunk.
Of course there’s a little more to it than that. The parts I’m most worried about are why did you need that space in the first place, why do you need the space freed up, and what happens when you need it again?
To understand why you needed it, we need to talk about what a log file really is for a second. It holds every change made to your database since the last point in time where it no longer needed all the changes saved off. That point is typically either the oldest open transaction (so it can roll back any change if the transaction fails) or, if your database is in full or bulk-logged recovery, the last time you ran a log backup. It can also be held up by other functionality such as database mirroring, but you typically have an experienced DBA on hand for that.
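A quick way to see what's holding onto your log space right now is the log_reuse_wait_desc column in sys.databases, which names the reason (LOG_BACKUP, ACTIVE_TRANSACTION, etc.) each database can't reuse its log:

```sql
-- What is keeping each database from reusing its log space?
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases;
```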
Why is this common to do?
There is typically at least one point in every large database's life where the log needs to be shrunk, for a couple of reasons.
Going off of the two scenarios above, the first two reasons below are large processes that ran and won't run again. The third reason I go over here is too many small VLFs.
1. Log backups weren’t set up
The first large process is when a database is set up in full or bulk-logged recovery, but the log backups weren't set up until the drive filled. This is because the log needs to be backed up before its space can be reused, which means it will grow until it runs out of space. It's common to miss this when the backup plan needs you to specify which databases to back up, as opposed to backing up all databases. Once you take the first log backup, it leaves you with a huge log file where you don't expect to hit that size again, and it's ok to shrink it.
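Taking that first log backup is a one-liner; a sketch with placeholder database name and backup path (swap in your own, and ideally schedule this to recur):

```sql
-- First log backup; after this the log space can be reused
-- and the file is safe to shrink. Name and path are placeholders.
BACKUP LOG [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase_log.trn';
```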
2. Runaway transaction
The second large process is when someone runs something that didn't go as planned. A transaction, even an implied transaction for a single statement, stayed open forever, or close enough to forever to cause us a headache. This caused a ton of excessive growth in the log files, and that space will only ever be used again when someone says "oops" again. In this case, evaluate whether you'll need that space for another process before you shrink, because someone will say "oops" again someday. If the drive is dedicated to log files and you only have one large database, consider shrinking it just enough to clear the warnings in your monitoring software.
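While the "oops" is still happening, DBCC OPENTRAN will point you at the oldest open transaction holding the log hostage (database name is a placeholder):

```sql
-- Reports the oldest active transaction in the database,
-- including its SPID and start time, so you can decide
-- whether to wait it out or kill it.
DBCC OPENTRAN ('YourDatabase');
```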
3. VLF cleanup
The final reason is having too many VLFs. When a log file is created or grows, it splits the new space into smaller chunks called Virtual Log Files that SQL Server cycles through when writing logs. If the log is growing in small increments, it creates tons of these, slowing SQL Server down. The point here is that you resolve the issue by shrinking the log (deleting the smaller VLFs), then growing the log to the desired size in larger chunks (creating bigger VLFs). The details on finding what size your VLFs are, what size new VLFs will be, and just about everything else VLF-related are in my post Database Log VLFs.
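Putting that together, here's a sketch of the shrink-then-regrow approach. The database and logical file names are placeholders; find your logical log file name with SELECT name FROM sys.database_files WHERE type_desc = 'LOG':

```sql
USE [YourDatabase];

-- Shrink the log as far as it will go, clearing out the small VLFs.
-- You may need a log backup first, and may need to run this more
-- than once if active VLFs are at the end of the file.
DBCC SHRINKFILE (N'YourDatabase_log', 1);

-- Grow it back in 8,000 MB steps; each step creates a new set of
-- larger VLFs. Repeat until you hit your target size.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8000MB);

ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 16000MB);
```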
What happens when log files grow?
If you’re going to shrink a log file then you’re increasing the chances that it will have to grow again, so we need to talk about growth for a second.
When a data file grows it can often take advantage of instant file initialization, where it just reserves unallocated space on the disk, making it a quick process. Without that feature turned on, or if you're using Transparent Data Encryption, SQL Server has to zero out the disk to get rid of any data that used to be there. Kimberly Tripp (b|t) goes into a lot more detail in her post Instant Initializations – What, Why and How?.
Log files can’t use this feature. They write those 0’s every time the log grows, and SQL Server is patient enough to wait for that to happen. Here’s how that went over the last time I overheard one of my users “discussing” this with my biggest server.
SQL Server: My log’s full, hold on while I get some more space.
User: I don’t care, I just want my query to finish.
SQL Server: I’m told to grab 8,000 MB at a time, please be patient.
User: Not a problem, there’s 8,000 MB right there, please hurry.
SQL Server: I only use clean space, hold on while I ask the OS to write about 8,388,608,000 zeroes for me. This isn’t a problem, I’m patient.
User: What? NO! Well, at least that gives me enough time to open a ticket for the DBA.
Ticketing System: Hello again, I haven’t seen you for several minutes.
The moral of this story is that you don’t want autogrowth to happen. You want it turned on for emergencies and one-time growths, but it’s better to grow manually in a maintenance window. Here’s how that maintenance window conversation goes:
DBA: I see you’ve hit 90% utilization on your log a couple times, how about we add 8,000 MB.
SQL Server: Oh, I didn’t notice, but I can add that space now. I only use clean space, hold on while I ask the OS to write about 8,388,608,000 zeroes for me. This isn’t a problem, I’m patient.
DBA: There’s no rush, everyone can use the existing log space while you work on that.
Ticketing System: Zzzzzzz…..
The 8,000 MB I used here isn't a made-up number; it's a best practice due to VLF sizes on larger databases. Most DBAs set autogrowth to less than 8,000 MB because they'd rather deal with smaller VLFs than irate users, and I'd recommend 1,000 MB for autogrowth on larger databases. To be fair, that number drops to 500 MB starting in SQL 2012, but that's still a lot of waiting during peak hours.
On smaller databases you aren’t going to want to grow log files by 8,000 MB off hours, or even 1,000 MB for autogrowth. The growth should be something rather significant for the database yet size appropriate, and I typically like to do close to 50% of the current file size. So, if I have a 500 MB log, I’ll set it to 250 MB autogrowth and do manual growths of the same size. For smaller databases where 50% log growth would be less than 100 MB, I’m not always worried about manually growing the log files since there are diminishing returns on my time and effort.
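Setting a fixed autogrowth increment like the ones above is a one-line change; a sketch with placeholder names (the numbers should come from your own sizing as discussed here):

```sql
-- Fixed-size autogrowth for emergencies; the real growths
-- should still happen manually in a maintenance window.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 1000MB);
```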
My post Database Log VLFs gets into the details of why those are best practices and what it means to you.
Tim Ford (b|t) noticed how we moved to advanced topics and asked us to remember where we came from in his Entry-Level Content Challenge. This is my second post to take on his challenge, and I hope others will take him up on this!