Autogrow guidelines

By Steve Jones, 2011/12/02

I always enable autogrow on my databases. However, it’s there for emergencies, not as a space management tool. I monitor disk space, and I grow my files manually as space runs low. I want to control the growth, but in the event of a runaway process or some unexpected event, I want autogrow enabled to hopefully prevent a database crash.
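
The monitoring half of this approach can be as simple as a scheduled check of free space on the data drive. A minimal sketch in Python, assuming a hypothetical path and threshold (neither is from the post; the actual check would point at the drive holding the data files):

```python
import shutil

def low_on_space(path, threshold_gb=50):
    """Return True if free space on the drive holding `path` has
    fallen below the threshold, signaling a manual file growth."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes < threshold_gb * 1024**3

# Example: alert when the (hypothetical) data drive drops below 50GB free.
if low_on_space("/", threshold_gb=50):
    print("Time to grow the data files manually.")
```

A check like this, run on a schedule, is what lets autogrow stay a safety net rather than the primary space-management mechanism.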

What level of autogrow do you enable? That was a question I saw recently and it made me stop and think a bit. I asked the question on Twitter, but got very few responses and no real guidance from others.

In my mind, you want to enable a level of autogrow that is likely to handle something unexpected in a single growth, or maybe a couple of growths.

The two options for autogrowth are a fixed size (in MB) and a percentage of the current file size.

In general, I think a fixed-size growth is the setting to choose. It offers more control, and as your database grows, it's unlikely you want to grow by a percentage. A 10% growth of a 100MB database is 10MB, almost a rounding error on many of today's drives.

However, a 10% growth on a 2TB database is roughly 200GB, which could easily exceed the free space on the drives that hold the file. (The autogrow specification is set per file, in the database's file properties.)

As the file grows, free disk space goes down while the size of each percentage-based growth goes up, exacerbating the problem.
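
To put numbers on it, here is a quick sketch of how a 10% increment scales with file size, compared with a fixed 2GB increment (the 2GB figure is just an illustration; the exact 10% of a 2TB file is about 205GB, which the post rounds to 200GB):

```python
GB = 1024  # working in MB; 1GB = 1024MB

for size_mb in (100, 10 * GB, 500 * GB, 2048 * GB):  # 100MB up to 2TB
    pct_growth = size_mb * 0.10   # 10% autogrow increment
    fixed_growth = 2 * GB         # fixed 2GB autogrow increment
    print(f"{size_mb / GB:8.2f}GB file: 10% grows {pct_growth / GB:7.2f}GB, "
          f"fixed grows {fixed_growth / GB:.2f}GB")
```

The fixed increment stays constant no matter how large the file gets, which is exactly the control argument made above.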

So what are the guidelines?

I think you have to look at each database and what level of data growth is possible there. For a database that does ETL work, it's more likely you could have an unexpectedly large or duplicate import in a process that would cause larger growth.

However, the question I was asked was about setting a guideline for hundreds of databases, where in the short term it's impractical to review every one. For those, I offer up these guidelines, though I'm happy to have someone give better guidance.

MDF Size               File Autogrowth
mdf < 1GB              100MB
1GB < mdf < 50GB       2GB
50GB < mdf < 200GB     5GB
200GB < mdf < 1TB      10GB
mdf > 1TB              50GB
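
For applying this across hundreds of databases, the table can be encoded directly. A sketch in Python (sizes in GB, with the sub-1GB tier returned as 0.1GB to represent 100MB; the function name is mine, not from the post):

```python
def suggested_autogrowth_gb(mdf_size_gb):
    """Map an MDF file size (GB) to the suggested fixed
    autogrowth increment (GB) from the table above."""
    if mdf_size_gb < 1:
        return 0.1       # 100MB
    if mdf_size_gb < 50:
        return 2
    if mdf_size_gb < 200:
        return 5
    if mdf_size_gb < 1024:  # 1TB
        return 10
    return 50

# Example: a 600GB database falls in the 200GB-1TB tier.
print(suggested_autogrowth_gb(600))  # 10
```

A script like this could walk the file sizes across all servers and flag any database whose current autogrow setting disagrees with its tier.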


These are guesses on my part, based on some experience, but the largest database I ever managed was a 600GB one, so I’m guessing on the TB scale.

I’d welcome your comments and experience in this area.
