
Autogrow guidelines

I always enable autogrow on my databases. However, it’s there for emergencies, not as a space management tool. I monitor disk space, and I grow my files manually as space runs low. I want to control the growth, but in the event of a runaway process or some unexpected event, I want autogrow enabled to hopefully prevent a database crash.

What level of autogrow do you enable? That was a question I saw recently and it made me stop and think a bit. I asked the question on Twitter, but got very few responses and no real guidance from others.

In my mind, you want to set a specific autogrow level that is likely to handle something unexpected in a single growth, or maybe a couple of growths.

The two options for autogrowth are:

  • percentage
  • fixed size growth

In general, I think a fixed-size growth is the setting to choose. It offers more control, and as your database grows, it's unlikely you want to grow by a percentage. A 10% growth of a 100MB database is 10MB, almost a rounding error on many of today's drives.

However, a 10% growth on a 2TB database is 200GB, which could easily exceed the free space on the drives that hold the file.

[Screenshot: the database properties Files page, which is where you set the autogrow specifications.]
As the file size increases, the available disk space goes down, but the size of a percentage-based growth goes up as well, exacerbating the problem.
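A fixed-size growth is set per file with ALTER DATABASE. Here is a minimal sketch; the database name (Sales) and logical file name (Sales_Data) are hypothetical, and 2GB is just one of the tiers discussed below:

```sql
-- Switch a data file from percentage growth to a fixed 2GB increment.
-- Database and logical file names here are examples, not from the post.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, FILEGROWTH = 2048MB);

-- Verify the setting; is_percent_growth = 0 means a fixed-size growth.
-- For fixed growth, the growth column is reported in 8KB pages.
SELECT name, size, growth, is_percent_growth
FROM Sales.sys.database_files;
```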

So what are the guidelines?

I think you have to look at the data growth in each database and what is possible for that database. For a database that does ETL work, I think it's more likely you could have an unexpectedly large or duplicate import in a process that would cause larger growth.

However, the question I was asked was about setting a guideline for hundreds of databases, where in the short term it's impractical to review every database. For those, I offer up these guidelines, though I'm happy to have someone give better guidance.

MDF Size                   File Autogrowth
< 1GB                      100MB
1GB  < mdf size < 50GB     2GB
50GB < mdf size < 200GB    5GB
200GB < mdf size < 1TB     10GB
> 1TB                      50GB
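For hundreds of databases, the tiers above could be applied in bulk rather than by hand. This is a rough sketch of that idea, not a vetted script: it reads data-file sizes from sys.master_files (reported in 8KB pages) and generates ALTER DATABASE statements matching the table. Review the output before running any of it:

```sql
-- Generate ALTER DATABASE statements applying the guideline tiers above.
-- size is in 8KB pages, so size * 8 / 1024 gives the file size in MB.
SELECT
    'ALTER DATABASE ' + QUOTENAME(DB_NAME(mf.database_id)) +
    ' MODIFY FILE (NAME = ' + QUOTENAME(mf.name) + ', FILEGROWTH = ' +
    CASE
        WHEN mf.size * 8 / 1024 < 1024          THEN '100MB'   -- < 1GB
        WHEN mf.size * 8 / 1024 < 50 * 1024     THEN '2048MB'  -- < 50GB
        WHEN mf.size * 8 / 1024 < 200 * 1024    THEN '5120MB'  -- < 200GB
        WHEN mf.size * 8 / 1024 < 1024 * 1024   THEN '10240MB' -- < 1TB
        ELSE '51200MB'                                         -- >= 1TB
    END + ');' AS fix_statement
FROM sys.master_files AS mf
WHERE mf.type_desc = 'ROWS';   -- data files only, not log files
```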


These are guesses on my part, based on some experience, but the largest database I ever managed was a 600GB one, so I’m guessing on the TB scale.

I’d welcome your comments and experience in this area.

Filed under: Blog Tagged: administration, sql server, syndicated

The Voice of the DBA

Steve Jones is the editor of SQLServerCentral.com and visits a wide variety of data related topics in his daily editorial. Steve has spent years working as a DBA and general purpose Windows administrator, primarily working with SQL Server since it was ported from Sybase in 1990. You can follow Steve on Twitter at twitter.com/way0utwest


Posted by Fabrizio Faleni on 3 December 2011

As file growth is I/O intensive and bad for overall performance, it is important that it neither happens too often nor is too "heavy" from an I/O point of view, as it is likely to happen during work hours.

My personal way of setting autogrow is that the space added must be enough for a whole day of activity. This is a balance that I try to weigh with my experience: some financial application databases grow more near the month change, so I take the average daily growth value and multiply it by 1.5.

Posted by Steve Jones on 3 December 2011

Good guidelines for a specific application, and perhaps that's not a bad calculation to make in a script.
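One possible sketch of that calculation, using full-backup sizes from msdb as a rough proxy for database size (an assumption, since the post doesn't specify how growth would be measured): take the average daily growth over the last 30 days and multiply by the commenter's 1.5 factor.

```sql
-- Estimate average daily growth per database from full-backup history,
-- then apply the 1.5 multiplier suggested in the comment above.
WITH daily AS (
    SELECT database_name,
           CAST(backup_start_date AS date) AS backup_day,
           MAX(backup_size) / 1024 / 1024  AS size_mb   -- bytes -> MB
    FROM msdb.dbo.backupset
    WHERE type = 'D'   -- full backups only
      AND backup_start_date >= DATEADD(DAY, -30, GETDATE())
    GROUP BY database_name, CAST(backup_start_date AS date)
)
SELECT database_name,
       (MAX(size_mb) - MIN(size_mb))
           / NULLIF(COUNT(*) - 1, 0) * 1.5 AS suggested_growth_mb
FROM daily
GROUP BY database_name;
```

This assumes regular full backups exist; compressed backups or long retention gaps would skew the numbers.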

Posted by rooierus on 16 April 2012

I was told to always grow in powers-of-2 increments? Like always growing in 128, 512, 1024 increments?
