SQLServerCentral is supported by Redgate

Should I have AutoGrowth enabled on my SQL Server Database?

I was reading some forum posts yesterday where people were complaining that the transaction log had grown during index maintenance, and they were asking:

  1. Should they have autogrowth enabled?
  2. Should they restrict the size of the file?

I think autogrowth should be enabled on all databases, but it should only be used as a fallback, or fail-safe. The DBA should size the database appropriately up front, then monitor database size and space used to decide if and when to grow it again manually, always increasing the size enough for reasonable medium-term use. Autogrowth is fine as a fail-safe in case you miss something, but if you rely on it to size your database you will find it kicks in at the most inappropriate times, over which you have no control, and that can affect performance.

When setting up autogrowth I like to set the files to grow by fixed amounts in megabytes, and not let the files grow by a percentage of their current size. This is especially important for the log, because as the size of the log increases, so does the number of megabytes each percentage-based growth adds: 10 percent of 100MB is not the same as 10 percent of 200MB, and the size of each log growth affects the size of the Virtual Log Files (VLFs) created inside the transaction log. SQL Server MVP Kimberly Tripp covers this in detail on her blog.

So, to summarise: I like to enable autogrowth as a fail-safe, set the growth to fixed amounts of megabytes, monitor database file sizes, and set the file size and growth to appropriate values manually when needed.
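As a rough sketch of what this looks like in practice (the database name and logical file names here are hypothetical; adjust them to your own environment), you can check the current growth settings and then switch from percentage growth to fixed megabyte increments with ALTER DATABASE:

```sql
-- Check the current size and growth settings of the files in this database.
-- size and growth are stored in 8KB pages, so divide by 128 to get MB.
SELECT name,
       size / 128 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Hypothetical database and logical file names: grow by a fixed 512MB
-- rather than a percentage of the current size.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 512MB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 512MB);
```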

Regarding the option to restrict the size of a database file: I generally leave that set to unrestricted, because I monitor free disk space and have alerts set up on it, so I can address storage before it becomes an issue, and I don't want my database falling over because I set a limit on the size of a file. I'm sure other people have different views on this, and I would be more than interested to know, in the comments below, if people do things differently and why.
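One way to keep an eye on that free space from within SQL Server (a sketch; sys.dm_os_volume_stats requires SQL Server 2008 R2 or later) is to check the volumes hosting the current database's files:

```sql
-- Free space on each volume that hosts a file of the current database.
-- sys.dm_os_volume_stats is available from SQL Server 2008 R2 onwards.
SELECT DISTINCT
       vs.volume_mount_point,
       vs.available_bytes / 1048576 AS free_mb,
       vs.total_bytes / 1048576 AS total_mb
FROM sys.database_files AS f
CROSS APPLY sys.dm_os_volume_stats(DB_ID(), f.file_id) AS vs;
```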


Posted by Tony Mulvihill on 19 December 2010

Interesting thoughts,

The problem with those thoughts is that there is often no DBA to monitor and maintain the databases, and this is the case for a great many small/medium organisations.

Many enterprise applications have a SQL Server backend database and are deployed to sites where there is no DBA, nor in fact any reasonable level of IT support.

Often the IT support is contracted to a small/medium support provider (sometimes a single contractor) who themselves have no database or SQL Server experience.

The typical requirement for these sorts of sites is a set-and-forget configuration, for which autogrow will be part of the desired standard functionality rather than just a fail-safe.

Anyone else have any similar or differing experiences with small/medium sites?


Tony M


Posted by Gethyn Ellis on 20 December 2010

Tony, I have to disagree. Firstly, if it is a "set and forget" type of configuration, then when you set up the databases initially, set the size of the database for the next 6-12 months. Then when you come back in, say, 6 months (I assume even "set and forget" environments will need the occasional health check), the support person can check the database size again and see if it needs to grow, or even if it has auto-grown while you have been away, then re-size accordingly, all the while leaving auto-grow turned on as a fail-safe. This will help prevent performance issues down the road. For example, if your database is going to reach 20GB in size over, say, a year, then taking the defaults (1MB for the log with 10% auto-growth, and 3MB for the data file with auto-growth set to 1MB) you are going to have a lot of auto-grows over the course of the year. You'd be better off setting it to 20GB right up front with auto-growth set to around 1024MB, or 1GB.
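Taking that 20GB example, a sketch of pre-sizing the database at creation time (the database name, logical file names, and file paths here are purely illustrative) might look like:

```sql
-- Pre-size the database for the year rather than relying on the tiny
-- defaults inherited from model. Names and paths are illustrative only.
CREATE DATABASE SalesDB
ON PRIMARY
(
    NAME = SalesDB_Data,
    FILENAME = 'D:\Data\SalesDB.mdf',
    SIZE = 20480MB,        -- 20GB up front
    FILEGROWTH = 1024MB    -- fail-safe growth in fixed 1GB steps
)
LOG ON
(
    NAME = SalesDB_Log,
    FILENAME = 'E:\Logs\SalesDB.ldf',
    SIZE = 4096MB,
    FILEGROWTH = 1024MB
);
```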

I would recommend that any business, large or small, that runs an enterprise application with SQL Server as the backend should treat the data, and hence SQL Server, as very important to them, and hire a contractor who has at least some SQL Server skills.

Posted by Imran Ashraf-452633 on 22 December 2010

My personal preference is to try to size the files correctly up front, ensuring there is sufficient space. I have been called into situations where the files were allowed to grow and fill the drive. Naming no names, one company had a database set to the full recovery model but was not backing up the transaction logs. Let's just say their smallish DB had a log file 30GB in size that filled the drive. Autogrow is good as a fail-safe, as the writer mentions, but only when you have already sized the files appropriately; it is there for the unforeseen. The main point is to monitor DB growth, and from that you will get an idea of how quickly space is being used. Growing the DB in small chunks is, from my point of view, a no-no: grow it in larger chunks so you don't hit fragmentation at disk level.
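A quick way to spot the full-recovery-with-no-log-backups situation described above (a sketch) is to check each database's recovery model against what is currently holding its log up:

```sql
-- Databases in FULL recovery whose log cannot be truncated because no
-- log backup has been taken are accidents waiting to happen.
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases
WHERE recovery_model_desc = 'FULL'
  AND log_reuse_wait_desc = 'LOG_BACKUP';
```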

Posted by Paul Randal on 22 December 2010

Hey Gethyn - agree with your article 100%.

@Tony - I hear you too - the issue is with the vendor (including MS) who builds on top of SQL Server without properly understanding how it needs to be managed. I see this all the time.

@Gethyn - this is easier said than done if the company isn't *aware* that they have a SQL Server in there that needs to be managed, and how to manage it. Onus is on the vendors again. Hard problem to solve as vendors are very hard to get to change advice.

Posted by Carl on 22 December 2010

I actually agree with Tony to a large degree.  Take a vendor who pushes 100 or more clients in a year with SQL Server as the backend.  Then each client changes its usage during the year.  There is no physical way to know how much usage will change for each individual client.  After a year or two you can go back, look, and make adjustments, but that's an after-the-fact operation.  The volume of installs is simply prohibitive to active monitoring for any small-to-medium sized vendor (under 100 employees).  Active monitoring also brings you into the arena of support contracts and intellectual property rights vs data ownership rights.

I do agree that common sense should be used.  For example, if you know that for a projected volume the DB will be about XX GB in the first 6 months and will grow about XX MB per month after that, you SHOULD size it and change the default auto-growth settings to match the 6-month or 1-year projections.

Posted by Markus on 22 December 2010

We have 40+ production SQL Servers and hundreds of databases.  Yes, one should attempt, to the best of one's knowledge, to size the db to what it is supposed to grow to. However, a lot of software vendors can be wrong!  Over the years I have found that to be true.  Also, sometimes a table is a logging table, something in the application goes haywire, and the table grows to millions and millions of rows when it is supposed to hold only hundreds.  I always set autogrow on.  If not, a production system that affects hundreds and hundreds of users in a 24x7 app can be down just because the database is 100% full.  Personally, I would rather the database autogrow than make one of our store managers get frustrated and call our Help Desk because he cannot do his job at 1AM, and then have to wait almost an hour for someone to add space to a database because it isn't set to autogrow.

Posted by a-brbeck on 22 December 2010

I'm almost inclined to think that the autogrow feature should have been removed from the product. Possibly it has its place under very specific circumstances, but in general I just see it as Microsoft allowing (or even helping) the user (the owners of a SQL Server) to shoot themselves in the foot. There are numerous reasons why allowing any auto-growth is a bad idea, many of them previously mentioned. I myself have seen numerous cases where a database was set to full recovery but the transaction logs were not backed up. Instead, a job was created to truncate the log and shrink the files; Books Online specifically states that this should NEVER be done (use the simple recovery model instead). Setting up a database this way shows a lack of understanding of how the database works.

If auto-growth were removed from the product, DBAs would be more likely to be forced to understand what's really going on and how to properly deal with it.

Whether you buy a product or build your own, you should have daily monitoring of database size for any medium or larger business database, including trending to tell you how fast your database is growing and when you are likely to run out of space.

I would be tempted to allocate the whole drive from day one, so that running out of space means installing a new drive. In that case auto grow would fail just like manual growth would fail. (Ok, I might leave 5% of the drive free to allow for a slight bit of manual growth in an emergency.) Then daily monitor how fast the database file is filling up.

By pulling information from the DMVs or the underlying system tables, you can find the internal size of each file and record it to a table at regular intervals. You can then set up a report in Reporting Services that you look at every morning to show how the database and log have expanded or contracted over the past 24 hours. This not only helps you project when you're going to need more hard drive space (or whether you have hard drive space you're never going to need), it also helps you see what's happening in the database. For example, if the internal size of the transaction log grows unusually large one night, you know somebody did something that caused a lot of activity in that database; immediately, you know that something unusual happened and had a big impact on the system.
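A minimal sketch of that snapshot approach (the table name is hypothetical; the INSERT would typically run from a scheduled Agent job) could be:

```sql
-- One-off setup: a history table for file-size snapshots.
CREATE TABLE dbo.FileSizeHistory
(
    capture_time   datetime  NOT NULL DEFAULT GETDATE(),
    file_name      sysname   NOT NULL,
    size_mb        int       NOT NULL,  -- allocated size on disk
    space_used_mb  int       NOT NULL   -- internal space actually used
);

-- Scheduled step: record allocated and internally used size of each file.
-- size and FILEPROPERTY(...,'SpaceUsed') are in 8KB pages; /128 gives MB.
INSERT INTO dbo.FileSizeHistory (file_name, size_mb, space_used_mb)
SELECT name,
       size / 128,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128
FROM sys.database_files;
```

A Reporting Services report (or a simple query with a date-range filter) over dbo.FileSizeHistory then gives the morning growth/shrink view described above.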

Posted by a-brbeck on 22 December 2010

Oh, I almost forgot to mention that there are some built-in reports in SQL Server that help you monitor database size, although they don't show history. Monitoring those regularly, and understanding the different recovery models in SQL Server, should go a long way towards anticipating when you need to grow the database files.

Posted by Markus on 22 December 2010

I still disagree. You can never take into account the growth rate of every single database.

My beef with Microsoft is shipping master, msdb, model, and tempdb with very small initial sizes and growth increments.  I typically change the initial sizes and the growth settings, using fixed MB rather than percentage growth rates.
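As a sketch of that kind of adjustment (the sizes here are arbitrary examples, not recommendations; modeldev/modellog and tempdev/templog are the default logical file names), new databases then inherit model's settings:

```sql
-- Illustrative only: give model and tempdb sensible sizes and fixed-MB
-- growth instead of the tiny percentage-based defaults.
ALTER DATABASE model
MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 128MB);

ALTER DATABASE model
MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 64MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 512MB, FILEGROWTH = 128MB);
```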

Posted by marco on 22 December 2010

@a-brbeck: I agree with your view in one and only one situation: when the server has only a single database and the company actually has a DBA knowledgeable about the specific application this database is for.

Your model already fails when multiple databases have to reside on the physical disk(s), which is the norm in average companies. How much of the disk space are you going to allocate to each db? Without the budget for a DBA with a crystal ball who knows what each db's size will be in 3-4 years' time, this is an impossible task.

I fully agree with Paul Randal. The onus is on the vendors to create a setup script which will work for the majority of cases (customers), since the vendor has no clue about each customer's individual data needs over the couple of years the application is going to run. Only very large organisations, who can afford consultants to spend time calculating/estimating the expected data needs of each individual instance of the application, may choose to set it all up without auto-grow, with a DBA monitoring the db sizes on a day-by-day basis to foresee changes in data use and take appropriate action (schedule a manual grow at a quiet time, install additional disks, etc.).

But for the average small/medium company, the auto-grow feature of SQL Server is a godsend when installing several out-of-the-box software packages on their (single) server, each with its own SQL backend. Imagine if Excel files were implemented using SQL Server databases (not completely outlandish) and it required choosing the size up front...

Posted by Jason Brimhall on 22 December 2010

I agree with you Gethyn.  This is the same methodology I use when setting up my database files.

Posted by Gethyn Ellis on 22 December 2010

Jason, thanks for the comment. It looks like the post stirred quite a bit of debate and even got a comment from @PaulRandal.

Thanks to everyone for taking the time to leave a comment; it looks like vendor-supplied databases are the biggest bugbear for people.

Posted by Sandra Skaar on 29 December 2010

I use this setting with a maximum file size and a pre-determined growth amount for the data and log files. Sizing up front for at least a year is best, and I have alerts to warn me when a database has fewer than three growths remaining. I love, love, love Policy-Based Management in SQL 2008!
