SQLServerCentral is supported by Redgate

Some Suggested SQL Server 2008 R2 Instance Configuration Settings

Depending on your anticipated workload and which SQL Server components you have installed, there are a number of instance-level settings that you should consider changing from their default values. These include Max Server Memory, optimize for ad-hoc workloads, default backup compression, max degree of parallelism, and the number of TempDB data files for the SQL Server instance.

Max Server Memory

Max Server Memory controls how much memory can be used by the SQL Server Buffer Pool, which is where cached data is held in memory after it is read in from the disk subsystem. This setting applies only to the relational Database Engine. It has no effect on other SQL Server components, such as SSIS, SSRS, or SSAS, and it does not affect other Database Engine memory consumers such as Full-Text Search, the CLR, the procedure cache, connection memory, etc.

The idea here is to limit how much memory the buffer pool can use in order to avoid starving other SQL Server components or the operating system for memory. The table in Listing 1 has some suggested starting values for a dedicated database server that is only running the Database Engine, with no other SQL Server components running. If you have other components installed, you should adjust the Max Server Memory setting downward. The setting is dynamic, meaning that you can change it and it will take effect immediately, with no restart of SQL Server required.
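
As a quick sanity check, you can compare the buffer pool's current size against its target using the Buffer Manager performance counters exposed in sys.dm_os_performance_counters. This is a sketch; the counter names below are as they appear in SQL Server 2008 R2 (they changed in later versions), and the values are reported in 8 KB pages:

```sql
-- Current vs. target buffer pool size (cntr_value is in 8 KB pages)
SELECT counter_name,
       cntr_value,
       (cntr_value * 8) / 1024 AS size_mb
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE N'%Buffer Manager%'
  AND counter_name IN (N'Total pages', N'Target pages');
```

If "Total pages" sits well below "Target pages" after a warm-up period, the instance has not yet filled the buffer pool up to the limit you configured.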

Optimize for ad-hoc workloads

Optimize for ad-hoc workloads is an instance-level setting, new in SQL Server 2008, that is designed to help control the amount of memory used by single-use, ad-hoc query plans in the procedure cache. When it is enabled, SQL Server stores only a small stub of an ad-hoc query plan in the procedure cache the first time the plan is executed, which reduces the memory that plan consumes in the procedure cache.

With SQL Server 2005, it was common to see very large amounts of memory (often in the 6 to 8GB range) being used by single-use, ad-hoc query plans. Later builds of SQL Server 2005 included changes that reduced this problem somewhat, but it remained a big issue. Interestingly, one of the biggest offenders that generated ad-hoc query plans in SQL Server 2005 was SQL Server Agent! Another big offender was SharePoint.

In my opinion, you should always enable this setting on SQL Server 2008 and above. I really cannot think of a good reason not to do this.
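
If you want to measure how much plan cache memory single-use, ad-hoc plans are actually consuming on your instance before (or after) enabling the setting, a query along these lines against sys.dm_exec_cached_plans will show it (a sketch only; a plan with usecounts = 1 is a single-use plan):

```sql
-- Memory consumed by single-use ad-hoc query plans in the plan cache
SELECT COUNT(*) AS single_use_plan_count,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1048576 AS total_size_mb
FROM sys.dm_exec_cached_plans
WHERE objtype = N'Adhoc'
  AND usecounts = 1;
```

If the total is in the hundreds of megabytes or more, enabling optimize for ad-hoc workloads should recover most of that memory over time.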

Default backup compression

If you have a version and edition of SQL Server that supports native backup compression, I believe you should enable this setting as shown in Listing 1. This simply means that all SQL Server backups will use native backup compression by default. You can always override this setting in your backup T-SQL command. SQL Server 2008 Enterprise Edition supports native backup compression, as does SQL Server 2008 R2 Standard Edition.
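
For example, you can still control compression on a per-backup basis in the BACKUP command itself, regardless of the instance default (the database name and backup path below are placeholders, not recommendations):

```sql
-- Explicitly request compression for one backup,
-- overriding whatever the instance-level default is
BACKUP DATABASE AdventureWorks
TO DISK = N'R:\SQLBackups\AdventureWorks.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- Or explicitly skip compression for one backup
BACKUP DATABASE AdventureWorks
TO DISK = N'R:\SQLBackups\AdventureWorks_Uncompressed.bak'
WITH NO_COMPRESSION, CHECKSUM, INIT;
```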

Max degree of parallelism

Max degree of parallelism is an instance-level configuration setting that is meant to control whether, and across how many processor cores, the Query Optimizer will attempt to spread a complex or expensive query to run in parallel. The default setting is zero, which allows SQL Server to parallelize queries across as many cores as it sees fit. This default setting is usually the best choice for DW and reporting workloads, but can often be problematic for OLTP workloads. With OLTP workloads, sometimes the Query Optimizer will choose a parallel query plan when a non-parallel query plan would actually have been less expensive. If this happens a lot, it can cause a high percentage of CXPACKET waits for the instance.

In the past, many people would advise you to immediately change your max degree of parallelism setting to a value of 1 if you had an OLTP workload and you saw a high percentage of CXPACKET waits in your cumulative wait statistics for the instance. I think you should look a little deeper to try to find the actual source of the problem before you make that change. In my experience, it is quite common for “missing indexes” for important queries to cause the Query Optimizer to choose a parallel plan to try to compensate for the missing index. If SQL Server has to do a very large index or table scan, the Query Optimizer may think that it will be less expensive to parallelize the query.

If that is the case, it would probably be better to create the missing index to alleviate the source of the problem, instead of setting max degree of parallelism to 1 to treat the symptom. It is certainly worth some investigation. One big exception to this advice is if you are working with SharePoint databases (for either SharePoint 2007 or 2010). In that case, you should definitely set max degree of parallelism to 1 for the instance.
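
Before changing the setting, you can check what fraction of cumulative wait time CXPACKET actually represents with a query like this against sys.dm_os_wait_stats (a sketch; interpret the numbers in context, since some CXPACKET waits are normal and expected, and the counters accumulate since the last instance restart):

```sql
-- Each wait type's share of total wait time since the last restart
SELECT wait_type,
       wait_time_ms,
       100.0 * wait_time_ms / SUM(wait_time_ms) OVER() AS pct_of_total
FROM sys.dm_os_wait_stats
ORDER BY pct_of_total DESC;
```

If CXPACKET is at the top with a high percentage on an OLTP workload, that is your cue to go looking for missing indexes before reaching for the max degree of parallelism setting.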

Number of TempDB data files

By default, SQL Server will create one small TempDB data file (and one log file) in the default location for TempDB on your SQL Server instance. If you ignored my advice about default file locations, the TempDB files will be in a sub-directory on the same drive where your SQL Server binary files are located. This is likely your C: drive on your database server, which is not a good location! If your TempDB is on your C: drive, you need to move it someplace better, which is hopefully a fast, dedicated logical drive.

You also need to create some additional TempDB data files, which should all be the same size. If all of the TempDB data files are the same size, SQL Server will use all of them equally. The reason to have multiple data files is to reduce possible allocation contention, as objects are created and destroyed in TempDB.

The old guidance from Microsoft was to create one TempDB data file per physical processor core. Now, in the days of eight- and twelve-core processors, I think this is excessive. A general consensus seems to be forming that you should start out with perhaps four or eight TempDB data files (making them all the same size), and then look for signs of allocation contention before you consider creating any more.
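
As a sketch of how you might set this up, the following resizes the original TempDB data file and adds three more of the same size on a dedicated drive. The logical file names, sizes, and the T: path are examples only; a restart of the instance is not required for ADD FILE, but the files will be recreated at these sizes each time TempDB is rebuilt at startup:

```sql
-- Resize the original TempDB data file (logical name is tempdev by default)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB);

-- Add three more data files of the same size on a dedicated drive
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4096MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 4096MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = N'T:\TempDB\tempdev4.ndf', SIZE = 4096MB);
```

Keeping all of the files the same size is what makes SQL Server's proportional-fill algorithm spread allocations evenly across them.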

Listing 1 shows some queries that let you check your current instance configuration settings, and some examples of how you might change them. These are example settings only; you need to use your own judgment for what values you choose.


-- Get configuration values for instance
SELECT name, value, value_in_use, [description]
FROM sys.configurations
ORDER BY name;

-- Enable advanced options so these settings are visible to sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set max server memory = 59500MB for the server
-- (example value only)
EXEC sp_configure 'max server memory (MB)', 59500;
RECONFIGURE;

-- Some suggested Max Server Memory settings
-- Physical RAM     Max Server Memory Setting (MB)
--    4GB            3200
--    6GB            4800
--    8GB            6600
--    16GB           14000
--    24GB           21600
--    32GB           29000
--    48GB           44500
--    64GB           59500
--    72GB           67000
--    96GB           90000

-- Enable optimize for ad-hoc workloads
-- (new in SQL Server 2008)
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Enable backup compression by default
-- (new in SQL Server 2008 Enterprise Edition;
--  added to SQL Server 2008 R2 Standard Edition)
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Set MAXDOP = 1 for the server
-- (depends on workload and wait stats)
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

Listing 1: SQL Server Instance Configuration Setting Queries


Posted by Jason Brimhall on 28 February 2011

Those are all very good recommendations

Posted by ALZDBA on 2 March 2011

Always nice to have a ref list with very good supporting info.

Thanks again for sharing.

Posted by jay-519545 on 2 March 2011

If you don't use compression and then run 7zip you end up with a much smaller file. If you use sql server compression you have a bigger file that cannot be compressed much further.

Posted by Glenn Berry on 2 March 2011

One of the main benefits of using backup compression is that the backup file is compressed before it is written out to disk. That reduces the amount of I/O required, and in most cases reduces the amount of time required to complete the backup. A restore of a compressed backup also is much faster. You would lose those benefits if you used 7Zip instead.

Posted by jay 31866 on 2 March 2011

Thanks, Glen. However the disk space differences are substantial and I rarely have to do a restore. For me, it is better to have it more compact so I can keep months of backups available just in case. I run the compression in off hours and this does not affect the database. So I guess it depends on what you need.

Another question, I use a solid state drive for TempDB. Is there still benefit in having multiple TempDBs, and if yes, how many?



Posted by Leonard.Williams on 2 March 2011

Hi Glenn Berry,

Do you know whether "Backup Compression" exist in SQL Server 2000. I'm trying to manage a backup and a database in the neighborhood of 350GB and the backups and restores take about 6 - 8 hours on our existing Hardware (disk controllers, speed of disks, etc.) The last restore from backup took 22 hours. Is there anyway of speeding up our backups and restores?

Posted by Glenn Berry on 2 March 2011

There is no native backup compression in SQL Server 2000 or SQL Server 2005. You can buy third-party products, such as LightSpeed, that work with older versions of SQL Server.

You can also try doing a striped backup, where the backup file is written to multiple files (that are on separate logical drives) that can make backups and restores go much faster.

Posted by jay 31866 on 3 March 2011

Glenn, thanks for the great article!

Posted by Robert on 6 April 2011


Regarding the max server memory for SQL 2008 R2, it looks like Standard Edition can utilize 64GB. Do you know if this is 64GB per server or per instance? For example, if I have a server with 96GB of memory and two SQL instances, could I set max server memory to 45GB on each instance or would I need to set it to 32GB on each instance?


Posted by Glenn Berry on 6 April 2011


That 64GB limit for SQL Server 2008 R2 Standard Edition is per instance. You could go 45GB per instance, or you might decide to favor one instance over the other, depending on the relative workloads.

Posted by chandan_jha18 on 6 November 2012


thanks for this great article. A lot of times you mentioned the term 'different logical drive'.

Shouldn't we talk about different physical drive as logical drives may point to the same disk under the carpet and are usually just for better manageability but will cause contention.


