Previously we discussed pre-installation considerations, which can be found here. In this second part of the series, we will talk about what to consider after the server has been set up.
SQL Server has been around for a long time, and many of the default settings that ship with it are outdated (as many DBAs agree). Let's go through some of them and see why you should change them.
- Power option – One of the most forgotten settings of all. Some documents online suggest you might see up to a 40% performance increase simply by changing this option to High Performance. This should be the standard for all your SQL Servers.
- Instant file initialization – This is another must for all SQL Servers. It is simple to set up, and it helps whenever a database data file needs to expand. You might argue that with proper capacity planning your databases would never auto-grow; however, it can still happen from time to time, and this option helps when it does. It also minimizes the time needed for manual expansion of data files. Do note that database log files cannot benefit from it.
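On SQL Server 2016 SP1 and later you can verify from T-SQL whether instant file initialization is in effect for the service account, for example:

```sql
-- Check whether instant file initialization is enabled for the engine
-- service account (column available from SQL Server 2016 SP1 onwards)
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```

On older versions, check instead that the service account holds the "Perform volume maintenance tasks" privilege in the local security policy.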
- Max memory setting – The default that comes with the install is a ridiculously high value (2147483647 MB), which effectively says "take everything the server has". This is one of the most important settings to change after a new installation. As for what value it should be – well, it depends! I tend to follow a great post from Jonathan Kehayias here (https://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/), which has worked pretty well so far. In summary: reserve 1 GB of RAM for the OS, plus 1 GB for each 4 GB of RAM installed between 4 and 16 GB, and then 1 GB for every 8 GB of RAM above 16 GB. For example, on an 8 GB system you would set max memory to 5 GB (1 GB for the OS, 2 GB for the 8 GB installed); on a system with 64 GB of RAM, the max memory setting would be 53 GB (1 GB for the OS, 4 GB for the first 16 GB, 6 GB for the remaining 48 GB). The right value will vary with how the system is used, but this gives you a starting point to work from.
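Applying the 64 GB example above from T-SQL would look like this (the setting is in MB, so 53 GB is 53 x 1024 = 54272):

```sql
-- Reserve 11 GB for the OS (1 + 4 + 6 per the formula above)
-- and give SQL Server the remaining 53 GB
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 54272;
RECONFIGURE;
```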
- Backup compression – If you are on SQL 2008 R2 or above, this is a no-brainer. I cannot see why you would not turn it on, since it is free even on the Standard Edition. It helps reduce not just the backup file size but also backup and restore times. It does use a bit more CPU during backups, but I have never seen a system that cannot spare a few extra CPU cycles for this, and the benefit does outweigh the downside. With that said, if you have a third-party backup system in place you might get a better compression ratio; I have done a comparison with Redgate here.
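Turning compression on as the instance-wide default is a single sp_configure call; individual BACKUP statements can still override it:

```sql
-- Make all backups compressed by default unless a BACKUP statement
-- explicitly says WITH NO_COMPRESSION
EXEC sys.sp_configure 'backup compression default', 1;
RECONFIGURE;
```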
- Dedicated administrator connection (DAC) – This option is disabled by default, but I think it should be enabled on every system. It allows an admin connection (only one at a time) to your SQL Server when everything else might be failing. If your system stops responding to normal operations because it is bogged down by some heavy batch, it may not answer any other connection from that point on. The DAC uses a special reserved scheduler that can open a connection just for you, so you can check what is going on inside SQL Server and possibly get your system back without restarting it.
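Out of the box the DAC only accepts connections from the server itself; enabling remote DAC access takes one setting:

```sql
-- Allow the DAC to be used from a remote machine (off by default,
-- which restricts it to local connections only)
EXEC sys.sp_configure 'remote admin connections', 1;
RECONFIGURE;
```

You then connect with the ADMIN: prefix, e.g. `sqlcmd -S ADMIN:YourServerName -E`, or via the ADMIN: prefix in the SSMS query window connection dialog.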
- Antivirus exclusions for SQL-related files – Microsoft does suggest excluding SQL-related files, including the mdf, ndf and ldf extensions; many others suggest excluding trc, bak and trn as well. However, some worry that a virus writer could start to exploit this and hide their code in bak files and the like. I think excluding mdf, ndf and ldf should be fine, since SQL Server will report an error when it accesses a page that has been altered. But do check with your security department and follow the company standard.
- SQL error log setting – The default is 6, meaning only 6 historical error log files are kept. Since these are only text files and normally do not consume much space, I tend to set it to the maximum (99) and have a job recycle them weekly. That means I can keep my log files for around 1.9 years (99 weeks). Keep in mind that every restart of the SQL service also recycles the log file, but even so, with this setting you should be able to keep at least a year of logs.
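Both pieces can be scripted. The registry write below is the T-SQL equivalent of the SSMS "Configure SQL Server Error Logs" dialog (assuming a default registry layout; verify the path on your instance), and sp_cycle_errorlog is what the weekly Agent job would call:

```sql
-- Keep 99 error log files instead of the default 6
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 99;

-- Run this from a weekly SQL Agent job to start a new log file
EXEC sys.sp_cycle_errorlog;
```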
- Standard maintenance jobs – Good standard maintenance jobs keep your databases healthy. One good place for a complete SQL Server maintenance solution is Ola Hallengren's site (https://ola.hallengren.com/); those scripts will help you perform database backups, integrity checks and index maintenance.
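As an illustration, a nightly full backup of all user databases with that solution's DatabaseBackup procedure looks roughly like this (the directory is a placeholder for your own backup path, and the retention value is just an example):

```sql
-- Full backup of every user database, verify it, and remove backup
-- files older than 168 hours (7 days). N'X:\Backup' is a placeholder.
EXECUTE dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',
    @Directory   = N'X:\Backup',
    @BackupType  = 'FULL',
    @Verify      = 'Y',
    @CleanupTime = 168;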
- Optimize for ad hoc workloads – When a batch executes, SQL Server creates an execution plan, which is stored ready for reuse; however, many plans are only ever used once. Those single-use plans just take up memory in the plan cache. With this option turned on, SQL Server caches only a small plan stub the first time a batch runs and stores the full plan only if the batch executes again, leaving more of that valuable memory for other processes.
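This is another one-line instance setting:

```sql
-- Cache only a plan stub on first execution of an ad hoc batch
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```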
- Tempdb setup – To avoid tempdb contention, it is recommended to have multiple tempdb data files. For a system with fewer than 8 logical cores, match the number of tempdb data files to the number of logical cores; for a system with more than 8 cores, start with 8 tempdb data files, check for contention, and add more files in increments of four (12, 16 and so on) until the contention is gone.
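Adding files is straightforward; the key point is to keep every file the same size (and the same growth setting) so SQL Server's proportional-fill allocation stays even. File names, paths and sizes below are placeholders:

```sql
-- Add extra tempdb data files sized identically to the existing one
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = N'T:\tempdb\tempdev3.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```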
- Monitoring of performance – the items below are worth capturing on every server:
- Blocking – You can set "blocked process threshold (s)" to 10 (seconds) and have a server-side trace or extended events session capture the information; this will report any blocking that lasts over 10 seconds. The right value varies with the workload, but it should always be greater than 5 and set in increments of 5 seconds. Useful for troubleshooting long-running queries or timeout-related issues.
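A minimal setup for this, combining the threshold with an extended events session that writes the blocked process reports to a file (the session and file names are just examples):

```sql
-- Raise a blocked process report for anything blocked over 10 seconds
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'blocked process threshold (s)', 10;
RECONFIGURE;

-- Capture the reports with extended events
CREATE EVENT SESSION [BlockedProcesses] ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.event_file (SET filename = N'BlockedProcesses');

ALTER EVENT SESSION [BlockedProcesses] ON SERVER STATE = START;
```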
- Long-running queries – Use a server-side trace or extended events to capture queries that run longer than a given time. I would start at 25 seconds and work my way down, since the default query timeout for .NET is 30 seconds, so all queries should complete well under that.
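A sketch of the extended events approach, starting at the 25-second threshold (note the duration predicate is in microseconds; the session name is an example):

```sql
-- Capture completed statements that ran for 25 seconds or more
CREATE EVENT SESSION [LongQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration >= 25000000)  -- 25 s, expressed in microseconds
ADD TARGET package0.event_file (SET filename = N'LongQueries');

ALTER EVENT SESSION [LongQueries] ON SERVER STATE = START;
```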
- Logins/logouts – This one is more optional; some companies need to keep track of who logged in and out of a given server.
- Trace flag 3226 – Every successful backup writes an entry to the SQL error log. If you take frequent backups or host a large number of databases on an instance, you can end up with a large error log full of those entries. As long as nothing depends on the successful-backup entries in the error log, you can simply suppress them with this trace flag. Usage details can be found here.
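You can enable it on the running instance immediately, but remember a global DBCC TRACEON does not survive a restart, so also add it as a startup parameter:

```sql
-- Suppress successful-backup entries in the error log, instance-wide.
-- Add -T3226 to the startup parameters to make it permanent.
DBCC TRACEON (3226, -1);
```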
- Trace flag 1222 – Deadlock information is helpful; it is good to know whether your server encounters deadlocks and which processes are involved in each deadlock event. The overhead of this trace flag is minimal, and you should have it enabled.
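Same pattern as above: enable it now, and add the startup parameter so it persists:

```sql
-- Write detailed deadlock information to the error log, instance-wide.
-- Add -T1222 to the startup parameters to make it permanent.
DBCC TRACEON (1222, -1);
```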
- Disable the sa account – As a good security practice, no one should be using the sa account; use named admin accounts with sysadmin rights instead.
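Once every admin has a named login, disabling sa is a single statement (do confirm no application connects as sa first):

```sql
-- Lock out the well-known sa login
ALTER LOGIN sa DISABLE;
```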
- Parallelism – There are two settings related to this; more details can be found here.
- max degree of parallelism – As a general guideline, set max degree of parallelism to the number of cores per CPU; e.g. if you have a 2 x 8-core system, set it to 8.
- cost threshold for parallelism – The default of 5 is way too low. A good approach is to start from 50 and check the performance of your critical queries.
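Both settings together, using the 2 x 8-core example from above as the assumed hardware:

```sql
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 2 sockets x 8 cores: cap parallel plans at one socket's cores
EXEC sys.sp_configure 'max degree of parallelism', 8;
-- Raise the default of 5 to a more realistic starting point
EXEC sys.sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```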
Database-Level Settings:
- DB file location – Database files should be physically separated, i.e. data, log, backup and tempdb should all be on separate drives; this was mentioned in the pre-installation planning. You should ensure all user databases follow that setup.
- Auto file growth – The default setting is 1 MB for data files and 10% for log files, which is far too small even for a tiny database. You need to estimate a reasonable initial size for the database, and set auto-growth to a fixed interval in MB rather than a percentage. Normally I go with either 512 MB or 1 GB for data files and 256 MB for log files. Keep in mind that data files can use instant file initialization, but log files cannot, so setting a large growth increment on the log can cause a performance hit when it grows.
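Changing the growth increments on an existing database looks like this (the database and logical file names below are placeholders; get the real logical names from sys.database_files):

```sql
-- Replace the 1 MB / 10% defaults with fixed-size growth
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 512MB);
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Log',  FILEGROWTH = 256MB);
```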
- Database options – Auto close and auto shrink should be disabled at all times (unless you have a really good reason otherwise). Both are disabled by default, so you just need to make sure they stay that way.
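If you find a database where someone has flipped them on, resetting them is trivial (database name is a placeholder):

```sql
-- Make sure both options are off
ALTER DATABASE [MyDatabase] SET AUTO_CLOSE  OFF;
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
```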
This may seem like a long list, but it is only a starting point for your standard build document, since you will likely have other custom settings that fit your environment or applications. I strongly suggest you review the above and check whether you should apply these to your existing environment, or update your existing standard build document. I would love to hear what other options you set as defaults and the reasoning behind them, so please do share them in the comment section below.