Suffering SQL Server Performance Issues? A story of Olympic sprints and handy SQL Server hints
Usain Bolt, the world’s fastest runner, now boasts his third successive 100m Olympic title.
What makes him so successful? Clearly he’s talented and determined.
But there are some key factors that any athlete would follow:
- Discuss a plan with their coach to develop skills and strategy, then put it into action through repeated training and practice.
- Eat and sleep well to recover and keep energy levels high. The quality and quantity of sleep play a major role in performance.
- Make tweaks as recommended by the coach, who is responsible for carrying out performance analysis.
- Keep a positive attitude. Simply visualising the run or event helps build a winning mentality and achieve success.
SQL Server is no different:
- It computes a query plan and stores it in the plan cache, allowing subsequent executions to reuse the same plan again and again.
- Maintenance tasks such as rebuilding fragmented indexes and updating stale statistics keep SQL Server performing well.
- DBAs tweak SQL Server configurations and indexes based on their monitoring and performance analysis to bring out the best in your SQL Server environment.
- For three years in a row, Gartner has positioned Microsoft SQL Server as the leader in its Magic Quadrant for operational database management systems. With SQL Server 2016 and its new features, Microsoft is determined not to lose this position.
We have a winner in our hands!
But only a fraction of a second separates the winner from the rest. To get every iota of performance out of SQL Server, the coach, the DBA (which is you!), can follow these tips to make leaps and bounds towards awesomity!
1) Review your index usage: Make sure you review missing and redundant indexes, as well as overlapping or duplicate indexes, which cause unnecessary resource usage and hold back efficiency.
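As a starting point, the index-usage DMVs can surface nonclustered indexes that are maintained on every write but never read. A minimal sketch (these statistics reset at instance restart, so check them against a representative workload window):

```sql
-- Find nonclustered indexes in the current database that have received
-- writes but no seeks, scans or lookups since the instance started.
SELECT  OBJECT_NAME(i.object_id)                        AS table_name,
        i.name                                          AS index_name,
        us.user_seeks + us.user_scans + us.user_lookups AS reads,
        us.user_updates                                 AS writes
FROM    sys.indexes AS i
JOIN    sys.dm_db_index_usage_stats AS us
        ON  us.object_id = i.object_id
        AND us.index_id  = i.index_id
WHERE   us.database_id = DB_ID()
        AND i.type_desc = 'NONCLUSTERED'
        AND us.user_seeks + us.user_scans + us.user_lookups = 0
ORDER BY us.user_updates DESC;
```

Indexes that appear here are candidates for removal, but verify against a full business cycle first, since a month-end report may be the only thing that reads them.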
2) Make sure that your IO doesn’t s*ck: Whilst PAGEIOLATCH waits are usually associated with disk IO throughput issues, they can also be caused by excessive T-SQL query parallelism. This is typically seen on systems with many logical cores where the max degree of parallelism is left at the default value of 0 (unlimited across all available cores): the disks (storage) can be overwhelmed by too much concurrency. Use Windows Perfmon (the PhysicalDisk Avg. Disk sec/Read and Avg. Disk sec/Write counters) to determine your read/write latency.
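Alongside Perfmon, SQL Server's own virtual file stats give a quick per-file latency view from T-SQL. A sketch, noting that the numbers are cumulative since instance startup rather than a point-in-time sample:

```sql
-- Average IO latency per database file since the instance started,
-- derived from cumulative stall and operation counts.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```

As a rough yardstick, sustained data-file read latency above 20ms or so is worth investigating.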
3) Check tempdb is configured properly – the default may not be the best option: tempdb is not just used for the temporary tables you create; the SQL Server engine also uses it for sorting and joining data, read committed snapshot isolation, availability groups, index rebuilds that sort in tempdb, and more. It’s important that tempdb is configured properly. Prior to SQL Server 2016, you start with just one data file by default. A good rule of thumb is:
- if you have fewer than 8 logical processors (with hyperthreading enabled), use the same number of equally sized data files as logical processors;
- if you have 8 or more logical processors, start with 8.
Baseline first, then verify the effect of the change by monitoring the Transactions/sec counter against the tempdb database in Windows Perfmon. Trace flags 1117 and 1118 can also help you get further performance benefits out of tempdb (in SQL Server 2016 their behaviour is the default for tempdb).
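Applying the rule of thumb above on, say, a four-core server might look like the following; the file names, paths and sizes are illustrative assumptions to adapt to your environment:

```sql
-- Resize the existing tempdb data file and add three more equally sized
-- files, so the four files match the four logical processors.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
-- repeat ADD FILE for tempdev3 and tempdev4 with identical sizes
```

Keeping the files identically sized matters: SQL Server's proportional fill algorithm only spreads allocations evenly when the files have the same amount of free space.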
4) Avoid zero-initialisation of data files: The default autogrowth setting of 1MB (data files) or 10% (log files) is another one to change. A recommended approach to setting autogrowth would be as follows:
Data files for user databases
• Primary should be 20MB, auto-growth 10%, size unlimited (it won’t be used for user data)
• Create a secondary filegroup and make it the default filegroup (size unlimited) for the database with either:
– 1 * 20MB file (increment by 10MB) for a very small database <= 1GB
– 1 * 100MB file (increment by 50MB) for a small database <= 10GB
– 2 * 500MB files (increment by 250MB) for a medium database <= 100GB
– 4 * 1GB files (increment by 500MB) for a large database <= 500GB
– 6 * 2GB files (increment by 500MB) for an extra large database <= 1000GB
– 8 * 3GB files (increment by 500MB) for a super large database <= 1500GB
Transaction logs for user databases
• Create transaction log file with either:
– 1 * 50MB with 50MB (4 VLFs) increment for a small database OR a database in SIMPLE recovery mode, size unlimited
– 1 * 500MB with 500MB (8 VLFs) increment for everything else, size unlimited
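Putting the sizing guidance together, a medium (<= 100GB) user database might be created like this; the database name, logical file names and drive paths are assumptions for illustration:

```sql
-- Small primary file kept free of user data; two equally sized data files
-- in a secondary filegroup; pre-sized log with a fixed-size increment.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_primary, FILENAME = 'D:\data\Sales_primary.mdf',
     SIZE = 20MB, FILEGROWTH = 10%, MAXSIZE = UNLIMITED),
FILEGROUP UserData
    (NAME = Sales_data1, FILENAME = 'D:\data\Sales_data1.ndf',
     SIZE = 500MB, FILEGROWTH = 250MB, MAXSIZE = UNLIMITED),
    (NAME = Sales_data2, FILENAME = 'D:\data\Sales_data2.ndf',
     SIZE = 500MB, FILEGROWTH = 250MB, MAXSIZE = UNLIMITED)
LOG ON
    (NAME = Sales_log, FILENAME = 'L:\log\Sales_log.ldf',
     SIZE = 500MB, FILEGROWTH = 500MB, MAXSIZE = UNLIMITED);

-- Route user objects to the secondary filegroup by default.
ALTER DATABASE Sales MODIFY FILEGROUP UserData DEFAULT;
```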
When a data file grows, SQL Server fills the newly allocated part of the file with zeroes, which is quite unnecessary. To skip this part of the operation, enable instant file initialisation, i.e. grant the SQL Server service account the “Perform Volume Maintenance Tasks” privilege. The newly created space can then be used immediately. This applies only to data files, not to the transaction log file, which always has to be zeroed out.
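On SQL Server 2016 SP1 and later you can confirm the privilege took effect straight from T-SQL; on older builds, enabling trace flags 3004 and 3605 writes file-zeroing activity to the error log instead, so the absence of zeroing messages during a file growth indicates instant file initialisation is working:

```sql
-- instant_file_initialization_enabled returns 'Y' when the service account
-- holds the "Perform Volume Maintenance Tasks" privilege (2016 SP1+).
SELECT servicename, instant_file_initialization_enabled
FROM   sys.dm_server_services;
```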
5) Schedule maintenance routines for optimal performance: Defragmenting indexes and updating statistics on a regular basis is vital to enabling SQL Server to choose accurate execution plans.
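A hedged sketch of such a routine against a hypothetical dbo.Orders table; in practice most DBAs use a scripted solution (such as Ola Hallengren's maintenance scripts) that checks sys.dm_db_index_physical_stats and chooses between REORGANIZE and REBUILD based on the measured fragmentation:

```sql
-- Light online defragmentation, suitable for low-to-moderate fragmentation.
ALTER INDEX ALL ON dbo.Orders REORGANIZE;

-- Heavier option for badly fragmented indexes (commonly > 30% fragmentation);
-- a rebuild also refreshes the statistics on the rebuilt indexes.
-- ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Refresh statistics so the optimiser sees current data distributions.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

Schedule this during a quiet window: rebuilds are IO- and log-intensive, and offline rebuilds lock the table for the duration.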