SQLServerCentral Article

Performance Tuning Using Free Tools, Part 2


Does the cost of running SQL Server often surprise you? The cost may come from buying on-premises hardware or paying Azure SQL bills.

You can keep the cost under control by improving the efficiency of your databases, and SQLFacts, a FREE suite of tools for SQL Server database engineers, can help you do just that. Please refer to an earlier article for a general description of the SQLFacts toolkit.

SQLFacts provides many tools for performance tuning. The tools allow you to quickly find ways to increase efficiency in your databases. The previous article focused on the index-related performance tuning tools in SQLFacts.

This article will focus on the query-related performance tuning tools in SQLFacts. Be sure to see the README file in the SQLFacts toolkit download for more information about all the tools.

NOTE: Most of the tools described in this article do not work with Azure SQL Database. The Azure SQL Database service does not provide access to the SQL Server instance hosting the database, and most of these tools depend on instance-level information for their analysis. The SQLFacts tool and the QueryHistory tool do work with Azure SQL Database; the others do not. This note does not apply to SQL Server on Azure Virtual Machines or Azure SQL Managed Instance.

QueryHistory

The QueryHistory tool returns information for SQL statements (or stored procedures) executed recently. The performance statistics it uses are retained by SQL Server until the instance is restarted, or until the execution plans are evicted from the cache. The historical information can be very beneficial for finding ways to improve performance. It allows you to quickly focus your attention on the most expensive SQL statements (or stored procedures).

When evaluating how expensive a routine is, it's not just about how long the code runs. The expense is also about how often the code is executed. Accordingly, the Seconds_SUM column (which represents the total time for all executions combined) is used for filtering and ordering. The number of reads/writes is also an important factor in evaluating performance.

There's a variable near the start of the SQL code to specify a minimum total run time (or CPU time), in seconds, for including a query or object in the list. It filters on the Seconds_SUM column. It's better to start with a very large value for the variable and reduce it until you get rows. If you start with a small value then the tool may run for a while and return an unwieldy number of rows.

There's a variable near the start of the SQL code to quickly switch between run time and CPU time for the Seconds_XXX columns.

  • Result set 1 is for individual SQL statements.
  • Result set 2 is for stored procedures.
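The SQLFacts query itself is more elaborate, but a minimal sketch of the underlying idea, using sys.dm_exec_query_stats with a hypothetical @MinSeconds variable standing in for the tool's minimum-total-time setting, looks roughly like this:

  DECLARE @MinSeconds int = 60;   -- hypothetical stand-in for the tool's minimum total run time

  SELECT TOP (50)
         t.text                             AS batch_text,      -- the tool isolates individual statements
         qs.execution_count,
         qs.total_elapsed_time / 1000000.0  AS Seconds_SUM,     -- total run time across all executions
         qs.total_worker_time  / 1000000.0  AS CPU_Seconds_SUM,
         qs.total_logical_reads,
         qs.total_logical_writes
    FROM sys.dm_exec_query_stats AS qs
   CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
   WHERE qs.total_elapsed_time / 1000000.0 >= @MinSeconds
   ORDER BY qs.total_elapsed_time DESC;

The stored procedure result set comes from the same family of DMVs (sys.dm_exec_procedure_stats) in much the same way.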

Sessions

The Sessions tool returns information for all user connections (sessions) to the SQL Server instance. The SQL code can be quickly adjusted to serve the current need (see the WHERE clause on line 124). For example, you can include idle sessions or you can exclude everything other than lead blockers.

The information includes exactly what SQL code is running in the moment (SQL_code), who's running it (login_name), where it came from (host_name), how it was executed (program_name), when it started (batch_time), blocker (blocking_id), how much RAM is being used (GBs_RAM), transaction state (trans_state), and more.
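Not the SQLFacts code itself, but a minimal sketch of where this kind of session information comes from (the column aliases mirror the names above; the RAM and transaction state details come from additional DMVs not shown here):

  SELECT s.session_id,
         s.login_name,
         s.host_name,
         s.program_name,
         r.start_time            AS batch_time,
         r.blocking_session_id   AS blocking_id,
         r.status,
         t.text                  AS SQL_code
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
   OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
   WHERE s.is_user_process = 1;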

There's a disabled result set which summarizes the waits observed in the moment. You can simply enable the result set if you are studying which waits are/were the most common.

There's a disabled result set which summarizes the waits by database.

  • A high amount of waiting of type PAGELATCH_SH for tempdb may indicate a need to increase the number of tempdb data files.
  • A high amount of waiting of type PAGELATCH_UP for tempdb may indicate a need to increase the number of tempdb data files.
  • A high amount of waiting of type PAGELATCH_EX for tempdb may indicate a need to decrease the usage of temporary tables or use the memory-optimized tempdb metadata feature of SQL Server 2019.
  • A high amount of waiting of type PAGELATCH_EX for user databases may indicate a need to reconsider the clustered index keys or use the OPTIMIZE_FOR_SEQUENTIAL_KEY feature of SQL Server 2019.
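A minimal sketch of that kind of point-in-time wait summary, grouping whatever is waiting right now by wait type (the disabled result sets in the tool are more detailed):

  SELECT wt.wait_type,
         COUNT(*)                 AS waiting_tasks,
         SUM(wt.wait_duration_ms) AS wait_ms
    FROM sys.dm_os_waiting_tasks AS wt
   WHERE wt.session_id > 50               -- rough filter for user sessions
   GROUP BY wt.wait_type
   ORDER BY wait_ms DESC;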

There's an additional stand-alone query which returns information about user sessions/requests that are occupying tempdb space.

  • The GBs_session_user column is for space used directly by the SQL code for previously finished tasks.
  • The GBs_session_auto column is for space used indirectly by the system for previously finished tasks.
  • The GBs_request_user column is for space used directly by the SQL code for currently running tasks.
  • The GBs_request_auto column is for space used indirectly by the system for currently running tasks.
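A minimal sketch of how the session-level columns above can be derived from sys.dm_db_session_space_usage (allocations are counted in 8 KB pages; the request-level columns come from sys.dm_db_task_space_usage in the same way):

  SELECT su.session_id,
         (su.user_objects_alloc_page_count     - su.user_objects_dealloc_page_count)     * 8.0 / 1048576 AS GBs_session_user,
         (su.internal_objects_alloc_page_count - su.internal_objects_dealloc_page_count) * 8.0 / 1048576 AS GBs_session_auto
    FROM sys.dm_db_session_space_usage AS su
   WHERE su.user_objects_alloc_page_count + su.internal_objects_alloc_page_count > 0
   ORDER BY GBs_session_user DESC;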

There's an additional stand-alone query which returns information about user sessions/requests that are occupying transaction log space.

  • The GBs_tran_log_user column is for transaction log space used directly by the SQL code.
  • The GBs_tran_log_auto column is for transaction log space used indirectly by the system.
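A minimal sketch of the DMVs behind those two columns, joining session transactions to their per-database log usage (not the SQLFacts query itself):

  SELECT st.session_id,
         SUM(dt.database_transaction_log_bytes_used)        / 1073741824.0 AS GBs_tran_log_user,
         SUM(dt.database_transaction_log_bytes_used_system) / 1073741824.0 AS GBs_tran_log_auto
    FROM sys.dm_tran_session_transactions  AS st
    JOIN sys.dm_tran_database_transactions AS dt ON dt.transaction_id = st.transaction_id
   GROUP BY st.session_id
   ORDER BY GBs_tran_log_user DESC;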

Resources

The Resources tool returns information for processes that are consuming a large amount of memory and/or a large amount of tempdb space. It watches for such processes over a specified period of time. It's essentially an on-demand monitoring tool that looks for those conditions. You might run this tool during a time when those resources appear to be under pressure. It will show you which processes are causing the issue(s).

There are variables near the start of the SQL code to specify a date/time to begin collecting data, how many times to check for offending processes, and how long to wait between checks. The default values cause checking to begin immediately, doing 120 checks, with 30 seconds between checks. In other words, it monitors for offending processes for one hour.

There's a variable near the start of the SQL code to specify a threshold percentage of total amount (size) for memory. Any processes meeting or exceeding the threshold will be included in the results.

There's a variable near the start of the SQL code to specify a threshold percentage of total amount (size) for tempdb. Any processes meeting or exceeding the threshold will be included in the results.

  • Result set 1 is for excessive memory usage.
  • Result set 2 is for excessive tempdb usage.
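The real tool collects far more detail and works in percentages of the totals, but a minimal sketch of that kind of polling loop, here sampling memory grants above an absolute size with hypothetical variables, looks roughly like this:

  DECLARE @checks int = 120,
          @delay  char(8) = '00:00:30',   -- HH:MM:SS between checks
          @i      int = 0;

  CREATE TABLE #offenders (check_time datetime2, session_id int, granted_memory_kb bigint);

  WHILE @i < @checks
  BEGIN
      INSERT INTO #offenders (check_time, session_id, granted_memory_kb)
      SELECT SYSDATETIME(), mg.session_id, mg.granted_memory_kb
        FROM sys.dm_exec_query_memory_grants AS mg
       WHERE mg.granted_memory_kb >= 1048576;   -- grants of 1 GB or more (illustrative threshold)

      SET @i += 1;
      WAITFOR DELAY @delay;
  END;

Querying #offenders afterward shows which sessions kept appearing during the monitoring window.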

Blocking

The Blocking tool returns information for processes that have been blocking other processes for an excessive amount of time. It watches for such processes over a specified period of time. It's essentially an on-demand monitoring tool that looks for blocking. You might run this tool during a time when the Sessions tool repeatedly shows a lot of blocking happening. It will show you which processes are causing the issue(s).

There are variables near the start of the SQL code to specify a date/time to begin collecting data, how many times to check for offending processes, and how long to wait between checks. The default values cause checking to begin immediately, doing 120 checks, with 30 seconds between checks. In other words, it monitors for offending processes for one hour.

There's a variable near the start of the SQL code to specify a threshold number of seconds being blocked. Any processes blocked for this length of time, or longer, will be included in the analysis.

The result set is a list of blocking processes with a summary of the amount of blocking each one caused during the specified period of time.
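A minimal sketch of a single blocking check, with a hypothetical @BlockedSeconds threshold standing in for the tool's variable (the tool repeats the check on its schedule and summarizes per blocker):

  DECLARE @BlockedSeconds int = 30;

  SELECT r.blocking_session_id   AS blocker,
         r.session_id            AS blocked,
         r.wait_time / 1000.0    AS seconds_blocked,
         r.wait_type,
         t.text                  AS blocked_SQL
    FROM sys.dm_exec_requests AS r
   OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
   WHERE r.blocking_session_id <> 0
     AND r.wait_time >= @BlockedSeconds * 1000;   -- wait_time is in milliseconds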

Locksmith

The Locksmith tool returns information for locks, granted and/or waiting, in the SQL Server instance. The SQL code can be quickly adjusted to serve the current need (see the WHERE clause on line 168). For example, you can include idle sessions or you can exclude everything other than lead blockers.

The information includes who's running the SQL code (login_name), where it came from (host_name), how it was executed (program_name), when it started (batch_time), and what database resources are involved in the locks.

The information is very beneficial for determining the reason(s) for blocking caused by lock contention.
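Not the SQLFacts code, but a minimal sketch of pulling granted and waiting locks from the DMVs (the tool resolves the locked resources to friendlier names):

  SELECT s.session_id,
         s.login_name,
         s.host_name,
         s.program_name,
         l.resource_type,
         DB_NAME(l.resource_database_id) AS database_name,
         l.resource_description,
         l.request_mode,
         l.request_status                -- GRANTED / WAIT / CONVERT
    FROM sys.dm_tran_locks AS l
    JOIN sys.dm_exec_sessions AS s ON s.session_id = l.request_session_id
   WHERE s.is_user_process = 1
   ORDER BY s.session_id, l.resource_type;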

Deadlocks

The Deadlocks tool returns information about recent deadlocks. The default behavior is to examine the "system_health" extended event session, but it supports a custom extended event session as well. The SQL file includes a disabled block of code for creating a custom extended event session to capture the "xml_deadlock_report" event.

NOTE: This tool requires SQL Server 2012 or newer.

There's a variable near the start of the SQL code to specify a maximum number of minutes to look back for deadlocks that have occurred. The default is one day. The amount of history available is dependent on several variables within your system.

The tool returns three result sets. In each one the rows occur in pairs, two rows for each deadlock, one row for each process involved in the deadlock.

  • Result set 1 is for general information and identifying which process was the deadlock victim.
  • Result set 2 is for the SQL code (often stored procedures) involved in the deadlock.
  • Result set 3 is for the database resources involved in the deadlock.

There's a disabled SELECT statement which combines all three result sets. It may be a bit easier to use if there are many deadlocks to research.
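Not the SQLFacts query, but a minimal sketch of the general technique: reading xml_deadlock_report events out of the system_health ring buffer target:

  SELECT dl.event_node.value('@timestamp', 'datetime2') AS event_time_utc,
         dl.event_node.query('data/value/deadlock')     AS deadlock_graph
    FROM (SELECT CAST(t.target_data AS xml) AS target_xml
            FROM sys.dm_xe_sessions AS s
            JOIN sys.dm_xe_session_targets AS t
              ON t.event_session_address = s.address
           WHERE s.name = 'system_health'
             AND t.target_name = 'ring_buffer') AS rb
   CROSS APPLY rb.target_xml.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS dl(event_node);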

SQLFacts

The SQLFacts tool is for conducting research on a SQL Server database. The tool returns a large number of result sets, with four of them being very appropriate in this context:

  • Fact 44 Questionable Routines
  • Fact 45 Questionable Data Types
  • Fact 47 Questionable Parameters
  • Fact 48 Questionable References

Facts 44, 45, and 47 are more about the quality and accuracy of SQL code than the performance, but the issues they point out can also affect performance.

Fact 44 (Questionable Routines) is a list of routines whose SQL code uses undesirable features or syntax.

Fact 45 (Questionable Data Types) is a list of table/routine columns with undesirable data types.

Fact 47 (Questionable Parameters) is a list of parameters with a data type different from a column of the same name.

Fact 48 (Questionable References) is a list of references between routines that may result in lower performance. For example, it can be a problem to have views/functions referencing other views/functions referencing other views/functions.
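For example, a chain like the following (purely illustrative object names) is the pattern Fact 48 is looking for; each extra layer makes the eventual query harder for the optimizer to simplify:

  CREATE VIEW dbo.OrdersBase AS
  SELECT OrderID, CustomerID, OrderDate, Amount FROM dbo.Orders;
  GO
  CREATE VIEW dbo.OrdersThisYear AS
  SELECT * FROM dbo.OrdersBase WHERE OrderDate >= DATEFROMPARTS(YEAR(GETDATE()), 1, 1);
  GO
  CREATE VIEW dbo.BigOrdersThisYear AS
  SELECT * FROM dbo.OrdersThisYear WHERE Amount > 10000;
  GO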

Summary

Many things can affect how efficiently your SQL Server databases are operating, which then affects your cost. You can improve database efficiency by doing index tuning and query tuning. SQLFacts is a free toolkit with many tools for finding index issues and query issues. Check it out, and enjoy the many features!
