http://www.sqlservercentral.com/blogs/glennberry/2010/12/28/sql-server-workload-types-and-hardware-selection/


SQL Server Workload Types and Hardware Selection

By Glenn Berry, 2010/12/28

SQL Server commonly has to deal with two primary relational workload types: Online Transaction Processing (OLTP) and Decision Support (DSS)/Data Warehouse (DW). OLTP workloads are characterized by numerous short transactions, where the data is much more volatile than in a DSS/DW workload. These differing characteristics will affect your hardware selection, sizing, and configuration choices, as well as your database design, indexing decisions, and maintenance strategy. You should try to determine which type of workload your server will be supporting as early as possible in the system design process, and you should strive to segregate OLTP and DSS/DW workloads onto separate servers and I/O subsystems whenever possible. Of course, not all workloads are purely one or the other; you may have a mixed workload with some characteristics of both.

OLTP Workloads

There is usually much higher write activity in an OLTP workload than in a DSS workload. It is not that unusual to have more writes than reads in a heavy OLTP system. This places more write pressure on your I/O subsystem, particularly on the logical drive where your transaction log is located, since every write must go to the transaction log first. The logical drive(s) that house your data files will also see a lot of activity in an OLTP system, for both reads and writes. A heavy OLTP system will require more frequent transaction log backups than a DW/DSS system, which places further demands on your I/O subsystem. Using Backup Compression can reduce the I/O cost and duration of SQL Server backups at the cost of some additional CPU pressure.
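As a quick sketch of what that looks like in practice, backup compression is requested per backup (it was introduced in SQL Server 2008 Enterprise Edition); the database name and file path below are just placeholders:

-- Take a compressed full backup (database name and path are examples)
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\SQLBackups\AdventureWorks.bak'
WITH COMPRESSION, INIT, NAME = N'AdventureWorks Full Backup';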

If you are using High Availability/Disaster Recovery (HA/DR) features like Log Shipping or Database Mirroring, you will see additional read activity against your transaction log, since the transaction log must be read before the activity can be sent to the Log Shipping destination(s) or to the database mirror. Using Transactional Replication will also cause more read activity against your transaction log on your Publisher database.
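One way to gauge that extra read activity against the log is to look at the cumulative file statistics for the log file of the current database. This query is a sketch along the same lines as the DMV queries later in this post:

-- Read vs. write activity against the transaction log of the current database
SELECT DB_NAME(DB_ID()) AS [Database Name], mf.name AS [Log File Name],
vfs.num_of_reads, vfs.num_of_bytes_read,
vfs.num_of_writes, vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
INNER JOIN sys.master_files AS mf
ON vfs.database_id = mf.database_id
AND vfs.[file_id] = mf.[file_id]
WHERE mf.type_desc = N'LOG';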

Since the data can be so volatile in an OLTP system, you will most likely have to do more frequent index maintenance, which includes both index reorganizations and index rebuilds. Index maintenance places heavy demands on a database server from several different perspectives. There is read I/O pressure as the index is read from the I/O subsystem, which then causes memory pressure as the index data goes into the SQL Server Buffer Pool. There is CPU pressure as the index is reorganized or rebuilt, and then write I/O pressure as the index is written back out to the I/O subsystem.
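A common pattern (a sketch, using the often-cited 5%/30% fragmentation thresholds, with the table and index names as placeholders) is to check fragmentation with a DMV and then reorganize or rebuild as appropriate:

-- Find fragmented indexes of meaningful size in the current database
SELECT OBJECT_NAME(ips.[object_id]) AS [Table Name], i.name AS [Index Name],
ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, N'LIMITED') AS ips
INNER JOIN sys.indexes AS i
ON ips.[object_id] = i.[object_id]
AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 5
AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Reorganize for moderate fragmentation (roughly 5-30%)
ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;

-- Rebuild for heavy fragmentation (ONLINE = ON requires Enterprise Edition)
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (ONLINE = ON);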

Using SQL Server Data Compression in SQL Server 2008 and above can reduce the I/O and memory pressure at the cost of additional CPU pressure. Luckily, high-performance CPUs are much more affordable than additional I/O capacity. That is one reason that I argue so heavily for always getting the best CPU available for a given server model. Keep in mind that Data Compression is only available in SQL Server 2008 (and above) Enterprise Edition.
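You can estimate the benefit before committing to it, and then rebuild with compression enabled. This is a sketch, with the schema and table name as placeholders:

-- Estimate PAGE compression savings for a table (SQL Server 2008+ Enterprise Edition)
EXEC sp_estimate_data_compression_savings
@schema_name = N'dbo', @object_name = N'ExampleTable',
@index_id = NULL, @partition_number = NULL,
@data_compression = N'PAGE';

-- Rebuild the table (and its clustered index) with PAGE compression
ALTER TABLE dbo.ExampleTable REBUILD WITH (DATA_COMPRESSION = PAGE);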

Most OLTP systems generate far more input/output operations per second (IOPS) than an equivalently sized DSS system. This means that you will want to have more total drive spindles available so that your total IOPS capacity will be higher. OLTP systems often use a Storage Area Network (SAN) for their I/O subsystem because it is relatively easy (but expensive) to configure a SAN to support dozens to hundreds of disk spindles for a single database server. This can give you a very high IOPS capacity with high numbers of traditional magnetic disk drives. The general rule is that you will get roughly 100 IOPS from a single 10,000 rpm magnetic drive and about 150 IOPS from a single 15,000 rpm drive. For example, a target of 6,000 IOPS would take roughly forty 15,000 rpm spindles, before accounting for any RAID write penalty.

If you don’t have the budget or in-house expertise for a large SAN, it is still possible to get very good IOPS performance with other storage techniques. For example, you can use multiple direct attached storage (DAS) enclosures with multiple RAID controllers, along with multiple SQL Server filegroups and data files, which lets you spread the workload among multiple logical drives that each represent a dedicated DAS enclosure. You can also use solid state drives (SSDs) or Fusion-io cards to get very high IOPS performance without using a SAN.
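One way to spread the workload in that scenario is to create a filegroup with one data file per logical drive, letting SQL Server allocate proportionally across them. This is a sketch; the database name, drive letters, and sizes are all placeholders:

-- Spread a filegroup across logical drives that map to separate DAS enclosures
ALTER DATABASE ExampleDB ADD FILEGROUP Data_FG1;

ALTER DATABASE ExampleDB
ADD FILE (NAME = N'ExampleDB_Data1', FILENAME = N'E:\SQLData\ExampleDB_Data1.ndf',
SIZE = 50GB, FILEGROWTH = 5GB) TO FILEGROUP Data_FG1;

ALTER DATABASE ExampleDB
ADD FILE (NAME = N'ExampleDB_Data2', FILENAME = N'F:\SQLData\ExampleDB_Data2.ndf',
SIZE = 50GB, FILEGROWTH = 5GB) TO FILEGROUP Data_FG1;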

DSS/DW Workloads

A DSS or DW system usually has longer-running queries than a similarly sized OLTP system. The data in a DSS system is usually more static, with much higher read activity than write activity. For a DSS type of workload, I/O throughput is usually more important than IOPS performance. This is especially important when a DSS/DW system is being loaded with data, and when certain types of complex, long-running queries are executed.

Because of this, many new DSS/DW systems use DAS instead of a SAN for their I/O subsystem. If you have enough expansion slots (for RAID controllers), you can build an I/O subsystem using multiple external DAS arrays to achieve very high throughput performance at a relatively low cost. High throughput performance is extremely important for data load performance.

Since the data in a DSS/DW system is relatively static, it is a much easier decision to use SQL Server 2008 Data Compression to reduce your overall I/O throughput and memory requirements (at the cost of some additional CPU pressure). For the same reason, you will probably not have to do nearly as much index maintenance as in an OLTP system (even though you will probably have more indexes on each table).

Current AMD processors (such as the 12-core Opteron 6174 “Magny Cours”) seem to perform quite well for DSS/DW workloads. If you use a modern Intel processor (such as a Xeon X5680 or Xeon X7560), you should strongly consider disabling hyper-threading, since long-running queries do not perform as well on hyper-threaded cores.

Characterizing Workloads

One DMV query you can run on an existing system to help characterize the I/O workload for the current database is shown below. This query will show the read/write percentage by file for the current database, both in the number of reads and writes and in the number of bytes read and written.

-- I/O Statistics by file for the current database
SELECT DB_NAME(DB_ID()) AS [Database Name],[file_id],
num_of_reads, num_of_writes,
num_of_bytes_read, num_of_bytes_written,
CAST(100. * num_of_reads/(num_of_reads + num_of_writes)
AS DECIMAL(10,1)) AS [# Read Pct],
CAST(100. * num_of_writes/(num_of_reads + num_of_writes)
AS DECIMAL(10,1)) AS [# Write Pct],
CAST(100. * num_of_bytes_read/(num_of_bytes_read + num_of_bytes_written)
AS DECIMAL(10,1)) AS [Read Bytes Pct],
CAST(100. * num_of_bytes_written/(num_of_bytes_read + num_of_bytes_written)
AS DECIMAL(10,1)) AS [Written Bytes Pct]
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);

Two more DMV queries you can use to help characterize your workload on an existing system from a read/write perspective (for cached stored procedures) are shown below. These queries can give you a better idea of the total read and write activity and the execution count, along with the total and average elapsed time, for those stored procedures.

-- Top Cached SPs By Total Logical Writes (SQL 2008)
SELECT p.name AS [SP Name], qs.total_logical_writes AS [TotalLogicalWrites],
qs.total_logical_reads AS [TotalLogicalReads], qs.execution_count,
qs.total_elapsed_time,
qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
AND qs.total_logical_writes > 0
ORDER BY qs.total_logical_writes DESC;

-- Top Cached SPs By Total Logical Reads (SQL 2008)
SELECT p.name AS [SP Name], qs.total_logical_reads AS [TotalLogicalReads],
qs.total_logical_writes AS [TotalLogicalWrites], qs.execution_count,
qs.total_elapsed_time,
qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
AND qs.total_logical_reads > 0
ORDER BY qs.total_logical_reads DESC;

 

I hope that you have gotten some useful information from this post, and I welcome any feedback!



Copyright © 2002-2014 Simple Talk Publishing. All Rights Reserved.