SQLServerCentral Article

SQL Server Performance Benchmarking with Patterns


Disk and CPU related performance counters and DMV information can be difficult to interpret and compare when activity levels differ massively. This Cellular Automaton (CA) application places a precise workload that can be repeated, adjusted and measured. It includes several 'Pattern Generators': one that uses geometry and places a CPU load, one that uses set-based queries that are IO bound, and a 'Hekaton' version that uses in-memory tables and natively compiled stored procedures. All three are implementations of 'Conway's Game of Life' and provide a precise way to benchmark and compare the relative performance of different SQL Servers. Data is inserted at the start of each benchmark and then automated using a fixed set of rules, so buffer pool pressure doesn't affect the benchmarks too heavily and, typically, the same query plans are used regardless of SQL Server version or platform.

Benchmarking Process Overview

The following link explains the rules implemented by this application: http://en.wikipedia.org/wiki/Conway's_Game_of_Life . It is a Cellular Automaton application: a virtual grid of x, y coordinates is pre-populated with different patterns, then the rules below are applied and repeated. A cell represents a single coordinate on the grid.

  • Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  • Any live cell with two or three live neighbours lives on to the next generation.
  • Any live cell with more than three live neighbours dies, as if by overcrowding.
  • Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
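
To make the rules concrete, here is a minimal set-based sketch of a single automation step. It assumes a hypothetical Cells table of live x, y coordinates, not the application's actual schema:

-- Hypothetical sketch: one Game of Life generation, set based.
-- Assumes Cells(x INT, y INT) holds the coordinates of live cells.
WITH Offsets AS (
    SELECT dx, dy
    FROM (VALUES (-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)) AS o(dx, dy)
),
NeighbourCounts AS (
    -- Every live cell votes for each of its eight neighbouring coordinates
    SELECT c.x + o.dx AS x, c.y + o.dy AS y, COUNT(*) AS LiveNeighbours
    FROM Cells AS c CROSS JOIN Offsets AS o
    GROUP BY c.x + o.dx, c.y + o.dy
)
SELECT n.x, n.y   -- the next generation's live cells
FROM NeighbourCounts AS n
LEFT JOIN Cells AS c ON c.x = n.x AND c.y = n.y
WHERE n.LiveNeighbours = 3                         -- birth, or survival with three neighbours
   OR (n.LiveNeighbours = 2 AND c.x IS NOT NULL);  -- survival with two neighbours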

The benchmarking procedure creates one of three initial patterns, which are then 'automated' according to the rules above. The table below shows all available parameters:

Parameter Name | Type | Description | Default
@Batches | INT | Number of calls to the benchmarking procedures | 0
@CPU_Benchmark | BIT | Execute the CPU bound benchmark procedure | 0
@IO_Benchmark | BIT | Execute the SQL Server IO bound benchmark procedure | 0
@Hek_Benchmark | BIT | Execute the Hekaton in-memory, compiled benchmark procedure | 0
@NewPatternsInBatch | INT | Number of new patterns (automations) generated by each call | 0
@DisplayPatterns | BIT | Show final result pattern | 0
@Initialize | BIT | Clear previous results for session before execution | 1
@StressLevel | TINYINT | 1 = small oscillating initial pattern, 2 = intermediate oscillating initial pattern, 3 = large and expanding initial pattern | 1
@Description1 | VARCHAR | For later analysis |
@Description2 | VARCHAR | For later analysis |
@Description3 | VARCHAR | For later analysis |

Setup and Execution

Run the attached setup scripts in SQLCMD mode to create the CA Benchmark database; set the 'Database Name', 'Data File' and 'Log File' variables before execution, then call the CA_Benchmark procedure.
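
In SQLCMD mode the variables are set along these lines (the variable names and paths below are illustrative; the attached scripts define the actual names):

:setvar DatabaseName "CA_Benchmark"
:setvar DataFile "D:\SQLData\CA_Benchmark.mdf"
:setvar LogFile "L:\SQLLogs\CA_Benchmark.ldf"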

CPU Benchmarking Simulation - Geometry 

With the 'CPU benchmark' parameter, the coordinates are converted to geometric polygons, cell sizes are increased by 20% and the geometric 'Intersect' method is used to count adjacent cells. This cell expansion is illustrated by the 'Toad' pattern (see wiki) shown below in its base state/starting pattern.
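
A hedged sketch of the idea (not the article's actual procedure): each coordinate is buffered into a polygon slightly larger than the cell spacing, so the polygons of adjacent cells overlap and STIntersects() detects the neighbour.

-- Sketch only: two adjacent cells, one unit apart, buffered so their polygons overlap
DECLARE @cellA geometry = geometry::Point(1, 1, 0).STBuffer(0.6);
DECLARE @cellB geometry = geometry::Point(2, 1, 0).STBuffer(0.6);
SELECT @cellA.STIntersects(@cellB) AS AreNeighbours;  -- returns 1, the buffers overlap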

When the automation rules are applied to the pattern above, the pattern below is generated on the first iteration; it returns to its base state/starting pattern on the next iteration. This simple Stress Level 1 pattern oscillates between two states.

To simulate this example, execute the T-SQL below against the CA database.

EXECUTE dbo.CA_Benchmark @CPU_Benchmark = 1, @DisplayPatterns = 1, @StressLevel = 1, @Batches = 1, @NewPatternsInBatch = 1;
EXECUTE dbo.CA_Benchmark @CPU_Benchmark = 1, @DisplayPatterns = 1, @StressLevel = 1, @Batches = 1, @NewPatternsInBatch = 2;

OLAP Bias Workload Simulation

The screenshot below shows the initial patterns created at Stress Level 3.

The screenshot below shows the patterns after 60 automations; two new 'Gosper Glider Gun' patterns (see wiki) are created on every 30th automation, which increases the CPU, IO and memory cost. The more new patterns are generated, the more the automation benchmark resembles an OLAP workload.

To simulate this example, execute the T-SQL below against the CA database.

EXECUTE dbo.CA_Benchmark @IO_Benchmark = 1, @DisplayPatterns = 1, @StressLevel = 3, @Batches = 1, @NewPatternsInBatch = 1;
EXECUTE dbo.CA_Benchmark @IO_Benchmark = 1, @DisplayPatterns = 1, @StressLevel = 3, @Batches = 1, @NewPatternsInBatch = 60;

Use Case - 'Max Degree of Parallelism' Settings

The CA benchmarking process was invoked using OSTRESS with the parameters in the table below; MAXDOP was set to 0 for the first two benchmarks, then to 1 for the next two.
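
Between the pairs of runs, the instance-level setting was switched; the standard commands for doing so are:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max degree of parallelism', 0;  -- 0 for the first pair of benchmarks, 1 for the second
RECONFIGURE;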

Concurrent Requests | @Batches | @IO_Benchmark | @NewPatternsInBatch | @StressLevel | @Description1 | @Description2
2 | 1000 | 1 | 1 | 1 | OLTP | MAXDOP=0
2 | 10 | 1 | 100 | 3 | OLAP | MAXDOP=0
2 | 1000 | 1 | 1 | 1 | OLTP | MAXDOP=1
2 | 10 | 1 | 100 | 3 | OLAP | MAXDOP=1

OLTP Bias Workload Results

When processing 1,000 small batches, the benchmark typically completed in half the time with MAXDOP set to 0.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
OLTP - MAXDOP=0 | 0.00 | 4.00 | 7.00 | 8.67 | 10.00 | 177.00
OLTP - MAXDOP=1 | 0.00 | 7.00 | 13.00 | 17.87 | 20.00 | 274.00

OLAP Bias Workload Results

When processing 10 larger expensive batches, the benchmark typically completed in half the time with MAXDOP set to 1.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
OLAP - MAXDOP=0 | 42,997 | 48,348 | 49,664 | 50,497 | 51,430 | 59,897
OLAP - MAXDOP=1 | 19,580 | 22,020 | 23,488 | 23,869 | 24,648 | 36,147

These graphs and statistics were produced using the R script below:

rm(list=ls())
ca <- read.csv(file = "c:/DataScience/CA_MAXDOP_20170930.csv", header = TRUE, sep = ",", stringsAsFactors = FALSE)
# Factors
ca$StartTime <- as.factor(ca$StartTime)
ca$AutomationEngine <- as.factor(ca$BenchmarkPerspective)
ca$OLTP_OLAP <- as.factor(ca$Description1)
ca$MAXDOP <- as.factor(ca$Description2)
#install.packages("ggplot2")
library("ggplot2")
fill <- "#4271AE"
line <- "#1F3552"
ca_OLTP <- subset(ca, ca$OLTP_OLAP == "OLTP")
ca_OLAP <- subset(ca, ca$OLTP_OLAP == "OLAP")
p1 <- ggplot(ca_OLTP, aes(x = AutomationEngine, y = BatchDurationMS)) + geom_boxplot()
p1 <- p1 + scale_x_discrete(name = "Automation Engine") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") + 
  geom_boxplot(fill = fill, colour = line)  
p1 <- p1 +  ggtitle("Boxplot of OLTP Duration by MAXDOP setting")
p1 <- p1 + facet_wrap(~MAXDOP)
p1
p1 <- ggplot(ca_OLAP, aes(x = AutomationEngine, y = BatchDurationMS)) + geom_boxplot()
p1 <- p1 + scale_x_discrete(name = "Automation Engine") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") + 
  geom_boxplot(fill = fill, colour = line)  
p1 <- p1 +  ggtitle("Boxplot of OLAP Duration by MAXDOP setting")
p1 <- p1 + facet_wrap(~MAXDOP)
p1
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$MAXDOP== "MAXDOP=0"))
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$MAXDOP == "MAXDOP=1"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$MAXDOP== "MAXDOP=0"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$MAXDOP == "MAXDOP=1"))

Use Case - Measuring the Cost of an Extended Event Session

The CA benchmarking process was invoked using OSTRESS with the parameters in the table below; the Extended Event session (see references) was OFF for the first two benchmarks, then ON for the next two.
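
The actual session definition is in the references; a hypothetical session of the same general shape, capturing completed statements to a file target, would look like this (the session name is an assumption):

CREATE EVENT SESSION [CA_Cost_Test] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'CA_Cost_Test');
GO
ALTER EVENT SESSION [CA_Cost_Test] ON SERVER STATE = START;   -- the 'XE ON' benchmarks
--ALTER EVENT SESSION [CA_Cost_Test] ON SERVER STATE = STOP;  -- the 'XE OFF' benchmarks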

Concurrent Requests | @Batches | @IO_Benchmark | @NewPatternsInBatch | @StressLevel | @Description1 | @Description2
2 | 1000 | 1 | 1 | 1 | OLTP | XE OFF
2 | 10 | 1 | 100 | 3 | OLAP | XE OFF
2 | 1000 | 1 | 1 | 1 | OLTP | XE ON
2 | 10 | 1 | 100 | 3 | OLAP | XE ON

OLTP Bias Workload Results

When processing 1,000 small batches with the XE session running, the mean benchmark time roughly doubled.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
OLTP - XE ON | 3.00 | 13.00 | 17.00 | 23.93 | 24.00 | 243.00
OLTP - XE OFF | 0.00 | 4.00 | 7.00 | 10.00 | 13.00 | 247.00

OLAP Bias Workload Results

When processing 10 larger expensive batches with the XE Session running, there was far greater variability in the benchmark times.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
OLAP - XE ON | 17,946 | 20,612 | 23,337 | 27,078 | 32,496 | 46,286
OLAP - XE OFF | 16,540 | 20,433 | 22,652 | 22,773 | 24,252 | 30,703

These graphs and statistics were produced using the R script below:

rm(list=ls())
ca <- read.csv(file = "c:/DataScience/CA_XE_20170930.csv", header = TRUE, sep = ",", stringsAsFactors = FALSE)
# Factors
ca$StartTime <- as.factor(ca$StartTime)
ca$AutomationEngine <- as.factor(ca$BenchmarkPerspective)
ca$OLTP_OLAP <- as.factor(ca$Description1)
ca$XE <- as.factor(ca$Description2)
#install.packages("ggplot2")
library("ggplot2")
fill <- "#4271AE"
line <- "#1F3552"
ca_OLTP <- subset(ca, ca$OLTP_OLAP == "OLTP")
ca_OLAP <- subset(ca, ca$OLTP_OLAP == "OLAP")
p1 <- ggplot(ca_OLTP, aes(x = AutomationEngine, y = BatchDurationMS)) + geom_boxplot()
p1 <- p1 + scale_x_discrete(name = "Automation Engine") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") + 
  geom_boxplot(fill = fill, colour = line)  
p1 <- p1 +  ggtitle("Boxplot of OLTP Duration with Extended Event")
p1 <- p1 + facet_wrap(~XE)
p1
p1 <- ggplot(ca_OLAP, aes(x = AutomationEngine, y = BatchDurationMS)) + geom_boxplot()
p1 <- p1 + scale_x_discrete(name = "Automation Engine") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") + 
  geom_boxplot(fill = fill, colour = line)  
p1 <- p1 +  ggtitle("Boxplot of OLAP Duration with Extended Event")
p1 <- p1 + facet_wrap(~XE)
p1
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$Description2 == "XE ON"))
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$Description2 == "XE OFF"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$Description2 == "XE ON"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$Description2 == "XE OFF"))

Use Case - Hekaton Benchmark

A Microsoft whitepaper, 'In-Memory OLTP – Common Workload Patterns and Migration Considerations', highlights Hekaton performance and describes the workload patterns that suit in-memory OLTP. This is an independent assessment of Hekaton that includes the scripts and processes used, so the results can be reproduced and are open to scrutiny.

Benchmarks were taken for OLTP and OLAP bias workloads, running both as a single request and as two concurrent requests. This was an attempt to assess the performance of in-memory tables and natively compiled stored procedures on a Windows Server 2016 virtual machine with SQL Server 2016 SP1, 8 GB of RAM and 2 vCPUs. The CA benchmarking process was invoked using OSTRESS with the parameters in the table below, to simulate first an OLTP bias workload with many small batches, then an OLAP bias workload.
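
As background, the constructs being benchmarked are a memory-optimized table with a hash index and a natively compiled procedure; the sketch below uses assumed names and shapes, not the article's actual schema:

-- Assumed shapes for illustration only
CREATE TABLE dbo.CA_CellsInMemory
(
    x INT NOT NULL,
    y INT NOT NULL,
    CONSTRAINT PK_CA_CellsInMemory PRIMARY KEY NONCLUSTERED HASH (x, y) WITH (BUCKET_COUNT = 1048576)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);  -- SCHEMA_ONLY suits transient benchmark data
GO
CREATE PROCEDURE dbo.CA_CountLiveCells
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'english')
    SELECT COUNT(*) AS LiveCells FROM dbo.CA_CellsInMemory;
END;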

Concurrent Requests | @Batches | @IO_Benchmark | @Hek_Benchmark | @NewPatternsInBatch | @StressLevel | @Description1 | @Description2
2 | 1000 | 1 | 0 | 1 | 1 | OLTP | Two Concurrent Requests
2 | 1000 | 0 | 1 | 1 | 1 | OLTP | Two Concurrent Requests
2 | 10 | 1 | 0 | 100 | 3 | OLAP | Two Concurrent Requests
2 | 10 | 0 | 1 | 100 | 3 | OLAP | Two Concurrent Requests
1 | 1000 | 1 | 0 | 1 | 1 | OLTP | One Concurrent Request
1 | 1000 | 0 | 1 | 1 | 1 | OLTP | One Concurrent Request
1 | 10 | 1 | 0 | 100 | 3 | OLAP | One Concurrent Request
1 | 10 | 0 | 1 | 100 | 3 | OLAP | One Concurrent Request

OLTP Bias Workload Results

When processing 1,000 small batches, Hekaton was typically around four times faster and more consistent, mainly due to the natively compiled stored procedures.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
OLTP - SQL IO | 0.00 | 7.00 | 10.00 | 16.77 | 14.00 | 507.00
OLTP - Hekaton | 0.00 | 3.00 | 3.00 | 3.81 | 4.00 | 217.00

OLAP Bias Workload Results

When processing 10 larger batches, Hekaton was typically around five times faster and far more consistent, mainly due to the elimination of lock and latch contention.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
OLAP - SQL IO | 15,050 | 17,294 | 23,534 | 32,275 | 40,173 | 88,643
OLAP - Hekaton | 1,363 | 4,323 | 5,630 | 5,963 | 7,430 | 10,040

These graphs and statistics were produced using the R script below:

rm(list=ls())
ca <- read.csv(file = "c:/DataScience/CA_Hekaton_20170930.csv", header = TRUE, sep = ",", stringsAsFactors = FALSE)
# Factors
ca$StartTime <- as.factor(ca$StartTime)
ca$AutomationEngine <- as.factor(ca$BenchmarkPerspective)
ca$OLTP_OLAP <- as.factor(ca$Description1)
ca$Concurrency <- as.factor(ca$Description2)
#install.packages("ggplot2")
library("ggplot2")
fill <- "#4271AE"
line <- "#1F3552"
ca_OLTP <- subset(ca, ca$OLTP_OLAP == "OLTP")
ca_OLAP <- subset(ca, ca$OLTP_OLAP == "OLAP")
p1 <- ggplot(ca_OLTP, aes(x = AutomationEngine, y = BatchDurationMS)) + geom_boxplot()
p1 <- p1 + scale_x_discrete(name = "Automation Engine") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") + 
  geom_boxplot(fill = fill, colour = line)  
p1 <- p1 +  ggtitle("Boxplot of OLTP Duration by Automation Engine - Hekaton or SQL IO")
p1 <- p1 + facet_wrap(~Concurrency)
p1
p1 <- ggplot(ca_OLAP, aes(x = AutomationEngine, y = BatchDurationMS)) + geom_boxplot()
p1 <- p1 + scale_x_discrete(name = "Automation Engine") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") + 
  geom_boxplot(fill = fill, colour = line)  
p1 <- p1 +  ggtitle("Boxplot of OLAP Duration by Automation Engine - Hekaton or SQL IO")
p1 <- p1 + facet_wrap(~Concurrency)
p1
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$AutomationEngine == "HEK" & ca_OLTP$Description2 == "One Concurrent Request"))
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$AutomationEngine == "HEK" & ca_OLTP$Description2 == "Two Concurrent Requests"))
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$AutomationEngine == "IO " & ca_OLTP$Description2 == "One Concurrent Request"))
summary(subset(ca_OLTP$BatchDurationMS,ca_OLTP$AutomationEngine == "IO " & ca_OLTP$Description2 == "Two Concurrent Requests"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$AutomationEngine== "HEK" & ca_OLAP$Description2 == "One Concurrent Request"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$AutomationEngine== "HEK" & ca_OLAP$Description2 == "Two Concurrent Requests"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$AutomationEngine == "IO " & ca_OLAP$Description2 == "One Concurrent Request"))
summary(subset(ca_OLAP$BatchDurationMS,ca_OLAP$AutomationEngine == "IO "& ca_OLAP$Description2 == "Two Concurrent Requests"))

Use Case - SQL Azure Benchmarking

The 'Azure SQL Database DTU Calculator' service (see references) recommends a 'Performance Level' for a given SQL Server workload: a representative workload is run on an existing server, performance counters are captured, and the counters are then supplied to the DTU Calculator, which makes a recommendation. A 'DTU' is a 'Database Throughput Unit', an indication of the CPU and IO resource requirements, and the service level, for a given workload on a given server. It is calculated by capturing the performance counters below while the server is under load:

  • Processor - % Processor Time
  • Logical Disk - Disk Reads/sec
  • Logical Disk - Disk Writes/sec
  • Database - Log Bytes Flushed/sec
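
The two disk counters and the processor counter are captured from Windows (Performance Monitor or typeperf); the log flush figure can also be sanity-checked from T-SQL, as in the hedged sketch below (the database name is taken from the appendix scripts):

SELECT instance_name AS DatabaseName,
       cntr_value AS LogBytesFlushedPerSec  -- cumulative per-second counter: sample twice and take the difference
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Log Bytes Flushed/sec%'
  AND instance_name = 'CA_SQLAzure';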

The CA benchmarking process (see references) was run on a local Hyper-V virtual server with Windows Server 2016 and SQL Server 2016, 2 vCPUs and 8 GB of RAM. The performance counters were captured during the benchmark and passed to the DTU Calculator, which suggested Premium - P2.

The same CA benchmark workload was then run on the VM and on Azure (P2) using the parameters below; the results from both were saved as a single csv file and imported into R Studio.

Concurrent Requests | @Batches | @IO_Benchmark | @CPU_Benchmark | @NewPatternsInBatch | @StressLevel | @Description1
2 | 10 | 1 | 0 | 50 | 2 | VM
2 | 10 | 0 | 1 | 50 | 2 | VM
2 | 10 | 1 | 0 | 50 | 2 | Azure
2 | 10 | 0 | 1 | 50 | 2 | Azure

CPU Benchmark

SQL Azure CPU performance, at the recommended P2 level, was roughly one and a half times as fast as the VM.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
CPU - VM | 11,733 | 21,004 | 22,890 | 22,581 | 24,703 | 30,307
CPU - SQL Azure | 7,673 | 12,214 | 14,570 | 14,317 | 16,514 | 20,126

IO Benchmark

SQL Azure IO performance, at the recommended P2 level, was more than twice as fast as the VM.

Benchmark (ms) | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max.
IO - VM | 10,546 | 25,009 | 26,315 | 26,290 | 28,998 | 32,850
IO - SQL Azure | 7,234 | 9,044 | 10,562 | 10,299 | 11,254 | 14,610

The T-SQL script below was then run in two concurrent query sessions to produce the CA Benchmark workload, first on the VM and then on SQL Azure.

EXECUTE dbo.CA_Benchmark @Batches = 10, @CPU_Benchmark = 1, @IO_Benchmark = 1, @NewPatternsInBatch = 50, @DisplayPatterns = 0, @Initialize = 1, @StressLevel = 2, @Description1 = 'xx';  -- replace 'xx' with 'VM' or 'Azure' to match the run

The results from the CA benchmarks in Azure and the VM were combined and saved as a csv file then imported into R and graphed using the script below:

rm(list=ls())
ca <- read.csv(file = "c:/DataScience/CA_Azure_20170930.csv", header = TRUE, sep = ",", stringsAsFactors = FALSE)
# Factors
ca$StartTime <- as.factor(ca$StartTime)
ca$BenchmarkPerspective <- as.factor(ca$BenchmarkPerspective)
ca$ServerName <- as.factor(ca$Description1)
#install.packages("ggplot2")
library("ggplot2")
ca_CPU <- subset(ca,ca$BenchmarkPerspective == "CPU")
ca_IO <- subset(ca,ca$BenchmarkPerspective == "IO ")
fill <- "#4271AE"
line <- "#1F3552"
# CPU Benchmark
p1 <- ggplot(ca_CPU, aes(x = ServerName, y = BatchDurationMS)) + geom_boxplot()
p1 <- p1 + scale_x_discrete(name = "Server Name") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") +
  geom_boxplot(fill = fill, colour = line)
p1 <- p1 +  ggtitle("Boxplot of CPU Benchmark Duration by Server Name")
p1 <- p1 + facet_wrap(~ BenchmarkPerspective)
p1
# IO Benchmark
p2 <- ggplot(ca_IO, aes(x = ServerName, y = BatchDurationMS)) + geom_boxplot()
p2 <- p2 + scale_x_discrete(name = "Server Name") +
  scale_y_continuous(name = "Mean Benchmark Duration Milliseconds") +
  geom_boxplot(fill = fill, colour = line)
p2 <- p2 +  ggtitle("Boxplot of IO Benchmark Duration by Server Name")
p2 <- p2 + facet_wrap(~ BenchmarkPerspective)
p2
summary(subset(ca_CPU$BatchDurationMS,ca_CPU$ServerName == "VM"))
summary(subset(ca_CPU$BatchDurationMS,ca_CPU$ServerName == "Azure"))
summary(subset(ca_IO$BatchDurationMS,ca_IO$ServerName == "VM"))
summary(subset(ca_IO$BatchDurationMS,ca_IO$ServerName == "Azure"))

Use Case - Synthetic Transaction

The CA Benchmark procedure can be executed without parameters to take a small benchmark or 'Synthetic Transaction'. The graph below shows a busy development server hosting three instances of SQL Server 2012 under constant and consistent load. On the afternoon of the 15th September, a change was made and the available CPU count went from 20 to 60 for all three (non-affinitized) instances. The effect of the extra CPUs was evident in the improved consistency of the CA Benchmark times.
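
For a synthetic transaction, the call needs no parameters at all:

EXECUTE dbo.CA_Benchmark;  -- default, parameterless call: a small (~5 second) benchmark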

Suggested Usage

  • Schedule a SQL Agent job to run from 24 hours before any event likely to impact a SQL Server instance, taking a default CA Benchmark (~5 seconds) at five-minute intervals, and disable the job 24 hours after the change. A comparison of the before/after results should show whether the change had any impact on the performance of client requests to the instance (a job sketch follows this list).
  • After installing a new SQL Server instance, take a predefined (SLA) CA Benchmark that places a representative load. The result in isolation isn't very interesting, but the same CA Benchmark can subsequently be taken on any other new SQL Server installation and the relative performance of the servers compared.
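
A minimal sketch of the scheduled job from the first suggestion, with assumed job and database names:

-- Runs the default CA Benchmark every five minutes (assumed names throughout)
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'CA Synthetic Benchmark';
EXEC dbo.sp_add_jobstep @job_name = N'CA Synthetic Benchmark',
     @step_name = N'Run CA_Benchmark',
     @subsystem = N'TSQL',
     @database_name = N'CA_Benchmark',  -- hypothetical database name
     @command = N'EXECUTE dbo.CA_Benchmark;';
EXEC dbo.sp_add_jobschedule @job_name = N'CA Synthetic Benchmark',
     @name = N'Every 5 minutes',
     @freq_type = 4,             -- daily
     @freq_interval = 1,
     @freq_subday_type = 4,      -- units of minutes
     @freq_subday_interval = 5;
EXEC dbo.sp_add_jobserver @job_name = N'CA Synthetic Benchmark';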

References

License

SQL Server Cellular Automation Benchmarking is licensed under the MIT license, a popular and widely used open source license.

Copyright (c) 2017 Paul Brewer

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Appendix

The PowerShell script below was used to perform the benchmark tests in this article; OSTRESS is part of the RML Utilities for SQL Server (http://www.microsoft.com/en-gb/download/details.aspx?id=4511). Each line in the script was highlighted and run individually, as running it as a single batch caused unpredictable results:

cd "C:\Program Files\Microsoft Corporation\RMLUtils"
Set-Location .
cls
######################## 
# Hekaton / SQL
# Two concurrent requests
# OLTP
./ostress -SVM01 -dCA_SQLHekaton -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'Two Concurrent Requests', @Initialize = 1;" -o"C:\temp\log1"
./ostress -SVM01 -dCA_SQLHekaton -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @Hek_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'Two Concurrent Requests', @Initialize = 1;" -o"C:\temp\log2"
# OLAP
./ostress -SVM01 -dCA_SQLHekaton -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1,@Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'Two Concurrent Requests', @Initialize = 1;" -o"C:\temp\log3"
./ostress -SVM01 -dCA_SQLHekaton -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @Hek_Benchmark = 1, @Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'Two Concurrent Requests', @Initialize = 1;" -o"C:\temp\log4"
######################## 
# Hekaton / SQL
# One concurrent request
# OLTP
./ostress -SVM01 -dCA_SQLHekaton -n1 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'One Concurrent Request', @Initialize = 1;" -o"C:\temp\log1"
./ostress -SVM01 -dCA_SQLHekaton -n1 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @Hek_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'One Concurrent Request', @Initialize = 1;" -o"C:\temp\log2"
# OLAP
./ostress -SVM01 -dCA_SQLHekaton -n1 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1,@Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'One Concurrent Request', @Initialize = 1;" -o"C:\temp\log3"
./ostress -SVM01 -dCA_SQLHekaton -n1 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @Hek_Benchmark = 1, @Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'One Concurrent Request', @Initialize = 1;" -o"C:\temp\log4"

######################## 
# SQL
# MAXDOP=0
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'MAXDOP=0', @Initialize = 1;" -o"C:\temp\log1"
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'MAXDOP=0', @Initialize = 1;" -o"C:\temp\log1"
# MAXDOP=1
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'MAXDOP=1', @Initialize = 1;" -o"C:\temp\log1"
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'MAXDOP=1', @Initialize = 1;" -o"C:\temp\log1"
######################## 
# SQL
# XE OFF
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'XE OFF', @Initialize = 1;" -o"C:\temp\log1"
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'XE OFF', @Initialize = 1;" -o"C:\temp\log1"
# XE ON
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 1000, @NewPatternsInBatch = 1, @StressLevel = 1, @Description1 = 'OLTP', @Description2 = 'XE ON', @Initialize = 1;" -o"C:\temp\log1"
./ostress -SVM01 -dCA_SQLAzure -n2 -r1 -Q"EXECUTE [dbo].[CA_Benchmark] @IO_Benchmark = 1, @Batches = 10, @NewPatternsInBatch = 100, @StressLevel = 3, @Description1 = 'OLAP', @Description2 = 'XE ON', @Initialize = 1;" -o"C:\temp\log1"

All results were collected using the T-SQL script below, saved as csv files and passed to the R scripts:

SELECT CONVERT(VARCHAR(16),[BatchStartTime],120) AS StartTime
      ,[BatchDurationMS]
      ,[BenchmarkPerspective]
      ,Description1
      ,Description2
      ,Description3
FROM [dbo].[CA_BenchmarkResults]

Resources
