http://www.sqlservercentral.com/blogs/stratesql/2013/04/10/staggering-sqlio-sessions-across-multiple-volumes/

Printed 2014/09/02 02:04PM

Staggering SQLIO Sessions Across Multiple Volumes

By StrateSQL, 2013/04/10

In a previous post, I discussed running SQLIO against multiple files. The purpose of using multiple files is to test not just a single volume’s ability to deliver data from storage, but the storage as a whole. Doing so is important because whatever application is running on the server, in my case SQL Server, will be using the entirety of the hardware, not just a single path to a single volume.

But if you plan to deploy SQL Server across multiple volumes, it is also important to know the performance of each volume and the point at which adding those volumes into the mix leads to internal contention. As an example, if I know that any single volume can deliver 2 GB/s, would three volumes then give me 6 GB/s of throughput? In a perfect world they would, but we don’t live in a perfect world.
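One way to put a number on how far real results fall short of that ideal is a scaling efficiency: the measured aggregate throughput divided by the single-volume throughput times the number of volumes. A minimal sketch, using hypothetical measurements (the 2 GB/s figure from above and an assumed aggregate result):

```powershell
# Hypothetical numbers for illustration -- substitute your own SQLIO results.
$singleVolumeMBs = 2048      # MB/s measured from one volume tested alone
$volumeCount     = 3
$aggregateMBs    = 5222      # MB/s measured with all three volumes running

$idealMBs   = $singleVolumeMBs * $volumeCount
$efficiency = $aggregateMBs / $idealMBs

"Ideal: {0} MB/s, Measured: {1} MB/s, Efficiency: {2:P0}" -f $idealMBs, $aggregateMBs, $efficiency
```

An efficiency well below 100% suggests the volumes are contending for some shared resource, such as a controller or interconnect, rather than scaling independently.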

To test these scenarios, I’ll often break down and run multiple SQLIO tests across each of the necessary volumes. And since I am a generally lazy person who wants precision without effort, I leverage PowerShell with SQLIO to accomplish this task. The script provided in Listing 1 sets up the SQLIO sessions. A couple of things to be aware of: the volumes to be tested are provided in the $volumes array, the $seconds variable determines the length of each test, and the $offsetSeconds variable determines how long to wait before starting the next SQLIO test. The script then uses Start-Process to launch SQLIO and sleeps until it needs to start the next session.


#Listing 1. PowerShell SQLIO Script

Set-Location "C:\Program Files (x86)\SQLIO"
Clear-Host

$iOType = "R"          # R = read test, W = write test
$stripe = "random"     # random or sequential I/O pattern
$threads = 2           # threads per test file
$outstanding = 4       # outstanding I/Os per thread
$blockSize = 32        # I/O block size in KB
$seconds = 60          # duration of each test
$volumes = ("C","F","G")

$offsetSeconds = 20    # stagger: delay before starting the next session

foreach ($volume in $volumes) {
    $filename = "$($volume):\temp\testfile.dat"
    $output = ".\sqlio_output_$($volume).log"
    $arglist = "-k$iOType -f$stripe -t$threads -o$outstanding -b$blockSize -BH -LS $filename -s$seconds"

    Start-Process .\sqlio.exe -PassThru -ArgumentList $arglist -RedirectStandardOutput $output
    Write-Host $arglist
    Start-Sleep $offsetSeconds
}

While SQLIO’s own output reports the raw throughput numbers, running this script can surface information those numbers alone don’t show. For instance, reviewing the Disk Read Bytes/sec counter in Performance Monitor offers some interesting tidbits. In Figure 1, the colored zones represent the periods during which each of the volumes was processing:

Figure 1. Performance Monitor Output

What’s interesting in these results is that the combination of the three drives during the ramp-up provides relatively stable, linear growth, but as the volumes are removed, the performance becomes more chaotic. Another observation: adding the G: drive increases the totals on the chart by only about 40, yet when the G: drive is processing alone, its values range from about 50 to 110. You might be able to squeeze more throughput out of the drives with a better IO subsystem than the one on this server, or in this case, my laptop.

Using a single SQLIO session and just looking at the numbers SQLIO reports can be useful when you need to know the throughput of a system. Often, though, the linear growth expected from additional volumes isn’t seen, and it is necessary to determine where the breaking points are. At what point do you no longer get the lift in IO that you expect? Running SQLIO in a staggered PowerShell script, as provided in this post, can help identify where the breakdown is occurring.
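Since the script in Listing 1 redirects each session’s output to its own log file, the headline numbers can be pulled back out with a little more PowerShell. A minimal sketch, assuming the logs sit in the SQLIO folder and contain SQLIO’s standard "IOs/sec:" and "MBs/sec:" summary lines:

```powershell
# Summarize the per-volume logs written by the staggered script in Listing 1.
foreach ($volume in ("C","F","G")) {
    $log = ".\sqlio_output_$($volume).log"
    if (Test-Path $log) {
        # SQLIO prints summary lines such as "IOs/sec:  3117.25" and "MBs/sec:   97.41".
        $ios = (Select-String -Path $log -Pattern "IOs/sec:").Line
        $mbs = (Select-String -Path $log -Pattern "MBs/sec:").Line
        Write-Host "Volume $($volume): $ios  $mbs"
    }
}
```

This gives a quick per-volume comparison without opening each log, which pairs well with the Performance Monitor view for spotting where the expected lift drops off.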


Copyright © 2002-2014 Simple Talk Publishing. All Rights Reserved.