Sorry, I did mean SQLIO - but for these tests I am only using IOMeter.
Results below for RAID 5
MAX IOPS test - 4KB transfer request size, 100% reads, 100% sequential distribution:
Total IO/sec: 121,432
Total MB/sec: 479
Average IO response time: 0.13 ms
Max IO response time: 1.85 ms
CPU utilization: 26.5%
MAX IOPS test with 90% read / 10% write ratio - 4KB transfer request size, 100% sequential distribution:
Total IO/sec: 7,000
Total MB/sec: 90
Average IO response time: 2.28 ms
Max IO response time: 37.2 ms
CPU utilization: 27.6%
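As a quick sanity check on the 100%-read numbers above, throughput should come out to roughly IOPS times the transfer request size. A minimal sketch (the 4KB size is taken from the test description above):

```python
# Sanity check: MB/sec should be roughly IOPS * transfer size.
def mb_per_sec(iops, transfer_kb=4):
    """Convert an IOPS figure at a fixed transfer size to MB/sec."""
    return iops * transfer_kb / 1024  # KB/sec -> MB/sec

print(round(mb_per_sec(121_432)))  # -> 474, close to the 479 MB/sec reported
```

The small gap between 474 and 479 is expected rounding/averaging noise in the reported counters.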
I am very surprised by the RAID 10 results, as I expected its performance to be slower:
MAX IOPS test - 4KB transfer request size, 100% reads, 100% sequential distribution:
Total IO/sec: 69,445
Total MB/sec: 271
Average IO response time: 0.23 ms
Max IO response time: 10.77 ms
CPU utilization: 13.08%
MAX IOPS test with 90% read / 10% write ratio - 4KB transfer request size, 100% sequential distribution:
Total IO/sec: 23,565
Total MB/sec: 92
Average IO response time: 0.68 ms
Max IO response time: 49.9 ms
CPU utilization: 3.02%
So it would seem that in RAID 5, the more writes you have, the more IOPS you lose due to the overhead of maintaining parity across the disks. The main benefit of RAID 5 comes only if you can be sure writes are kept to an absolute minimum, or the volume is read-only altogether.
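That write overhead can be sketched with the usual rule of thumb that each front-end write costs about 4 backend I/Os on RAID 5 (read data, read parity, write data, write parity) versus 2 on RAID 10 (mirror both copies). The raw backend figure below is an illustrative assumption, not one of the measured results:

```python
# Rough model of front-end IOPS under RAID write penalties.
# RAW_BACKEND_IOPS is an assumed illustrative figure, not a measurement.

def effective_iops(raw_backend_iops, read_fraction, write_penalty):
    """Front-end IOPS a RAID set can serve: each read costs 1 backend
    I/O, each write costs `write_penalty` backend I/Os."""
    write_fraction = 1.0 - read_fraction
    backend_cost_per_io = read_fraction * 1 + write_fraction * write_penalty
    return raw_backend_iops / backend_cost_per_io

RAW_BACKEND_IOPS = 10_000
print(round(effective_iops(RAW_BACKEND_IOPS, 1.00, 4)))  # 100% reads, RAID 5  -> 10000
print(round(effective_iops(RAW_BACKEND_IOPS, 0.90, 4)))  # 90/10 mix, RAID 5   -> 7692
print(round(effective_iops(RAW_BACKEND_IOPS, 0.90, 2)))  # 90/10 mix, RAID 10  -> 9091
```

Even at only 10% writes, the penalty-4 parity overhead eats a noticeably larger share of the backend than mirroring does, which matches the direction of the measured drop.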
At our company, we will have replicated data being written to this new storage array (along with other minor writes from custom tables used for reporting, etc.), so I would think we would see the best performance by keeping this new array RAID 10.
Would you agree with this?
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience.