I would recommend the following scenario:
• Restore the production database on the testing server and simulate a similar workload; make sure that tempdb and the copy of the production database are located on the same drive
• In Windows Performance Monitor, create a ~24-hour trace and capture the Disk Reads/sec and Disk Writes/sec counters; save the results as TSV, for example. Open the trace in Excel and calculate the average and stdev values for both counters (a script sketch for this step follows the calculation below). Let’s assume you get avg + stdev ≈ 300 Reads/sec and ≈ 300 Writes/sec
• Now you can calculate the storage.
Use the formula: (#Reads + #Writes × RAID penalty) / #physical drives = per-drive throughput. I assume ~120 IOPS per SATA drive and ~200 IOPS per SSD (reliable SSD parameters are difficult to find).
The RAID penalty for RAID 1 is 2, because every write goes to both mirrors.
On the 1st RAID 1, based on SSD: (300 Reads/sec + 300 Writes/sec × 2) / 200 IOPS ≈ 4.5 drives, which in a real configuration means 6 drives (RAID 1 needs an even number).
On the 2nd RAID 1, based on SATA: (300 Reads/sec + 300 Writes/sec × 2) / 120 IOPS = 7.5 drives, so a minimum of 8 drives.
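As a repeatable version of the two steps above, here is a minimal Python sketch that reads the counter samples from the perfmon TSV export, computes avg + stdev, and applies the sizing formula. The file name and column indexes are assumptions; adjust them to match your own export (relog can convert a .blg trace to TSV).

```python
import csv
import math
import statistics

# Assumed names/positions -- adjust to your actual perfmon export.
TSV_PATH = "perfmon_trace.tsv"
READS_COL = 1   # column holding Disk Reads/sec samples
WRITES_COL = 2  # column holding Disk Writes/sec samples

def load_samples(path, col):
    """Read one counter column from a perfmon TSV export, skipping the header."""
    samples = []
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # header row with the counter names
        for row in reader:
            try:
                samples.append(float(row[col]))
            except (ValueError, IndexError):
                continue  # skip blank or malformed intervals
    return samples

def required_drives(reads, writes, penalty, drive_iops):
    """(#Reads + #Writes * penalty) / per-drive IOPS, rounded up to an
    even drive count because RAID 1 works in mirrored pairs."""
    n = math.ceil((reads + writes * penalty) / drive_iops)
    return n + (n % 2)

reads = load_samples(TSV_PATH, READS_COL)
writes = load_samples(TSV_PATH, WRITES_COL)

# avg + stdev is the sizing target used in the steps above
target_r = statistics.mean(reads) + statistics.stdev(reads)
target_w = statistics.mean(writes) + statistics.stdev(writes)

print(f"Target: {target_r:.0f} reads/sec, {target_w:.0f} writes/sec")
print("SSD  (200 IOPS/drive):", required_drives(target_r, target_w, 2, 200))
print("SATA (120 IOPS/drive):", required_drives(target_r, target_w, 2, 120))
```

With the assumed 300/300 workload this prints 6 and 8 drives, matching the calculation above.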
Afterwards, on production, measure the actual latency with Performance Monitor: use the Avg. Disk sec/Read and Avg. Disk sec/Write counters and check them against the 50 ms threshold.
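For that latency check, a small continuation of the same sketch (reusing load_samples and TSV_PATH from above) could count the intervals that exceed the threshold; the column indexes are again assumptions:

```python
# Assumed column positions for the latency counters in the same export;
# perfmon reports Avg. Disk sec/Read and Avg. Disk sec/Write in seconds.
LAT_READ_COL, LAT_WRITE_COL = 3, 4
THRESHOLD = 0.050  # the 50 ms threshold

lat = load_samples(TSV_PATH, LAT_READ_COL) + load_samples(TSV_PATH, LAT_WRITE_COL)
slow = sum(1 for v in lat if v > THRESHOLD)
print(f"{slow}/{len(lat)} intervals over {THRESHOLD * 1000:.0f} ms")
```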
Both RAID 1 arrays, based on 2 physical drives each, have very limited capacity: around 100 reads/sec and 150 writes/sec.
On one of the arrays you have to install the OS and the SQL Server binaries, and on the second the database files. Maybe you can combine the database logs and the OS on the SSD drives and put the database files on SATA, but first of all you have to estimate the load.