I am working on migrating databases on multiple servers to Amazon, and I have a question regarding IOPS. I can determine the IOPS for each drive on my SQL Servers, which I have done. For example, the log drive needs 1000 IOPS, the data drive 2000 IOPS, and the tempdb drive 1000 IOPS. Amazon asks for an IOPS setting starting at 1000, and I need to determine the amount required. Would it simply be the sum of all three (4000 IOPS), or just the largest single drive (2000 IOPS)? I'm guessing there are multiple schools of thought, and I would appreciate any assistance.
I'm also hoping I did the calculation right: ((drive IOPS, roughly 125 for a 10k drive) * # of drives * % read) + ((drive IOPS * # of drives) * % write * RAID write penalty)
So say we have a RAID-5 array with five 10k drives and a 50/50 read/write mix. With a RAID-5 write penalty of 4, that would calculate as ((125*5)*.50) + (((125*5)*.50)*4)
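To sanity-check that arithmetic, here is a small sketch of the same formula (the function name and structure are my own, not from any AWS tool). It treats reads as costing one back-end I/O each and writes as costing the RAID write penalty (4 for RAID-5, 2 for RAID-10):

```python
def backend_iops(workload_iops, read_pct, write_penalty):
    """Back-end IOPS generated by a front-end workload on a RAID array.

    workload_iops: front-end I/O rate to the array
    read_pct: fraction of I/Os that are reads (0.0 - 1.0)
    write_penalty: RAID write penalty (RAID-5 = 4, RAID-10 = 2, none = 1)
    """
    write_pct = 1.0 - read_pct
    # Reads cost one back-end I/O; each write costs `write_penalty` I/Os.
    return workload_iops * read_pct + workload_iops * write_pct * write_penalty

raw = 125 * 5  # five 10k spindles at ~125 IOPS each = 625
print(backend_iops(raw, 0.50, 4))  # 1562.5
```

Plugging in the numbers from the example gives (625 * .50) + (625 * .50 * 4) = 312.5 + 1250 = 1562.5 back-end IOPS.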