• I may get my official membership in BAARF revoked, but in Oracle SQL tests in spring '09 with a then-new midrange IBM POWER6 server and DS-series SANs connected by dual 4Gb FC HBAs, I saw no discernible IO performance difference in reads or writes between RAID-5 and RAID-10 setups, even under unrealistically stressed conditions (the tests were throttled solely by the CPU's ability to generate enough IO). I forget the exact array sizes for each, but the SAN caching was truly effective in nullifying the write performance penalty. I should also mention that we have an SVC frontending the SANs, although I'm not sure how that would affect IO performance for the testing, unless its cache is additively effective (when it's not shuffling volumes around).
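    For anyone wanting to try the same comparison themselves, the idea is simple enough to sketch. A purpose-built tool like fio is the usual choice, but here's a minimal, hypothetical Python stand-in (the volume paths and sizes are made up, not our actual test setup) that times small synchronous writes, the pattern where the RAID-5 parity penalty would normally show up if the SAN cache weren't absorbing it:

```python
import os
import time

def write_throughput(path, total_mb=64, block_kb=8):
    """Time small synchronous writes and return MB/s.

    With a battery-backed SAN cache in front, RAID-5 and RAID-10
    volumes can report near-identical numbers here, since writes are
    acknowledged from cache long before parity is computed on disk.
    """
    block = b"\0" * (block_kb * 1024)
    count = (total_mb * 1024) // block_kb
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(block)
            os.fsync(f.fileno())  # push each block past the OS page cache
    elapsed = time.monotonic() - start
    return total_mb / elapsed

# Hypothetical usage: one file on each volume type, then compare.
# print(write_throughput("/r5_vol/testfile"))
# print(write_throughput("/r10_vol/testfile"))
```

    If both numbers come out the same even as you scale up the write load, you're seeing the cache, not the spindles, which is exactly what our testing showed.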

    One other knock on R5 is the array rebuild performance hit after a lost drive is replaced. We've lost multiple drives since then (my guess is thermal stress after an AC failure), albeit no more than one at a time in any array, and I've seen zero performance effect from either the failure or the replacement, at least on our production Oracle box.

    When relaying this to my former boss who is BAARFier than I am, he said that they came to the same damnable conclusion. I guess technology really does change. 🙂

    That being said, there is ZERO chance I'll approve of a standalone RAID-3/4/5/6/etc config for critical production systems. It's just the BAARF in me...