• peter-757102 (4/7/2011)


    coldsteel2112 (4/6/2011)


    peter-757102 (1/10/2011)


    At my company, we have now twice had a disk in a RAID 5 array break where the array could not rebuild itself, and we had to fall back to backups to continue working on another server. An identical failure in a RAID 10 array never caused any issues or significant downtime.

    I noticed this post and had to respond. In a properly configured RAID 5, a single disk failure would not cause a failure of the array. And even if writing the data to the replacement disk had failed, no one would have known it, because the array would have kept chugging along like nothing happened.

    So, I'm wondering if your aversion to RAID 5 is simply based on a misunderstanding? What you've described cannot happen in the real world.

    Theory and practice are two entirely different things.

    The RAID controller choked early during the rebuild and never recovered; we had to send the disks to a specialised company to retrieve the data!

    And the rebuild window of a RAID 5 is VERY risky for the validity of your data. At that point there is NO redundancy, and the disks themselves cannot be read on their own either, so ANYTHING that goes wrong will cause data loss or corruption.

    And with today's big disks, the chance of something like that happening is simply too large.

    To even consider using RAID 5 today on large spinning disks is, IMHO, a bad idea on that basis alone! (See the quick calculation below.)
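
    To put a rough number on that rebuild risk, here is a minimal back-of-the-envelope sketch. It assumes the vendor-quoted unrecoverable read error (URE) rate of 1 per 10^14 bits read that is typical for consumer SATA disks; the rate, the 2TB disk size, and the function name are illustrative assumptions, not figures from this thread:

    ```python
    # Back-of-the-envelope odds of hitting an unrecoverable read error (URE)
    # while rebuilding a degraded RAID 5 array.
    # ASSUMPTION: URE rate of 1 per 1e14 bits read (typical consumer SATA spec).

    URE_RATE = 1e-14      # probability of an unrecoverable error per bit read
    BITS_PER_TB = 8e12    # 1 TB (decimal) = 8e12 bits

    def rebuild_failure_probability(surviving_disks, disk_size_tb, ure_rate=URE_RATE):
        """Probability of at least one URE while reading every surviving
        disk end-to-end during a RAID 5 rebuild."""
        bits_read = surviving_disks * disk_size_tb * BITS_PER_TB
        return 1 - (1 - ure_rate) ** bits_read

    # Example: 4-disk RAID 5 with 2TB disks -> the rebuild must read
    # the 3 surviving disks in full.
    print(f"{rebuild_failure_probability(3, 2.0):.0%}")  # roughly 38%
    ```

    Under those assumptions a 4 x 2TB rebuild lands near a 38% chance of hitting a URE, which is why the rebuild window on large spinning disks is scary; smaller disks or enterprise drives rated at 1 per 10^15 bits bring the number down sharply.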

    Now, for SSDs there might be an exception: as they are so much faster and more reliable, rebuilding can be fast too.

    But even then, I strongly prefer the much simpler and more transparent method of mirroring!

    So if the controller failed, then no matter what flavor of RAID you were running, you would still have had the same issue. Any disks in a failed array would still need a rebuild by the controller once the new disk was installed. So to blame RAID 5 for your issue is a bit unfounded.

    Not saying that RAID 5 is better or worse, but I've been using it for the past 15 years on hundreds of servers (internally) and have never once seen one disk fail with another failing immediately after. Not saying that could never happen, just telling you my experience. As long as you have a hot spare configured and the controller set to high priority on the rebuild, you mitigate the length of time the array is in a vulnerable state.

    Now, with all that said, I agree 😛 I would much prefer RAID 10 or, if I have the capacity, RAID 50. Both possess a greater amount of fault tolerance than RAID 5. So unless there is a space issue (only allowed 4 drives, for instance) and you need the most out of the capacity of those drives (4 x 500GB drives: RAID 5: ~1397GB usable vs. RAID 10: ~931GB), there is no real reason to choose RAID 5.
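
    Quick sanity check on those capacity numbers, as a hedged sketch (the helper name and the decimal-GB-to-GiB conversion are my own, not any particular controller's math):

    ```python
    # Usable capacity for common RAID levels with n identical disks.
    # A "500 GB" disk is decimal; the OS reports binary GiB (~465.66 GiB),
    # which is where the 931 figure above comes from.

    def usable_gib(n_disks, disk_gb):
        gib = disk_gb * 1e9 / 2**30           # decimal GB -> binary GiB
        return {
            "RAID 0":  n_disks * gib,         # striping, no redundancy
            "RAID 5":  (n_disks - 1) * gib,   # one disk's worth of parity
            "RAID 10": (n_disks // 2) * gib,  # every disk mirrored
        }

    for level, cap in usable_gib(4, 500).items():
        print(f"{level:8s}{cap:7.0f} GiB")
    # RAID 0     1863 GiB
    # RAID 5     1397 GiB
    # RAID 10     931 GiB
    ```

    Note that ~1863 GiB is the raw striped (RAID 0) total of all four disks, which is easy to quote by mistake as the RAID 5 capacity; RAID 5 always gives up one disk's worth of space to parity.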

    So, like all things in life, it's about how much money you are willing to throw at it. 😉