• Write penalties on RAID: during the nineties an issue on server hardware, in 2011 an issue on gamer PCs.

    Back then it was the parity calculation and the limited processing speed of hardware RAID controllers that reduced data transfer rates. E.g. the Adaptec AAA-2400 has 4x UDMA 33 channels and natively supports RAID 5.

    In stripe-set mode across 4 disks the controller writes about 126 MB/s, which is close to the limit of 32-bit/33 MHz PCI. In RAID 5 mode the CPU on the controller (an Intel i960) gets really hot, and it writes at only about 30 MB/s.
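
    As a quick sanity check on that number: the theoretical bus ceiling follows directly from the PCI spec values (32-bit width, 33 MHz clock), and the small snippet below is just that arithmetic, nothing more.

        # Theoretical peak of 32-bit / 33 MHz PCI, for comparison with the
        # ~126 MB/s measured in stripe-set mode (bus overhead explains the gap).
        bus_width_bytes = 32 // 8          # 4 bytes per transfer
        clock_hz = 33_333_333              # 33.33 MHz PCI clock

        peak_mb_per_sec = bus_width_bytes * clock_hz / 1_000_000
        print(peak_mb_per_sec)             # ~133 MB/s theoretical peak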

    This penalty is more or less gone, at least as long as customers don't buy their servers at Walmart with an ICH10R built in.

    Current quality hardware can calculate redundancy in real time. With 4 disks a simple XOR algorithm can be used, but the ICH10R is much faster with odd disk counts. That is cheap scrap, and an admin will die in hell if he uses an ICH10R in production.
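
    To make the "simple XOR" point concrete, here is a minimal sketch of RAID 5-style parity: the parity block is the XOR of the data blocks, and any single lost block can be rebuilt by XOR-ing the survivors. The function names are mine and purely illustrative, not any controller's API.

        from functools import reduce

        def parity(blocks):
            """XOR all blocks byte-by-byte to form the parity block."""
            return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

        def rebuild(surviving_blocks):
            """Reconstruct one missing block from the survivors plus parity.
            XOR-ing everything that is left cancels out the known data."""
            return parity(surviving_blocks)

        # 3 data disks + 1 parity disk (the 4-disk case mentioned above)
        d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
        p = parity([d0, d1, d2])

        # Pretend disk 1 died: rebuild its block from the other data + parity.
        assert rebuild([d0, d2, p]) == d1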

    I can also understand the Oracle guys living with a dozen independent disks in their system, while forgetting that this multiplies faults instead of tolerating them.

    Both strategies (RAID and individual disks) allow offline recovery if hardware fails, but RAID 5 on cheap hardware is nearly intolerable.

    In general I would talk about reliability and not about "I like / I dislike RAID". Enterprise storage has to be reliable, that is a fact: it shows no flaws during a rebuild and just works.

    Spending $100,000 on an enterprise SAN leaves no gaps and is fast enough, so what we are really discussing are small companies and $1,000-$5,000 entry-class servers, barely worth being called servers.

    Anyway, a good RAID controller should NOT show any kind of penalty nowadays, nor any affinity for even or odd disk counts.

    In principle RAID 5 with an n+1 layout can die easily if a disk fails and the next disk fails during the rebuild. RAID 5 with n+2 redundancy (effectively RAID 6) could be a better (and more expensive) solution, but on smaller systems with only 4 disks that leaves 2 disks for redundancy, just as in a RAID 10 setup. RAID 10 will die later in this configuration, but the risk of failing disks is greatly increased by simply buying a bulk of disks from the same production date.
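
    A rough back-of-the-envelope comparison of the two 4-disk layouts after one disk has already failed: the failure rate and rebuild time below are assumed round numbers, not measurements, and disk failures are treated as independent, which the same-production-date remark above explicitly undermines.

        # Assumed numbers: 3% annual failure rate per disk, 24 h rebuild window.
        # Failures treated as independent, which same-batch disks rarely are.
        afr = 0.03
        rebuild_hours = 24.0
        p_fail_during_rebuild = afr * rebuild_hours / (365 * 24)

        # RAID 5 (3+1): after one failure, losing ANY of the 3 remaining
        # disks during the rebuild kills the array.
        p_raid5_loss = 1 - (1 - p_fail_during_rebuild) ** 3

        # RAID 10 (2x2): after one failure, only the dead disk's mirror
        # partner is critical during the rebuild.
        p_raid10_loss = p_fail_during_rebuild

        print(f"RAID 5  loss risk during rebuild: {p_raid5_loss:.6%}")
        print(f"RAID 10 loss risk during rebuild: {p_raid10_loss:.6%}")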