Thanks for the replies.

    Steve Jones - SSC Editor (10/18/2013)


    I prefer R1 or R10, though I certainly understand economics.

    RAID1 for everything?

    If you convert to RAID 5, and that might work fine for your workload, I wouldn't add 2 drives; I'd add 3 and have a hot spare. If a drive in the RAID 5 set fails, and you have multiple drives from the same vendor/batch/generation, it is entirely possible you lose another one in short order. I'd have a spare in there, alerts set up, and I'd kick off a rebuild onto the spare ASAP, then replace the failed drive quickly.

    That does sound good, but I don't think we have that kind of setup yet. We're a long way behind what most people consider the bare minimum! That makes me think I should be concentrating on resilience first and foremost.

    You mentioned this is primarily a read-heavy box, correct? If that's the case, then going to RAID 5 might be the move to make if cost is a concern, but keep in mind that changing the disk subsystem is not something to be taken lightly. Although writes aren't the majority here, understand that they will suffer going from RAID 1 to RAID 5 because of the penalty built into its design.

    For every one write, RAID 5 has to perform four operations:

    A) Read the data to be changed

    B) Read the corresponding parity

    C) Write the change

    D) Write the updated parity

    A lot of good advice has already been shared, but I wanted to chime in with this factor, since you're likely not expanding space just for its own sake but because you want capacity for more data to be written to those disks.
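
    To put rough numbers on that four-operation penalty, here is a back-of-the-envelope sketch. The disk count, per-disk IOPS, and read/write mixes below are made-up illustrative figures, not anything measured from this system; the point is only the relative cost, since mirroring (RAID 1) spends 2 back-end I/Os per host write while RAID 5's read-modify-write spends 4.

        # Rough RAID write-penalty math. All inputs are assumed,
        # illustrative values, not measurements from this thread.

        def effective_iops(disks, iops_per_disk, write_fraction, write_penalty):
            """Approximate random IOPS the array can deliver to the host.

            Each host read costs 1 back-end I/O; each host write costs
            `write_penalty` back-end I/Os (2 for RAID 1, 4 for RAID 5).
            """
            raw = disks * iops_per_disk  # total back-end IOPS available
            cost_per_host_io = (1 - write_fraction) + write_fraction * write_penalty
            return raw / cost_per_host_io

        DISKS = 4        # hypothetical array size
        DISK_IOPS = 150  # assumed figure for one 10k rpm spindle

        # Normal read-heavy day: 10% writes
        print(effective_iops(DISKS, DISK_IOPS, 0.10, 2))  # RAID 1 -> ~545 IOPS
        print(effective_iops(DISKS, DISK_IOPS, 0.10, 4))  # RAID 5 -> ~462 IOPS

        # Occasional write-heavy process: 70% writes
        print(effective_iops(DISKS, DISK_IOPS, 0.70, 2))  # RAID 1 -> ~353 IOPS
        print(effective_iops(DISKS, DISK_IOPS, 0.70, 4))  # RAID 5 -> ~194 IOPS

    On the read-heavy days the two layouts come out fairly close, but in the hypothetical write-heavy window RAID 5 delivers roughly half the IOPS of RAID 1, which is exactly the kind of suffering described above.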

    This was what I was worried about. This is a database that mostly serves reads, but a few times a month there are some semi-automated processes (taking place during business hours) that are very write-heavy. From what you're saying above, if we moved to RAID 5 there would be times when the system could theoretically grind to a halt, whereas if we stayed on RAID 1 it might perform a little faster. Is that roughly it?

    Thanks again. I appreciate the help.