EMC VNX SAN RAID5

  • Hi Everyone,

    We are in the process of purchasing a new SAN for our entire organization: SQL Server, file servers, VMs, etc. The consultants we are working with are recommending a solution which is entirely RAID5. The EMC VNX SAN would include 3 tiers of storage (SSDs, 10k SAS, 7.5k NL SAS) and automatically moves LUNs around based on the IOPS requirements. We'd also have the ability to "attach" a LUN to a specific tier if we wanted to. It will also have 400GB of FAST cache.

    Our primary database is our EHR and is 70GB right now, plus we have Great Plains and a number of other databases supporting IT infrastructure. All-in-all not a huge database footprint, but we keep adding more.

    Based on past experience (research and on the job), I have avoided RAID5 because RAID10 simply had much better I/O performance. I'm hesitant to just go with their recommendation, since it represents a deviation from my past experience and from what nearly every article says about SQL Server drive performance, but then again they have the expertise in new SAN tech.

    Just curious to hear what other folks have been doing and whether, in your humble opinion, this seems like a logical solution.

    TIA!

    -Dan

  • Not sure about EMC, but we are using a Nimble Storage hybrid flash array. We switched from NetApp, and I'm getting around 15K IOPS and seeing excellent storage performance.

    [font="Tahoma"]
    --SQLFRNDZ[/url]
    [/font]

  • Is that with a RAID5 disk array or RAID10?

    Wish we could go with all flash, but not sure the budget is right for that! :-)

  • dan-404057 (6/5/2015)

    The RAID5 array will essentially be carved up a bit like a cake, with LUNs provisioned from it. The cache size seems large enough to support most workloads; however, you should test this once your chosen configuration is in place. The cache should be configured so that all writes hit the cache itself and not the disk array. It all depends on how the cache is distributed/configured, though.
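
    One rough way to check that writes really are being absorbed by the array cache is to time small synchronous writes against a test file on the LUN and look at the latencies. This is only a minimal sketch, not a substitute for a proper tool such as DiskSpd or fio; the file path, block size, and sample count are assumptions you'd adjust for your environment.

        # Minimal write-latency sanity check (a sketch; path, block size and
        # sample count are assumptions -- point TEST_FILE at the LUN under test).
        import os, statistics, time

        TEST_FILE = "test_write_latency.dat"   # hypothetical test file on the LUN
        BLOCK = b"\0" * 8192                   # 8 KB, a typical SQL Server page size
        SAMPLES = 2000

        fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o644)
        latencies_ms = []
        try:
            for _ in range(SAMPLES):
                start = time.perf_counter()
                os.write(fd, BLOCK)
                os.fsync(fd)                   # push the write past the OS buffer cache
                latencies_ms.append((time.perf_counter() - start) * 1000.0)
        finally:
            os.close(fd)
            os.remove(TEST_FILE)

        # Sub-millisecond medians suggest the array write cache is absorbing the
        # writes; several milliseconds suggests they are landing on spinning disk.
        latencies_ms.sort()
        print(f"median {statistics.median(latencies_ms):.2f} ms, "
              f"p95 {latencies_ms[int(SAMPLES * 0.95)]:.2f} ms")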

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Hi Dan,

    Did you come to a conclusion on your raid configuration?

    My advice:

    My EMC rep claimed that RAID 5 has better performance because you have more spindles available for writes. Depending on the number of disks, e.g. 12 disks in RAID 5 gives you 11 data spindles (11 + 1 parity), whereas in RAID 10 you get only 6 (6 + 6 mirrored). However, RAID 5 carries roughly a 4x write penalty due to the parity reads and writes, so the gain in spindles is overshadowed by slow block writes; a rough back-of-the-envelope comparison is sketched below. Stick with your gut: RAID 10 still outperforms RAID 5 in most cases; you just lose out on storage. The few cases where RAID 5 has outperformed RAID 10 are rare, and not common in a mixed environment. The best solution? Add another drive tray to your VNX so that you have an additional 12 drives, giving you 6 more spindles in RAID 10. Expensive, I know, but it worked out for us in the long run.
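
    As a back-of-the-envelope illustration of that write penalty (the 180 IOPS per 10k disk and the 70/30 read/write mix are assumptions, not measurements from our array):

        # Effective random-IOPS estimate for a 12-disk group, using the usual
        # RAID write-penalty rule of thumb (RAID 5 = 4 back-end I/Os per write,
        # RAID 10 = 2). Per-disk IOPS and the read/write mix are assumptions.
        def effective_iops(disks, per_disk_iops, read_frac, write_penalty):
            raw = disks * per_disk_iops
            return raw / (read_frac + (1 - read_frac) * write_penalty)

        DISKS, PER_DISK, READ_FRAC = 12, 180, 0.70
        print("RAID 5 :", round(effective_iops(DISKS, PER_DISK, READ_FRAC, 4)))   # ~1137
        print("RAID 10:", round(effective_iops(DISKS, PER_DISK, READ_FRAC, 2)))   # ~1662

    The gap widens as the write fraction grows, which is why log- and tempdb-heavy workloads feel the RAID 5 penalty hardest.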

  • We ended up going with an IBM v5000 with RAID10. The price for the IBM was only a couple thousand more than the EMC with RAID5. The configuration includes tiered storage with SSD, 15k and 10k drives for various IO requirements.

    Overall projected IO throughput was slightly better with the IBM.

    In all honesty the EMC vendor wasn't interested in selling us anything but RAID5. They threw together a RAID10 config at the last minute, but it felt pretty half-baked.

    We're installing it soon. Fingers crossed the migration from our existing SAN goes smoothly.

  • Sounds like an excellent solution. Let us know how the IBM turns out. Glad you stuck with your gut. Although I'm sure that SSD tier will scream!

  • Good choice. Don't forget to check the IBM Redbooks for the best SQL Server storage configuration. They usually give good advice, especially if you'll be using a hybrid or multi-tier configuration.

  • dan-404057 (8/26/2015)


    We ended up going with an IBM v5000 with RAID10. The price for the IBM was only a couple thousand more than the EMC with RAID5. The configuration includes tiered storage with SSD, 15k and 10k drives for various IO requirements.

    This is what was missing from your other solution. That large cache size should have ensured your write I/O did not slow down your TPS, but that cache is only as fast as the path to it.

    the EMC vendor wasn't interested in selling us anything but RAID5.

    Of course, because they sell more spindles in that configuration. You need more disks, and they wear out faster because of all the parity writes. If you have tempdb and the transaction log files on RAID5, this is doubly true, no matter how the write cache is configured or where it is located.

    I mention these things because I have seen SANs that should support 100k IOPS over multipath deliver less than 10k IOPS, or only 20 MB/s, because of bad I/O path and cache configuration (see the quick arithmetic below).

    I currently work with a SAN that does very well with terrible hardware because it is configured correctly.
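
    To put those numbers in perspective (the block size is an assumption; 8 KB matches a SQL Server page, and the 100k IOPS / 20 MB/s figures are just the ones quoted above):

        # Quick arithmetic relating IOPS and throughput at an assumed 8 KB block size.
        BLOCK_KB = 8

        def mbps_from_iops(iops, block_kb=BLOCK_KB):
            return iops * block_kb / 1024.0

        def iops_from_mbps(mbps, block_kb=BLOCK_KB):
            return mbps * 1024.0 / block_kb

        print(f"100,000 IOPS @ {BLOCK_KB} KB = {mbps_from_iops(100_000):.0f} MB/s")  # ~781 MB/s
        print(f"20 MB/s @ {BLOCK_KB} KB = {iops_from_mbps(20):.0f} IOPS")            # 2,560 IOPS

    In other words, an array rated for 100k IOPS that only manages 20 MB/s at 8 KB blocks is delivering roughly 2,500 IOPS, which points at the path or cache configuration rather than the disks.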
