RAID configurations

  • We are in the process of putting a new database cluster in place that will be connected to an MSA 2212fc SAN. We have been advised to use RAID 50 across all drives on the system and to let the SAN keep the data separate and in cache.

    I have always tended to use local drives, or, when using a SAN, LUNs on RAID 10 for the database files and logs.

    Has anyone got any experience using RAID 50 at all?

    Cheers

    John

  • I don't think I'd use 50. It'll have the same problem as 5 - poor write performance due to the parity stripe. Each write may incur up to ((no of disks in array) - 2) reads and 1 extra write.
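
    Roughly, that arithmetic looks like this. A back-of-envelope sketch (the disk counts are illustrative, and a controller's write cache can hide much of the penalty):

    ```python
    # Physical I/Os behind one small logical write, per the penalty
    # described above. Worst-case arithmetic only; write cache not modeled.

    def raid5_write_cost(n_disks):
        """RAID 5 has two paths for a small write:
        - read-modify-write: read old data + old parity, write new data
          + new parity -> (2 reads, 2 writes)
        - reconstruct-write: read the other n-2 data disks to rebuild
          parity, then write data + parity -> (n-2 reads, 2 writes),
          the 'up to (disks - 2) reads' case."""
        return {"read-modify-write": (2, 2),
                "reconstruct-write": (n_disks - 2, 2)}

    def raid10_write_cost():
        """One logical write just lands on both halves of a mirror pair."""
        return (0, 2)

    for n in (6, 12):
        print(f"RAID 5, {n} disks: {raid5_write_cost(n)}; "
              f"RAID 10: {raid10_write_cost()} (reads, writes)")
    ```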

    No personal experience with it though.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I also would avoid RAID 50 for databases. Who recommended it? SAN vendors should have had enough experience with SQL databases to help you make a good decision, so I would typically go with whatever configuration your particular SAN vendor recommends. In your case, the recommendation does not sound right.

  • RAID 50 is really only useful in that it can survive more than one drive failure (one in each RAID 5 leg) before you lose data, rather than losing data as soon as 2 drives fail in a RAID 5. Sounds like a support or hardware concern from a vendor who doesn't think their hardware is up to spec, or who couldn't get out to replace a drive within a short time window.

    Performance is not RAID 10-like, but really it all depends on the kind of throughput you need from the storage whether it's worth investing in the cost (although you could probably go RAID 10, striped mirrors, for not much difference in cost).
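
    To make the failure-tolerance point concrete, here's a quick enumeration (assuming a 12-disk array split into two 6-disk RAID 5 legs - the split and disk count are my assumptions):

    ```python
    # Which simultaneous two-disk failures does a two-leg RAID 50 survive?
    # A second failure is fatal only when it lands in the same RAID 5 leg.
    from itertools import combinations

    LEG = {disk: disk // 6 for disk in range(12)}  # disks 0-5 leg 0, 6-11 leg 1

    pairs = list(combinations(range(12), 2))
    ok = [p for p in pairs if LEG[p[0]] != LEG[p[1]]]
    print(f"{len(ok)} of {len(pairs)} two-disk failures survivable "
          f"({len(ok) / len(pairs):.0%})")  # 36 of 66, ~55%
    ```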



    Shameless self promotion - read my blog http://sirsql.net

  • RAID 10 or RAID 01, not 50, would be my recommendation.

  • RAID 10 if you can, RAID 5 if you want to keep it simple. RAID 0+1 involves mirroring over striped sets and is not as fault tolerant as RAID 10.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • I use the MSA 2212fc with RAID 5 at the moment and it is fine for our applications - but you need to decide how fast you need your DB to be and do some testing.

  • We just upgraded from direct-attached storage to a Dell EqualLogic PS5000 (14x 15k 450GB SAS + 2 hot spares).

    We were running RAID 10 on the direct storage (4x 15k 150GB SAS), but when the local Dell reps came in and set up the EqualLogic (you can't set it up yourself; they make you use a local Dell rep to do it, BTW), they set it up as RAID 50 across all 14 drives. I questioned them repeatedly about it, but they claimed that the performance boost you'd see from a 14-drive RAID 50 array would more than exceed the performance needs of our system.

    Been about 2 weeks now and so far I/O has not been an issue from my monitoring. We're back to the usual CPU limitations...

    Now to get the boss to get a nice quad by quad CPU system 🙂

    Just for reference we're running SQL 2005 Enterprise x64 on a win 2k3 x64 Enterprise box (dual by dual core w/ 16gb ram). ~100GB primary DB with 4 other 10-20GB supporting DBs. We average 200-250 active connections running at any time.

  • Your SAN controller will cache the writes to disk, so the performance hit on RAID 5 is not as bad as it used to be on direct-attached storage, but there may still be some, as GilaMonster indicated earlier. You need to determine what problem you are trying to solve. So what is RAID 50 but a mirrored RAID 5, right? If redundancy is the problem, RAID 5 has that built in, PLUS the SAN has hot spare capability to allow multiple disk failures (number of spares less 2) before the RAID 5 array is offline. If I/O is the issue, then RAID 10 is a better solution.

    For SQL we have always separated tempdb onto its own array within the SAN, as well as the transaction logs. We have a number of SQL databases (300 GB+) in the SAN where the data is on RAID 5 (always use an ODD number of spindles on RAID 5; try to use no fewer than 5 and no more than 9) with a moderate I/O load (150 MB/sec) without I/O performance issues.

    We typically use separate zones in our SAN based on the app - Exchange, Oracle, SQL, TSM - to prevent one app from smashing the others. No experience with RAID 50, but I would not go that route.
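
    For scale, that MB/sec figure converts to IOPS roughly like this (the 64 KB transfer size and the 9-spindle count are assumptions for illustration - the post only gives the MB/sec number):

    ```python
    # Rough MB/sec -> IOPS conversion for the quoted 150 MB/sec load.
    MB, KB = 1024 * 1024, 1024
    load_mb_s, block_kb, spindles = 150, 64, 9  # assumed transfer size / spindle count

    iops = load_mb_s * MB / (block_kb * KB)
    print(f"{iops:.0f} IOPS total, ~{iops / spindles:.0f} per spindle")  # 2400, ~267
    ```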

  • Cheers for the input guys.

    The system that we are looking to put in place is MS Dynamics AX 2009 running on a SQL 2008 Enterprise A/P cluster.

    Potentially 150 system users connected, though this is likely to be closer to 100 concurrent.

    I think one of the reasons behind the RAID 50 choice was capacity to cost (12x 300GB 15K SAS). The hardware arrives tomorrow from the looks of it, so I will just have to see how it behaves with a number of IO tools, I guess.

    Not the biggest user base really, so I guess we should be OK.
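
    For reference, the capacity side of that trade-off works out roughly like this (a quick sketch using the 12x 300GB drives; the two-leg RAID 50 split is an assumption, and figures are raw, before formatting):

    ```python
    # Usable capacity of 12 x 300 GB drives under the RAID levels discussed.
    DRIVES, SIZE_GB = 12, 300

    usable = {
        "RAID 10 (6 mirror pairs)":  DRIVES // 2 * SIZE_GB,
        "RAID 5  (1 parity disk)":   (DRIVES - 1) * SIZE_GB,
        "RAID 50 (2x 6-disk legs)":  (DRIVES - 2) * SIZE_GB,
    }
    for layout, gb in usable.items():
        print(f"{layout}: {gb} GB usable of {DRIVES * SIZE_GB} GB raw")
    ```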

  • RAID 5 will only tolerate 1 disk failure at any time, even if there are 3 spares. RAID 50 does allow multiple failures, AFAIK, but RAID 10 is a better option. Writes to the data files are random anyway; that should fit in well with RAID 5, really.

    rhamnusia (12/3/2008):
    So what is RAID 50 but a mirrored RAID 5, right?

    No, RAID 50 is striping across parity (RAID 5) sets, not mirroring.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • What was the result of this in the end? Did you have any issues with RAID 50?

  • In the end I convinced the powers that be to go with a different drive configuration; basically I haggled the hardware vendor down enough to get the number of drives for a good price.

    DB - 10 disks in RAID 10

    TempDB - 4 disks in RAID 10

    Backups - 2 disks in RAID 1

    SSAS - 4 disks in RAID 5 (this will be doubling once we start running SSAS in anger)

    LUNs are grouped into access types and then presented via the relevant SAN controller. We seem to have not had any problems with performance, and I am happier as I have a known setup for my drives.

  • 1) can't a RAID50 have half + 1 drives fail??

    2) make sure you do things like sector alignment and 64K cluster size formatting (see the alignment-check sketch after this list).

    3) I believe RAID 50 can be significantly faster than RAID 5 since, as long as ONE side of the mirror has acknowledged a write, the system can respond "I got it, move on". You also get the mirror read benefit of either side being able to serve up reads too, right?? Of course I could be completely off in left field here . . . 🙂
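
    On point 2, a minimal sketch of the alignment check (the 64K values are the common SQL Server recommendation; on Windows 2003 you'd read the partition offset with `wmic partition get StartingOffset`):

    ```python
    # A partition is aligned when its starting offset is an exact multiple
    # of both the array stripe unit and the NTFS allocation unit, so no
    # I/O straddles a stripe boundary. Values below are illustrative.
    KB = 1024

    def is_aligned(offset_bytes, stripe_unit=64 * KB, cluster=64 * KB):
        return offset_bytes % stripe_unit == 0 and offset_bytes % cluster == 0

    # The old default offset (63 sectors = 31.5 KB) is misaligned; 1 MB is fine.
    for offset in (63 * 512, 1024 * KB):
        print(offset, "aligned" if is_aligned(offset) else "MISALIGNED")
    ```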

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Benchmark, benchmark, benchmark.

    Microsoft's "SQLIO" provides at least some very controlled benchmark capabilities for random vs. sequential reads and writes (each independent of the other), while IOMeter allows for mixed loads. There's also Microsoft's SQLIOSim... or installing SQL Server, building some reasonable tables, and replaying a workload captured with Profiler. Have Perfmon running, and keep an eye on latency (avg sec per read/write) and queue lengths, as well as raw transfer speed and page life expectancy.

    For better or for worse, all the theory in the world often fails to match reality, for a wide variety of reasons. Different storage configurations have different performance characteristics.

    I strongly suggest setting up a benchmarking regimen and then testing various configurations:

    The data size required, in RAID 10, 5, 6, and 50. (This lets you see what happens when you try to save spindles for a given data size.)

    The number of spindles you have available for a given LUN/RAID group, in RAID 10, 5, 6, and 50, assuming standard drives (ideally short-stroking setups that would otherwise give more space). (This lets you see what happens when you try to optimize what you have available for IOPS load.)

    You need enough space... and you need enough IOPS capability, as well.
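
    As a rough starting point for that regimen, a harness along these lines can sweep the matrix (a sketch - the sqlio flags are from memory of its readme and the paths are hypothetical, so verify both against your copy before trusting the numbers):

    ```python
    # Sweep SQLIO across read/write, random/sequential, and two block sizes,
    # pulling IOs/sec and average latency out of its output.
    import itertools, re, subprocess

    SQLIO = r"C:\Program Files\SQLIO\sqlio.exe"  # hypothetical install path
    TEST_FILE = r"E:\sqlio_test.dat"             # pre-created file on the LUN under test

    def run_case(kind, pattern, block_kb, depth=8, threads=2, secs=120):
        cmd = [SQLIO, f"-k{kind}", f"-f{pattern}", f"-b{block_kb}",
               f"-o{depth}", f"-t{threads}", f"-s{secs}", "-LS", TEST_FILE]
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        iops = re.search(r"IOs/sec:\s*([\d.]+)", out)
        lat = re.search(r"Avg_Latency\(ms\):\s*(\d+)", out)
        return (float(iops.group(1)) if iops else None,
                int(lat.group(1)) if lat else None)

    for kind, pattern, block in itertools.product("RW", ("random", "sequential"), (8, 64)):
        iops, lat = run_case(kind, pattern, block)
        print(f"{kind} {pattern:10s} {block:3d}KB -> {iops} IOs/sec, {lat} ms avg")
    ```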
