RAID and Its impact on your SQL performance

  • rmechaber (5/1/2012)

    So, while I've got your attention: is there any? miniscule? significant? write-performance hit in writing to a 2-drive mirror compared to writing the same data to a single disk (assuming a hardware RAID controller)? That's what I was really inquiring about.

    Sure, it depends on the RAID controller or storage processor. If it was just one write in absolute isolation, it's going to be minimal, but as disk activity starts to pile up, you can run into problems with the bandwidth of the storage gear altogether. If the storage gear doesn't have enough connections to the drives, or if it uses a small shared connection (like 2Gb FC) across a whole lot of drives, then you can run into a bottleneck there.

  • Brent Ozar (5/1/2012)

    rmechaber (5/1/2012)

    So, while I've got your attention: is there any? miniscule? significant? write-performance hit in writing to a 2-drive mirror compared to writing the same data to a single disk (assuming a hardware RAID controller)? That's what I was really inquiring about.

    Sure, it depends on the RAID controller or storage processor. If it was just one write in absolute isolation, it's going to be minimal, but as disk activity starts to pile up, you can run into problems with the bandwidth of the storage gear altogether. If the storage gear doesn't have enough connections to the drives, or if it uses a small shared connection (like 2Gb FC) across a whole lot of drives, then you can run into a bottleneck there.

    Thanks Brent. I don't design SANs or NAS at all, but I like to have an understanding of the performance bottlenecks that can arise.


  • Good article altought was corrected by BrentO. I recently attended to at South Florida group meeting where Brent spoke about SSD. It would be interesting to see what should be the impact on RAID configuration on SSD.

    Thanks Greg for posting this article and Brent for complementing it.

  • In my situation I have a table that is partioned into 12 partiones. So I have different file for each partioned group on the same lun going to the 5 disk stripped set. Would it make more sense to break it up into 12 logical luns. Would SQl server and Windows Server access faster because my SASS channel is 6g and I am only utilizing up to 1g (in monitoring it). This is a read only Data set.

  • jonalberghini (5/1/2012)

    In my situation I have a table that is partitioned into 12 partitions. So I have different file for each portioned group on the same lun going to the 5 disk stripped set. Would it make more sense to break it up into 12 logical luns. Would SQl server and Windows Server access faster because my SASS channel is 6g and I am only utilizing up to 1g (in monitoring it). This is a read only Data set.

    If you have 1 5-disk stripe with no redundancy at all I doubt your partitioning is gaining you anything. You are likely hitting your max reads on the disks as is. The stripe set can only operate 1 read at a time. You will have no parallel reads occurring. I would suggest you have separate stripe sets at that point to allow for parallel access, but that would depend on if the controller and pathing you use would also allow for the parallel access.

  • A questoin; out of curiosity:

    You quoted some PerfMon stats on your production environment.

    Given your analysis, would I be correct in assuming your produciton environment already runs an array of > 100 disks?

    In fact given your preference for RAID 1+0, are you running > 200 disks?

    No to both questions.

    1) We arent using RAID 1+0 or 0+1. We are using a NAS via NFS that implements RAID DP

    2) Our production environment Disk Queues during peak periods. Pretty excessively actually......


    Gregory A Jackson MBA, CSM

  • With the proliferation of SSD disk and its costs going down, some RAID levels with low write performances are no longer so low. Put an SSD in your life. I've done it, both home, laptop and work. Is the best money can buy right now in order to increase performance several degrees. We are planning to do it also on our local servers, first on the test server and then on the main one if evertyhing is ok.

    SSDs are also a little bit of a controversial topic mostly due to their short lifespan.

    It's certainly an option to consider though....I agree.


    Gregory A Jackson MBA, CSM

  • A few small nits to pick:

    Raid 1+0 != Raid 0+1

    If you have 8 drives in a Raid 1+0, say the 4 pairs are:

    AE (mirror)

    BF (mirror)

    CG (mirror)

    DH (mirror)

    And then those 4 mirrors are striped.

    In a Raid 0+1 set, you would have



    and then those two stripe sets are mirrored.

    You could lose 4 drives, say A,B,C, & D, and still have no data loss in either case.

    The difference is, if you lose drive A in 1+0, a 2nd drive failure only has a 1 in 7 chance of causing data loss (if you lose drive E), in 0+1 you have a 4 in 7 chance of losing data (if you lose E,F,G, or H). If you have already lost 2 drives (say A & B) with no data lost, then 1+0 has a 2 in 6 chance of losing data on the 3rd failure, while the 0+1 has 4 in 6 chance.

    If you scale it up to an array of 100 drives, the disparity is even worse, a 1 in 99 chance of losing data on your second drive failure, vs a 50 in 99 chance.

    Raid 6 is effectively the same as Raid DP (there are some differences as to which blocks are being protected by each set of parity data, but the important part remains the same, you are protected against 2 drive failures).

    Also depending on the level of hardware you are using, the write penalty of raid 5 will be largely invisible to the database due to a combination of cache and scheduling of the extra writes during a drives idle time.

    Given arrays of 4/10/and 50 drives (with a 1% per year failure rate) you would expect to see the following failure rates for the different raid types over that year:

    Raid 0: 4%/9.5%,39%

    Raid 1: .000001%/10^-20/10^-100

    Raid 5: .12%,.9%,24.5%

    Raid 6: .0024%/.072%/11.76%

    Raid 1+0: .02%/.05%/.25%

    Raid 0+1: .04%/.24%/4.9%

    Read Speed (assuming 100 operations per second on a single drive):

    Raid 0: 400/1000/5000

    Raid 1: 400/1000/5000

    Raid 5: 300/900/4900

    Raid 6: 200/800/4800

    Raid 1+0: 400/1000/5000

    Raid 0+1: 400/1000/5000

    Write Speed:

    Raid 0: 400/1000/5000

    Raid 1: 100/100/100

    Raid 5: 100/250/1250

    Raid 6: 67/166/833

    Raid 1+0: 200/500/2500

    Raid 0+1: 200/500/2500

    So I would say that the fault tolerance on 1 may be Excellent, and 1+0 may be good (and 0+1 worse than that), but depending on the number of disks, Raid 6/DP may no longer be much better than fair.

    On read performance, 1+0 will be no better than 1, in fact they should all be pretty equal (given the same number of disks) except for 5/6 which will be slightly worse.

    On write performance I would disagree that 1 is "good", the "write penalty" is only 2 in the case of a 2 disk set. So your write performance is the same as a single disk. Once you go above 4 disks (with raid 5) or 6 disks (in a raid 6 array), you will start to see better write performance than with a raid 1 array.

  • I agree short-stroking (using only a portion of the fastest area of the disk, leaving the rest entirely unallocated) can significantly improve spinning disk performance, because it reduces the normal (i.e. not calibrating) maximum head travel, which in turn reduces maximum seek times, which can reduce average seek times. This can cause lower RPM disks to perform as if they were high RPM.

    I'm also disappointed that average seek was used throughout; even without using actual short-stroking, if you start with fresh drives dedicated to SQL data and/or log files, write properly sized files once (without growing them), it's very likely that all of the disk used will be at the "beginning" (outer edge), and will thus act as if short-stroked, with much lower average seek times.

    Even without that, operations where the portions of the data files are close together can have the same effect.

    Likewise, the article ignored sequential rates, which do matter - when you have a close enough approximation of a large scan of a large clustered or nonclustered index without fragmentation, in a filegroup whose files aren't (as a whole as well as individually) fragmented, on a contiguous allocation of dedicated spindles, which isn't in the buffer already, then sequential performance can become very important; this can particularly affect index maintenance and very large queries.

    Note that a previous poster was incorrect regarding SSD's; RAID level can matter, and if RAID level doesn't matter (perhaps your controller or firmware or throughput path is the bottleneck), then a RAID level that provides more storage space at the same performance and a sufficient level of fault tolerance is clearly best.

    The previous poster talking about a SAN with a "massive" cache may have a setup I haven't seen, but I'm hard pressed to find SAN's with more than 64GB of cache*, and even that's pretty rare; our SAN has about 16GB of cache (shared with all SAN clients), while our big SQL servers have over 100GB of buffer space.

    For anyone that wants some raw data, here's a few 6 disk 3Gbps SATA local SSD setups on a modern controller in comparison to a few 10 disk 15k 4Gbps spindle SAN setups at various IOs outstanding - if you look closely, you'll see some odd effects. Every individual test was run for 1200 seconds to reduce cache effects.


    Rnd8Read2119349360103x2 Stripe 128KB

    Rnd8Read2118719360103x2 Stripe 64KB

    Rnd8Read211715926051x5+1 Stripe 64KB

    Rnd8Read211646916051x5+1 stripe 64KB

    Rnd8Read211630916051x5+1 Stripe 256KB

    Rnd8Read211702916051x5+1 Stripe 128KB

    Rnd8Read211649916001x6 Stripe 64KB

    Rnd8Read282516460103x2 Stripe 256KB





    Rnd8Read8372022916051x5+1 Stripe 128KB

    Rnd8Read8371322906051x5+1 Stripe 64KB

    Rnd8Read8368552886051x5+1 stripe 64KB

    Rnd8Read8367402876001x6 Stripe 64KB

    Rnd8Read8365782866051x5+1 Stripe 256KB

    Rnd8Read83631528460103x2 Stripe 128KB

    Rnd8Read83625728360103x2 Stripe 256KB

    Rnd8Read83610428260103x2 Stripe 64KB





    Rnd8Read16598044676051x5+1 Stripe 128KB

    Rnd8Read16593724646051x5+1 Stripe 64KB

    Rnd8Read16588304606051x5+1 stripe 64KB

    Rnd8Read16588654606001x6 Stripe 64KB

    Rnd8Read16585514576051x5+1 Stripe 256KB

    Rnd8Read165741344960103x2 Stripe 256KB

    Rnd8Read165728244860103x2 Stripe 128KB

    Rnd8Read165678444460103x2 Stripe 64KB





    Rnd8Read32858186706051x5+1 Stripe 128KB

    Rnd8Read32846566616051x5+1 Stripe 256KB

    Rnd8Read32820396416001x6 Stripe 64KB

    Rnd8Read32819536406051x5+1 stripe 64KB

    Rnd8Read32813016356051x5+1 Stripe 64KB

    Rnd8Read328104963360103x2 Stripe 256KB

    Rnd8Read328051662960103x2 Stripe 128KB

    Rnd8Read327670459960103x2 Stripe 64KB

    Rnd8Write2178671406001x6 Stripe 64KB

    Rnd8Write21579012360103x2 Stripe 64KB

    Rnd8Write21573312360103x2 Stripe 128KB

    Rnd8Write21535912060103x2 Stripe 256KB

    Rnd8Write28445666051x5+1 stripe 64KB

    Rnd8Write27623606051x5+1 Stripe 64KB

    Rnd8Write27120566051x5+1 Stripe 128KB

    Rnd8Write26879546051x5+1 Stripe 256KB





    Rnd8Write8191431506001x6 Stripe 64KB

    Rnd8Write81705913360103x2 Stripe 128KB

    Rnd8Write81701813360103x2 Stripe 64KB

    Rnd8Write81647812960103x2 Stripe 256KB

    Rnd8Write88987706051x5+1 Stripe 128KB

    Rnd8Write88763686051x5+1 Stripe 256KB

    Rnd8Write88672686051x5+1 Stripe 64KB

    Rnd8Write88641686051x5+1 stripe 64KB





    Rnd8Write16192681516001x6 Stripe 64KB

    Rnd8Write161672813160103x2 Stripe 128KB

    Rnd8Write161655112960103x2 Stripe 64KB

    Rnd8Write161635212860103x2 Stripe 256KB

    Rnd8Write168815696051x5+1 Stripe 128KB

    Rnd8Write168608676051x5+1 Stripe 256KB

    Rnd8Write168519676051x5+1 Stripe 64KB

    Rnd8Write168478666051x5+1 stripe 64KB





    Rnd8Write32188081476001x6 Stripe 64KB

    Rnd8Write321634412860103x2 Stripe 128KB

    Rnd8Write321618112660103x2 Stripe 64KB

    Rnd8Write321596612560103x2 Stripe 256KB

    Rnd8Write328832696051x5+1 Stripe 128KB

    Rnd8Write328624676051x5+1 Stripe 256KB

    Rnd8Write328559676051x5+1 Stripe 64KB

    Rnd8Write328510666051x5+1 stripe 64KB

    Rnd64Read2695743560103x2 Stripe 128KB

    Rnd64Read2691643260103x2 Stripe 64KB

    Rnd64Read2691043260103x2 Stripe 256KB

    Rnd64Read268584296051x5+1 Stripe 128KB

    Rnd64Read268334276051x5+1 Stripe 64KB

    Rnd64Read268064256051x5+1 stripe 64KB

    Rnd64Read267904246051x5+1 Stripe 256KB

    Rnd64Read267624236001x6 Stripe 64KB





    Rnd64Read81810111316051x5+1 Stripe 128KB

    Rnd64Read81807211306051x5+1 Stripe 64KB

    Rnd64Read818028112760103x2 Stripe 256KB

    Rnd64Read818027112760103x2 Stripe 64KB

    Rnd64Read81802911276051x5+1 stripe 64KB

    Rnd64Read818009112660103x2 Stripe 128KB

    Rnd64Read81800911266001x6 Stripe 64KB

    Rnd64Read81798511246051x5+1 Stripe 256KB





    Rnd64Read1621602135060103x2 Stripe 256KB

    Rnd64Read1621576134960103x2 Stripe 128KB

    Rnd64Read1621590134960103x2 Stripe 64KB

    Rnd64Read162155713476051x5+1 Stripe 128KB

    Rnd64Read162152113456051x5+1 Stripe 256KB

    Rnd64Read162149413436001x6 Stripe 64KB

    Rnd64Read162147113426051x5+1 Stripe 64KB

    Rnd64Read162146713426051x5+1 stripe 64KB





    Rnd64Read3222387139960103x2 Stripe 256KB

    Rnd64Read3222382139960103x2 Stripe 64KB

    Rnd64Read3222248139060103x2 Stripe 128KB

    Rnd64Read322174213596001x6 Stripe 64KB

    Rnd64Read322170013566051x5+1 Stripe 64KB

    Rnd64Read322167613556051x5+1 stripe 64KB

    Rnd64Read322168213556051x5+1 Stripe 256KB

    Rnd64Read322165913546051x5+1 Stripe 128KB

    Rnd64Write261913876001x6 Stripe 64KB

    Rnd64Write2338221160103x2 Stripe 64KB

    Rnd64Write2336521060103x2 Stripe 256KB

    Rnd64Write2335020960103x2 Stripe 128KB

    Rnd64Write220171266051x5+1 Stripe 64KB

    Rnd64Write219741236051x5+1 stripe 64KB

    Rnd64Write21551976051x5+1 Stripe 256KB

    Rnd64Write21537966051x5+1 Stripe 128KB





    Rnd64Write862653926001x6 Stripe 64KB

    Rnd64Write8339221260103x2 Stripe 256KB

    Rnd64Write8337021160103x2 Stripe 64KB

    Rnd64Write8335921060103x2 Stripe 128KB

    Rnd64Write820661296051x5+1 Stripe 256KB

    Rnd64Write820521286051x5+1 Stripe 128KB

    Rnd64Write820211266051x5+1 Stripe 64KB

    Rnd64Write819721236051x5+1 stripe 64KB





    Rnd64Write1662353906001x6 Stripe 64KB

    Rnd64Write16339821260103x2 Stripe 256KB

    Rnd64Write16338721260103x2 Stripe 64KB

    Rnd64Write16335020960103x2 Stripe 128KB

    Rnd64Write1620631296051x5+1 Stripe 128KB

    Rnd64Write1620621296051x5+1 Stripe 256KB

    Rnd64Write1620271276051x5+1 Stripe 64KB

    Rnd64Write1619751236051x5+1 stripe 64KB





    Rnd64Write3261863876001x6 Stripe 64KB

    Rnd64Write32338121160103x2 Stripe 256KB

    Rnd64Write32336921160103x2 Stripe 64KB

    Rnd64Write32335421060103x2 Stripe 128KB

    Rnd64Write3220641296051x5+1 Stripe 256KB

    Rnd64Write3220551286051x5+1 Stripe 128KB

    Rnd64Write3220061256051x5+1 Stripe 64KB

    Rnd64Write3219831246051x5+1 stripe 64KB

    Seq8Read2635374966051x5+1 Stripe 128KB

    Seq8Read2625114886051x5+1 Stripe 256KB

    Seq8Read2551424316051x5+1 Stripe 64KB

    Seq8Read2538764216051x5+1 stripe 64KB

    Seq8Read25369641960103x2 Stripe 256KB

    Seq8Read2519474066001x6 Stripe 64KB

    Seq8Read24640336360103x2 Stripe 128KB

    Seq8Read23827229960103x2 Stripe 64KB





    Seq8Read8836236536051x5+1 Stripe 256KB

    Seq8Read8836166536051x5+1 Stripe 128KB

    Seq8Read87591459360103x2 Stripe 256KB

    Seq8Read8737405766051x5+1 Stripe 64KB

    Seq8Read8698325466051x5+1 stripe 64KB

    Seq8Read8690005396001x6 Stripe 64KB

    Seq8Read86537551160103x2 Stripe 128KB

    Seq8Read85309641560103x2 Stripe 64KB





    Seq8Read16964307536051x5+1 Stripe 256KB

    Seq8Read16907907096051x5+1 Stripe 128KB

    Seq8Read168502966460103x2 Stripe 256KB

    Seq8Read16783336126001x6 Stripe 64KB

    Seq8Read167584059260103x2 Stripe 128KB

    Seq8Read166763852860103x2 Stripe 64KB

    Seq8Read16493283856051x5+1 Stripe 64KB

    Seq8Read16488213816051x5+1 stripe 64KB





    Seq8Read3210007078260103x2 Stripe 64KB

    Seq8Read32936887326051x5+1 Stripe 256KB

    Seq8Read328719768160103x2 Stripe 256KB

    Seq8Read328496566460103x2 Stripe 128KB

    Seq8Read32705975526001x6 Stripe 64KB

    Seq8Read32698345466051x5+1 Stripe 128KB

    Seq8Read32686315366051x5+1 stripe 64KB

    Seq8Read32684005346051x5+1 Stripe 64KB

    Seq8Write2589924616051x5+1 Stripe 256KB

    Seq8Write2577964526051x5+1 Stripe 128KB

    Seq8Write2531624156051x5+1 Stripe 64KB

    Seq8Write2512714016051x5+1 stripe 64KB

    Seq8Write2498763906001x6 Stripe 64KB

    Seq8Write24794337560103x2 Stripe 128KB

    Seq8Write24456834860103x2 Stripe 256KB

    Seq8Write24189532760103x2 Stripe 64KB





    Seq8Write8868676796051x5+1 Stripe 256KB

    Seq8Write8815346376051x5+1 Stripe 128KB

    Seq8Write86894853960103x2 Stripe 256KB

    Seq8Write8683525346051x5+1 Stripe 64KB

    Seq8Write8681975336001x6 Stripe 64KB

    Seq8Write8668225226051x5+1 stripe 64KB

    Seq8Write86531851060103x2 Stripe 128KB

    Seq8Write85426242460103x2 Stripe 64KB





    Seq8Write16973797616051x5+1 Stripe 256KB

    Seq8Write16886416936051x5+1 Stripe 128KB

    Seq8Write16777606086001x6 Stripe 64KB

    Seq8Write16739645786051x5+1 stripe 64KB

    Seq8Write16739815786051x5+1 Stripe 64KB

    Seq8Write167005754760103x2 Stripe 256KB

    Seq8Write166992954660103x2 Stripe 128KB

    Seq8Write166069447460103x2 Stripe 64KB





    Seq8Write32985397706051x5+1 Stripe 256KB

    Seq8Write32882836906051x5+1 Stripe 128KB

    Seq8Write32791046186001x6 Stripe 64KB

    Seq8Write32777326076051x5+1 Stripe 64KB

    Seq8Write32747705846051x5+1 stripe 64KB

    Seq8Write327059155160103x2 Stripe 128KB

    Seq8Write327046355060103x2 Stripe 256KB

    Seq8Write326321749460103x2 Stripe 64KB

    Seq64Read22376214856051x5+1 Stripe 128KB

    Seq64Read22233113966001x6 Stripe 64KB

    Seq64Read22176213606051x5+1 Stripe 256KB

    Seq64Read219893124360103x2 Stripe 256KB

    Seq64Read218004112560103x2 Stripe 128KB

    Seq64Read2915657260103x2 Stripe 64KB

    Seq64Read288725556051x5+1 stripe 64KB

    Seq64Read284535286051x5+1 Stripe 64KB





    Seq64Read822451140360103x2 Stripe 64KB

    Seq64Read821797136260103x2 Stripe 128KB

    Seq64Read821441134060103x2 Stripe 256KB

    Seq64Read82088013056051x5+1 Stripe 128KB

    Seq64Read82079813006051x5+1 Stripe 64KB

    Seq64Read82074812976051x5+1 stripe 64KB

    Seq64Read82073812966001x6 Stripe 64KB

    Seq64Read82014512596051x5+1 Stripe 256KB





    Seq64Read1623266145460103x2 Stripe 128KB

    Seq64Read1623250145360103x2 Stripe 64KB

    Seq64Read1623254145360103x2 Stripe 256KB

    Seq64Read162261614146051x5+1 stripe 64KB

    Seq64Read162262114146051x5+1 Stripe 256KB

    Seq64Read162257614116051x5+1 Stripe 128KB

    Seq64Read162257014116001x6 Stripe 64KB

    Seq64Read162253614096051x5+1 Stripe 64KB





    Seq64Read3222749142260103x2 Stripe 256KB

    Seq64Read3222733142160103x2 Stripe 64KB

    Seq64Read3222708141960103x2 Stripe 128KB

    Seq64Read322177113616051x5+1 Stripe 64KB

    Seq64Read322176813606051x5+1 stripe 64KB

    Seq64Read322176613606001x6 Stripe 64KB

    Seq64Read322176513606051x5+1 Stripe 128KB

    Seq64Read322175013596051x5+1 Stripe 256KB

    Seq64Write21718510746001x6 Stripe 64KB

    Seq64Write2147789246051x5+1 Stripe 128KB

    Seq64Write2146459156051x5+1 Stripe 64KB

    Seq64Write2145919126051x5+1 stripe 64KB

    Seq64Write2144569036051x5+1 Stripe 256KB

    Seq64Write2895656060103x2 Stripe 128KB

    Seq64Write2887355560103x2 Stripe 64KB

    Seq64Write2865154160103x2 Stripe 256KB





    Seq64Write81772711086001x6 Stripe 64KB

    Seq64Write8147999256051x5+1 Stripe 128KB

    Seq64Write8147539226051x5+1 Stripe 64KB

    Seq64Write8146719176051x5+1 stripe 64KB

    Seq64Write8144189016051x5+1 Stripe 256KB

    Seq64Write8888855660103x2 Stripe 64KB

    Seq64Write8890255660103x2 Stripe 128KB

    Seq64Write8864854160103x2 Stripe 256KB





    Seq64Write161769911066001x6 Stripe 64KB

    Seq64Write16147759236051x5+1 Stripe 128KB

    Seq64Write16147559226051x5+1 Stripe 64KB

    Seq64Write16147029196051x5+1 stripe 64KB

    Seq64Write16143989006051x5+1 Stripe 256KB

    Seq64Write16889455660103x2 Stripe 64KB

    Seq64Write16887455560103x2 Stripe 128KB

    Seq64Write16879455060103x2 Stripe 256KB





    Seq64Write321771311076001x6 Stripe 64KB

    Seq64Write32148069256051x5+1 Stripe 128KB

    Seq64Write32147479226051x5+1 Stripe 64KB

    Seq64Write32147109196051x5+1 stripe 64KB

    Seq64Write32144889056051x5+1 Stripe 256KB

    Seq64Write32891655760103x2 Stripe 128KB

    Seq64Write32887355560103x2 Stripe 64KB

    Seq64Write32876454860103x2 Stripe 256KB

    Seq256Read2543513596051x5+1 Stripe 256KB

    Seq256Read24991124860103x2 Stripe 64KB

    Seq256Read2484112106051x5+1 Stripe 64KB

    Seq256Read2478511966001x6 Stripe 64KB

    Seq256Read2476211906051x5+1 stripe 64KB

    Seq256Read24283107160103x2 Stripe 128KB

    Seq256Read24095102460103x2 Stripe 256KB

    Seq256Read238629666051x5+1 Stripe 128KB





    Seq256Read85824145660103x2 Stripe 64KB

    Seq256Read85768144260103x2 Stripe 128KB

    Seq256Read85757143960103x2 Stripe 256KB

    Seq256Read8560414016051x5+1 stripe 64KB

    Seq256Read8558713976001x6 Stripe 64KB

    Seq256Read8558213966051x5+1 Stripe 64KB

    Seq256Read8545413646051x5+1 Stripe 128KB

    Seq256Read8544913626051x5+1 Stripe 256KB





    Seq256Read165812145360103x2 Stripe 64KB

    Seq256Read165759144060103x2 Stripe 256KB

    Seq256Read165752143860103x2 Stripe 128KB

    Seq256Read16544813626051x5+1 Stripe 256KB

    Seq256Read16544613616051x5+1 Stripe 128KB

    Seq256Read16543713596051x5+1 Stripe 64KB

    Seq256Read16543413596051x5+1 stripe 64KB

    Seq256Read16543213586001x6 Stripe 64KB





    Seq256Read325765144160103x2 Stripe 64KB

    Seq256Read325755143960103x2 Stripe 128KB

    Seq256Read325632140860103x2 Stripe 256KB

    Seq256Read32544813626051x5+1 stripe 64KB

    Seq256Read32543813606051x5+1 Stripe 64KB

    Seq256Read32543913606001x6 Stripe 64KB

    Seq256Read32544113606051x5+1 Stripe 128KB

    Seq256Read32543413596051x5+1 Stripe 256KB

    Seq256Write2442911076001x6 Stripe 64KB

    Seq256Write237029256051x5+1 Stripe 128KB

    Seq256Write236879226051x5+1 Stripe 64KB

    Seq256Write236689176051x5+1 stripe 64KB

    Seq256Write236219056051x5+1 Stripe 256KB

    Seq256Write2223655960103x2 Stripe 128KB

    Seq256Write2222055560103x2 Stripe 64KB

    Seq256Write2220055060103x2 Stripe 256KB





    Seq256Write8442911076001x6 Stripe 64KB

    Seq256Write837059266051x5+1 Stripe 128KB

    Seq256Write836909236051x5+1 Stripe 64KB

    Seq256Write836759196051x5+1 stripe 64KB

    Seq256Write836089026051x5+1 Stripe 256KB

    Seq256Write8223455960103x2 Stripe 128KB

    Seq256Write8221755460103x2 Stripe 64KB

    Seq256Write8221055260103x2 Stripe 256KB





    Seq256Write16442311066001x6 Stripe 64KB

    Seq256Write1636979246051x5+1 Stripe 128KB

    Seq256Write1636889226051x5+1 Stripe 64KB

    Seq256Write1636749196051x5+1 stripe 64KB

    Seq256Write1636059016051x5+1 Stripe 256KB

    Seq256Write16223255860103x2 Stripe 128KB

    Seq256Write16221855460103x2 Stripe 64KB

    Seq256Write16221455360103x2 Stripe 256KB





    Seq256Write32442311066001x6 Stripe 64KB

    Seq256Write3237129286051x5+1 Stripe 128KB

    Seq256Write3236869226051x5+1 Stripe 64KB

    Seq256Write3236839216051x5+1 stripe 64KB

    Seq256Write3236309076051x5+1 Stripe 256KB

    Seq256Write32223755960103x2 Stripe 128KB

    Seq256Write32221655460103x2 Stripe 64KB

    Seq256Write32219154860103x2 Stripe 256KB

    Seq1024Read2148414846051x5+1 Stripe 128KB

    Seq1024Read2148114816051x5+1 Stripe 256KB

    Seq1024Read21468146860103x2 Stripe 128KB

    Seq1024Read21433143360103x2 Stripe 256KB

    Seq1024Read2140914096051x5+1 stripe 64KB

    Seq1024Read2138513856051x5+1 Stripe 64KB

    Seq1024Read2136713676001x6 Stripe 64KB

    Seq1024Read21323132360103x2 Stripe 64KB





    Seq1024Read81448144860103x2 Stripe 128KB

    Seq1024Read81444144460103x2 Stripe 256KB

    Seq1024Read8143314336051x5+1 Stripe 64KB

    Seq1024Read8143214326051x5+1 stripe 64KB

    Seq1024Read8139713976001x6 Stripe 64KB

    Seq1024Read81378137860103x2 Stripe 64KB

    Seq1024Read8137613766051x5+1 Stripe 128KB

    Seq1024Read8136013606051x5+1 Stripe 256KB





    Seq1024Read16143314336051x5+1 stripe 64KB

    Seq1024Read16143014306051x5+1 Stripe 64KB

    Seq1024Read16142414246001x6 Stripe 64KB

    Seq1024Read16139213926051x5+1 Stripe 128KB

    Seq1024Read161370137060103x2 Stripe 256KB

    Seq1024Read161367136760103x2 Stripe 128KB

    Seq1024Read16136113616051x5+1 Stripe 256KB

    Seq1024Read161342134260103x2 Stripe 64KB





    Seq1024Read32143114316001x6 Stripe 64KB

    Seq1024Read32142714276051x5+1 stripe 64KB

    Seq1024Read32141414146051x5+1 Stripe 64KB

    Seq1024Read32139313936051x5+1 Stripe 128KB

    Seq1024Read321372137260103x2 Stripe 256KB

    Seq1024Read321369136960103x2 Stripe 128KB

    Seq1024Read32135913596051x5+1 Stripe 256KB

    Seq1024Read321339133960103x2 Stripe 64KB

    Seq1024Write2110311036001x6 Stripe 64KB

    Seq1024Write29259256051x5+1 Stripe 128KB

    Seq1024Write29229226051x5+1 Stripe 64KB

    Seq1024Write29189186051x5+1 stripe 64KB

    Seq1024Write29189186051x5+1 Stripe 256KB

    Seq1024Write255955960103x2 Stripe 128KB

    Seq1024Write255455460103x2 Stripe 64KB

    Seq1024Write254454460103x2 Stripe 256KB





    Seq1024Write8110511056001x6 Stripe 64KB

    Seq1024Write89279276051x5+1 Stripe 128KB

    Seq1024Write89219216051x5+1 Stripe 64KB

    Seq1024Write89199196051x5+1 stripe 64KB

    Seq1024Write89139136051x5+1 Stripe 256KB

    Seq1024Write855855860103x2 Stripe 128KB

    Seq1024Write855555560103x2 Stripe 256KB

    Seq1024Write855455460103x2 Stripe 64KB





    Seq1024Write16110311036001x6 Stripe 64KB

    Seq1024Write169279276051x5+1 Stripe 128KB

    Seq1024Write169219216051x5+1 Stripe 64KB

    Seq1024Write169199196051x5+1 stripe 64KB

    Seq1024Write169039036051x5+1 Stripe 256KB

    Seq1024Write1655955960103x2 Stripe 128KB

    Seq1024Write1655455460103x2 Stripe 64KB

    Seq1024Write1655455460103x2 Stripe 256KB





    Seq1024Write32110711076001x6 Stripe 64KB

    Seq1024Write329289286051x5+1 Stripe 128KB

    Seq1024Write329219216051x5+1 Stripe 64KB

    Seq1024Write329199196051x5+1 stripe 64KB

    Seq1024Write329149146051x5+1 Stripe 256KB

    Seq1024Write3256056060103x2 Stripe 128KB

    Seq1024Write3255555560103x2 Stripe 64KB

    Seq1024Write3255255260103x2 Stripe 256KB

    *I mean real RAM, not SSD's used as cache.

    ETA: The two spindle RAID 50's results are because the faster ones are from 300GB disks, and the slower ones are from 146GB disks.

  • yazalpizar_ (5/1/2012)

    With the proliferation of SSD disk and its costs going down, some RAID levels with low write performances are no longer so low. Put an SSD in your life. I've done it, both home, laptop and work. Is the best money can buy right now in order to increase performance several degrees. We are planning to do it also on our local servers, first on the test server and then on the main one if evertyhing is ok.

    Make sure you have good backups and be careful here. When SSDs die, they die ,and you may or may not get good notice.

    A quick read:

  • Good article!

    I would caution against using the term LUN when referring to anything other than SAN or NAS storage. I've never heard that term used for locally attached disks.

    As mentioned in a previous post or two RAID 0+1 and RAID 1+0 are quite different and I would caution any SQL DBA against using RAID 0+1. The risk of data loss is too great and you increase the chance that you will get a 2:00AM call.

    I have several clustered instances utilizing a 123 TB 3Par SAN with 4 GB HBA dual port cards. In working with 3Par they claim to be consistently able to achieve > 53,000 IOPS. Of course this is a $3 million SAN but it gives an idea of what level of service can be provided. And that is with fiber channel drives. It gets even faster if you use SSD or a combination of fiber channel and SSD. For the most critical data it can be stored on the outer edge of the disks allowing for even faster seek times.

  • I don't see ever getting on board with having a production database running on a VM. Of course, things change but I cringe at the thought of this.


  • Steve Jones - SSC Editor (5/1/2012)

    yazalpizar_ (5/1/2012)

    With the proliferation of SSD disk and its costs going down, some RAID levels with low write performances are no longer so low. Put an SSD in your life. I've done it, both home, laptop and work. Is the best money can buy right now in order to increase performance several degrees. We are planning to do it also on our local servers, first on the test server and then on the main one if evertyhing is ok.

    Make sure you have good backups and be careful here. When SSDs die, they die ,and you may or may not get good notice.

    A quick read:

    Ew. That is unacceptable. I can see taking the risk with a personal machine but not a prod server. For my Thinkpad I met in the middle and bought a Hybrid drive. I have seen marked improvement in speeds but no SSD like speeds. However, I don't want to have to restore my laptop once a month so I'll stick with it.


  • Steve Jones - SSC Editor (5/1/2012)

    Make sure you have good backups and be careful here. When SSDs die, they die ,and you may or may not get good notice.

    A quick read:

    Agreed; SSD's can go down very quickly. Then again, a head crash on a spindle drive is pretty quick also. We put both spinning disks and SSD's in a mix of RAID 5, RAID 1, RAID 10, and RAID 50, depending on what's required.

    In all cases, you should have a level of fault tolerance that matches your organization's risk tolerance and budget. Having backups that you've actually restored somewhere to prove they really do work is also critical.

    FYI: I anticipate starting to use RAID 6 in the near future, as the two drives worth of parity should help protect against the _lovely_ case of 1 drive with data corruption followed by a different drive failing catastrophically, or the even more wonderful case of multiple drives having data corruption on differing sectors, and then any of them failing catastrophically.

  • As a warning: run consistency checks on your RAID sets; see if you've got one or more drives that has corrupt data that your RAID level is hiding from you. Better, more modern controllers do this automatically, old ones don't.

Viewing 15 posts - 16 through 30 (of 95 total)

You must be logged in to reply to this topic. Login to reply