RAID and Its Impact on Your SQL Performance

  • The spinning never stops 🙂

    So when the seek part is done, it will have to complete half a rotation on average for the correct part of the disc to come under the head.

  • Correct.

    Gregory A Jackson MBA, CSM

  • palesius 61659 (5/29/2014)


    The spinning never stops 🙂

    So when the seek part is done, it will have to complete half a rotation on average for the correct part of the disc to come under the head.

    Can the disc described in the article, a 15kRPM disc with 1.83ms average rotational latency, possibly exist if that average latency is supposed to be for random (rather than serial) transfers? What proportion of transfers would have to be in serial or almost serial sequences to get the rotational latency down from half a rev (2.0ms, as it's 15kRPM) to 1.83ms?
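
    To put a number on that, here is a quick back-of-the-envelope check in Python (the 15kRPM and 1.83ms figures are from the article; the assumption that near-serial transfers wait roughly 0ms is mine):

    ```python
    # Rotational latency check for a 15,000 RPM disc.
    rpm = 15_000
    rev_time_ms = 60_000 / rpm           # 4.0 ms per full revolution
    random_latency_ms = rev_time_ms / 2  # 2.0 ms average wait for a random sector

    quoted_latency_ms = 1.83             # average latency quoted in the article

    # If near-serial transfers wait ~0 ms and random ones average 2.0 ms,
    # the fraction f of near-serial transfers needed to average 1.83 ms is:
    #   (1 - f) * 2.0 + f * 0.0 = 1.83  =>  f = 1 - 1.83 / 2.0
    f = 1 - quoted_latency_ms / random_latency_ms
    print(f"Near-serial fraction needed: {f:.1%}")   # about 8.5%
    ```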

    Another interesting question is how the seek time is measured. I would imagine that with a decent RAID controller there is enough battery-backed storage to permit enough write buffering that sweep scheduling is usable, and then the average seek time will go down as the average transfer queue length grows. Sweep scheduling for discs was proposed by Deniston (I think that was his name) in about 1965, and he showed that it provided significant performance benefits; it's described in Wikipedia as the LOOK algorithm, with no attribution. C-LOOK was also described (but not called that) by Deniston (?) as something that would alleviate the "unfairness" of the original algorithm, as was the shortest-seek-first algorithm, which he rejected because it can result in starvation of processes requiring transfers to the edge areas of the disc. Of course any of the more modern variants could be used in a RAID controller instead.
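
    For reference, a minimal sketch of the LOOK and C-LOOK orderings described above (the head position and request queue are made-up cylinder numbers, just to show the ordering):

    ```python
    def look_schedule(head, requests):
        """LOOK: sweep upward servicing pending requests in cylinder order,
        then reverse at the furthest pending request rather than at the disc edge."""
        up = sorted(r for r in requests if r >= head)
        down = sorted((r for r in requests if r < head), reverse=True)
        return up + down

    def clook_schedule(head, requests):
        """C-LOOK: sweep upward only, then jump back to the lowest pending
        request and sweep upward again, which spreads service more evenly."""
        up = sorted(r for r in requests if r >= head)
        wrapped = sorted(r for r in requests if r < head)
        return up + wrapped

    queue = [82, 170, 43, 140, 24, 16, 190]
    print(look_schedule(50, queue))   # [82, 140, 170, 190, 43, 24, 16]
    print(clook_schedule(50, queue))  # [82, 140, 170, 190, 16, 24, 43]
    ```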

    Tom

  • The idea that the RAID 5 (or RAID 6) performance disadvantage disappears because of the buffering is a bit careless, as it doesn't consider recovery times when a disc goes down: the period during which a RAID 5 is vulnerable because it has lost a disc is much greater than the same period for a RAID 10 array, because it takes much longer to recover using a parity calculation than to do a straight copy. During that vulnerable period the performance deterioration is much greater than the deterioration of a RAID 10 array which is recovering a disc. So for large arrays, where disc failures are not particularly rare simply because the array is large, RAID 5 will give rare but long periods of very poor performance, while a RAID 10 array designed to have the same data capacity using the same disc model will give less rare (about twice as frequent) periods of less degraded performance (the degradation is a small fraction of the deterioration during the RAID 5's recovery, and the RAID 10 recovery time doesn't increase with array size while the RAID 5 recovery time does), and those periods will be of much smaller duration (each recovery will take half the time or less of a RAID 5 recovery, even for small arrays). This isn't quite as bad for RAID 6 or RAID-DP, but it is still pretty bad.
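
    As a rough illustration of why the RAID 5 rebuild window grows with array size while the RAID 10 window doesn't, here is a sketch counting the I/O a rebuild has to do (the 600GB disc size and the disc counts are made-up examples, not from the article):

    ```python
    # Rough rebuild I/O comparison; disc size and counts are illustrative.
    disc_gb = 600

    def raid10_rebuild_gb(n_discs):
        # Straight copy: read the surviving mirror partner, write the new disc.
        # Independent of how many mirrored pairs the array has.
        return 2 * disc_gb

    def raid5_rebuild_gb(n_discs):
        # Parity reconstruction: read every surviving disc in full, XOR the
        # stripes, and write the new disc - the I/O scales with the disc count.
        return (n_discs - 1) * disc_gb + disc_gb

    for n in (6, 10, 16):
        print(f"{n:>2} discs: RAID 10 rebuild touches {raid10_rebuild_gb(n)} GB, "
              f"RAID 5 rebuild touches {raid5_rebuild_gb(n)} GB")
    ```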

    The idea that a RAID 10 with the same storage volume as a RAID 5, using the same disc model, has worse write performance than the RAID 5 is pure myth. Historically it originated when a gentleman (who shall remain nameless; everyone makes the odd mistake) at a university noticed that a RAID 10 with 6 discs had worse write performance than a RAID 5 with 6 discs, and then confused himself into writing in one of the trade rags that this meant a RAID 10 had worse write performance than a RAID 5 with the same capacity, forgetting temporarily that to get the same data capacity as a 6-disc RAID 5 the RAID 10 would need 10 discs (which is of course where the big RAID 10 cost penalty comes in). Then a bunch of people who couldn't do arithmetic read it, ignored any subsequent retraction or counterargument, and "knew" for ever after that the mythical RAID 10 write penalty really existed. I used to be amused regularly to see claims that measurements prove this mythical penalty exists - for example comparing a local RAID 5 using 15kRPM discs to a remote RAID 10 on a storage network using 10kRPM discs, or comparing a nine-disc RAID 5 of 100GB discs to an eight-disc RAID 10 of 200GB discs - but I haven't seen one of those for a long time now.
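
    The arithmetic is easy to put in a sketch, using the usual small-random-write penalties (4 physical I/Os per RAID 5 write, 2 per RAID 10 write) and an assumed 180 IOPS per 15kRPM disc, neither of which comes from the article:

    ```python
    # Capacity-matched small-random-write comparison.
    iops_per_disc = 180                    # assumed round number for a 15kRPM disc

    raid5_discs = 6
    raid5_usable = raid5_discs - 1         # 5 discs' worth of data
    raid10_discs = 2 * raid5_usable        # 10 discs needed for the same capacity

    raid5_write_iops = raid5_discs * iops_per_disc / 4    # 4 physical I/Os per write
    raid10_write_iops = raid10_discs * iops_per_disc / 2  # 2 physical I/Os per write

    print(f"{raid5_discs}-disc RAID 5  : {raid5_usable} discs usable, "
          f"~{raid5_write_iops:.0f} random write IOPS")
    print(f"{raid10_discs}-disc RAID 10: {raid5_usable} discs usable, "
          f"~{raid10_write_iops:.0f} random write IOPS")
    ```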

    Tom

  • Nice and excellent info.

  • This article was written a long time ago, and I don't remember the exact specs at the time, but in short... yes.

    LOTS of disks.

    GAJ

    Gregory A Jackson MBA, CSM
