Running SQL Server program files on RAID10 - is this necessary for performance?

  • I know that you should run SQL data files on RAID10, which I have.

    I also have my log files and TempDB files on separate RAID1 arrays.

    Is it necessary to run the 'C' drive program and o/s files on RAID10, or am I better off just continuing to add hard drives to my RAID10?

    Should I use a separate RAID1 for my binaries, and another separate RAID1 for my FTI catalogs?

    Thanks

  • isuckatsql (2/27/2012)


    I know that you should run SQL data files on RAID10, which I have.

    I've run many large systems on RAID 5 with limited issues. Failure exposure is higher, but the controllers have removed a lot of the speed issues with it. RAID 10 can still double up on reads whereas RAID 5 can't, but that's another story, and it will depend more on the cache than on the spindles.

    RAID arrays are a balance of price to performance. Always will be. TempDB I/O is highly random and bursty, and log files need high-volume write speed, so putting these on RAID 1 may not be your best choice unless the arrays are dedicated. For example, on a dedicated disk the log head doesn't have to move to write; it's sitting right where it needs to be, so you avoid hardware seek times and the like. However, as cache improves in SANs and the like, how much this affects you will vary.

    The OS/C: drive stuff I usually just put on RAID 1.

    Should I use a separate RAID1 for my binaries, and another separate RAID1 for my FTI catalogs?

    Binaries? Of what?

    By FTI I assume you mean Full-Text Indexes. Yes, I usually find it better to put those indexes into their own filegroup on a separate spindle set. However, you might get more performance off a large RAID 10 doing double duty than off a single RAID 1 pair. This will depend on drive volume.
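
    A minimal sketch of that, assuming a hypothetical database MyDB, a spare drive letter, and placeholder filegroup/catalog names (note the ON FILEGROUP placement clause only affects catalog storage on SQL Server 2005 and earlier; from 2008 on, the catalog is purely logical):

        -- Filegroup backed by a file on the dedicated spindle set (paths and sizes are placeholders)
        ALTER DATABASE MyDB ADD FILEGROUP FTFG;
        ALTER DATABASE MyDB
        ADD FILE (NAME = MyDB_ft1, FILENAME = 'G:\FTData\MyDB_ft1.ndf', SIZE = 1GB, FILEGROWTH = 256MB)
        TO FILEGROUP FTFG;
        GO
        -- Full-text catalog pointed at that filegroup (honored on SQL 2005 and earlier)
        CREATE FULLTEXT CATALOG ProductCatalog ON FILEGROUP FTFG;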


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • I usually have the OS and SQL Server binaries on a separate drive from the data files or log files.

    What I'm mostly looking at these days is:

    C: (internal drive) OS and SQL binaries on RAID-1

    D: (LUN on SAN) data files, usually RAID-5

    E: (Separate LUN on SAN, not the same spindles as D) log files, usually RAID-5

    F: (Another separate LUN) backup files (usually copied to tape or other removable storage, or shipped to a bulk storage facility; either way, they go offsite soon after being written), RAID-5 or RAID-1 depending on capacity needs

    Something like that.

    Yeah, RAID-5 is slower for small, random writes, but with a modern SAN you probably won't be able to measure the difference.

    If it's not on a SAN, then RAID-10 is probably better.
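
    For illustration, a bare-bones sketch of that layout at database-creation time (MyDB, the paths, and the sizes are placeholders; the drive letters follow the list above):

        -- Data on the D: LUN, log on the separate E: LUN
        CREATE DATABASE MyDB
        ON PRIMARY
            (NAME = MyDB_data, FILENAME = 'D:\SQLData\MyDB_data.mdf', SIZE = 10GB, FILEGROWTH = 1GB)
        LOG ON
            (NAME = MyDB_log, FILENAME = 'E:\SQLLogs\MyDB_log.ldf', SIZE = 2GB, FILEGROWTH = 512MB);
        GO
        -- Backups land on the dedicated F: LUN and get swept offsite from there
        BACKUP DATABASE MyDB TO DISK = 'F:\Backups\MyDB_full.bak' WITH INIT, CHECKSUM;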

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Thanks for the feedback!

    Is running the o/s on SSD worth the money?

  • For (almost) pure reads, which is what the O/S and SQL Server program files are, RAID5 should be faster than RAID10 with the same number of spindles in each RAID.

    So, I'd say use RAID5 for that.

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • But RAID5 takes a lot longer to rebuild after a failure, no?

  • Not 100% sure; I don't think it's a lot longer. Neither one is really very fast at rebuilding 🙂.

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • isuckatsql (2/28/2012)


    Thanks for the feedback!

    Is running the o/s on SSD worth the money?

    Wouldn't bother using SSD for the O/S on a server, unless you haven't got any other plan than rebooting that server if it crashes. The main advantage of SSD for the O/S is a faster boot time, which really won't matter much if it's failed over to a passive node on a cluster, or to a remote server, or anything like that.

    Database-wise, I'd be more inclined to put tempdb on SSD. But that won't get you much compared to a good SAN with a large cache. Less latency, but that's usually trivial.
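
    If you do try it, moving tempdb is just a metadata change plus a service restart; a sketch, assuming a hypothetical S: SSD volume and the default logical file names:

        -- Repoint tempdb's files at the SSD volume; takes effect at the next service restart
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\TempDB\templog.ldf');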

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (2/29/2012)


    isuckatsql (2/28/2012)


    Thanks for the feedback!

    Is running the o/s on SSD worth the money?

    Wouldn't bother using SSD for the O/S on a server, unless you haven't got any other plan than rebooting that server if it crashes. The main advantage of SSD for the O/S is a faster boot time, which really won't matter much if it's failed over to a passive node on a cluster, or to a remote server, or anything like that.

    Database-wise, I'd be more inclined to put tempdb on SSD. But that won't get you much compared to a good SAN with a large cache. Less latency, but that's usually trivial.

    For a dedicated SQL Server that's not paging, the OS drive does very little; give it some high availability (RAID1, RAID10, RAID5, RAID6, etc.) and leave it alone.

    +1 on local SSD's with modern (<2 years old) controllers for tempdb. My personal opinion is that local SSD's can be a very reasonable exception to the general "separate your data and log files" rule, as their random performance, while still slower than their sequential performance, is still extremely high; if you can afford dedicated separate sets of them, even better.

    GSquared, I'd disagree that you won't get much compared to a good SAN; I've seen a set of 6 SSD's in RAID5 perform large reads (both random and sequential) at ~1.3GB/s sustained throughput with high outstanding IO's, which is half again as much as 8Gbps fiber can handle under any circumstances... and if that's coming off of local disks, it's in addition to the throughput your SAN can provide. Thus, on workloads using a combination of SAN and local SSD, the total throughput is the SAN throughput plus the local SSD throughput, which may well be much higher than your SAN throughput by itself.

    Note that those SSD's did large-block writes in the ~120MB/s range for random writes and the ~900MB/s range for sequential (log) writes; the sequential rate is still in excess of what a single 8Gbps fiber connection will handle.

    In general, I'd put tempdb on local SSD's first, and then, if space and failure modes permit, start putting your most active clustered and nonclustered indexes on new filegroups on the SSD's one at a time, particularly those used in large JOINs. That spreads the load between the SSD's and the SAN; if your SAN has the performance for it, it can then use the portion of the fiber throughput freed up by the local SSD's for the remaining data, giving you a net gain.
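
    A rough sketch of moving one such index (the database, filegroup, table, and index names are all placeholders); DROP_EXISTING rebuilds it straight onto the SSD filegroup in a single operation:

        -- Filegroup backed by a file on the local SSD's (path and sizes are placeholders)
        ALTER DATABASE MyDB ADD FILEGROUP SSD_FG;
        ALTER DATABASE MyDB
        ADD FILE (NAME = MyDB_ssd1, FILENAME = 'S:\SQLData\MyDB_ssd1.ndf', SIZE = 20GB, FILEGROWTH = 2GB)
        TO FILEGROUP SSD_FG;
        GO
        -- Rebuild a hot nonclustered index onto the SSD filegroup in one step
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
            ON dbo.Orders (CustomerID)
            WITH (DROP_EXISTING = ON)
            ON SSD_FG;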

    P.S. Use SQLIO to benchmark various types and sizes of reads and writes on your particular systems; you might be surprised at the differences between RAID10, RAID5, and RAID50 (particularly with SSD's, but also with modern SAS drives; they don't always follow the paths conventional wisdom predicts).
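
    As a complement on the live-workload side, the per-file IO stats DMV is a quick way to see which existing files are actually waiting on the disks before you decide what to move (a sketch; trim the column list to taste):

        -- Per-file read/write volume and stall time since the last SQL Server restart
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.physical_name,
               vfs.num_of_reads, vfs.num_of_writes,
               vfs.io_stall_read_ms, vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
            ON mf.database_id = vfs.database_id
           AND mf.file_id = vfs.file_id
        ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;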

  • Nadrek (3/1/2012)


    GSquared (2/29/2012)


    isuckatsql (2/28/2012)


    Thanks for the feedback!

    Is running the o/s on SSD worth the money?

    Wouldn't bother using SSD for the O/S on a server, unless you haven't got any other plan than rebooting that server if it crashes. The main advantage of SSD for the O/S is a faster boot time, which really won't matter much if it's failed over to a passive node on a cluster, or to a remote server, or anything like that.

    Database-wise, I'd be more inclined to put tempdb on SSD. But that won't get you much compared to a good SAN with a large cache. Less latency, but that's usually trivial.

    For a dedicated SQL Server that's not paging, the OS drive does very little; give it some high availability (RAID1, RAID10, RAID5, RAID6, etc.) and leave it alone.

    +1 on local SSD's with modern (<2 years old) controllers for tempdb. My personal opinion is that local SSD's can be a very reasonable exception to the general "separate your data and log files" rule, as their random performance, while still slower than their sequential performance, is still extremely high; if you can afford dedicated separate sets of them, even better.

    GSquared, I'd disagree that you won't get much compared to a good SAN; I've seen a set of 6 SSD's in RAID5 perform large reads (both random and sequential) at ~1.3GB/s sustained throughput with high outstanding IO's, which is half again as much as 8Gbps fiber can handle under any circumstances... and if that's coming off of local disks, it's in addition to the throughput your SAN can provide. Thus, on workloads using a combination of SAN and local SSD, the total throughput is the SAN throughput plus the local SSD throughput, which may well be much higher than your SAN throughput by itself.

    Note that those SSD's did large-block writes in the ~120MB/s range for random writes and the ~900MB/s range for sequential (log) writes; the sequential rate is still in excess of what a single 8Gbps fiber connection will handle.

    In general, I'd put tempdb on local SSD's first, and then, if space and failure modes permit, start putting your most active clustered and nonclustered indexes on new filegroups on the SSD's one at a time, particularly those used in large JOINs. That spreads the load between the SSD's and the SAN; if your SAN has the performance for it, it can then use the portion of the fiber throughput freed up by the local SSD's for the remaining data, giving you a net gain.

    P.S. Use SQLIO to benchmark various types and sizes of reads and writes on your particular systems; you might be surprised at the differences between RAID10, RAID5, and RAID50 (particularly with SSD's, but also with modern SAS drives; they don't always follow the paths conventional wisdom predicts).

    Good to know. I haven't had a chance to test that heavy-duty an SSD setup.

    I like the idea of heavy-use indexes on local SSDs. Not sure I'd put clustered indexes on there, since that's the whole table and you don't end up with the fault-tolerance of a good SAN that way, but I'm not sure I wouldn't either. I'd have to do a bit more research on that one.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (3/2/2012)


    Good to know. I haven't had a chance to test that heavy-duty an SSD setup.

    I like the idea of heavy-use indexes on local SSDs. Not sure I'd put clustered indexes on there, since that's the whole table and you don't end up with the fault-tolerance of a good SAN that way, but I'm not sure I wouldn't either. I'd have to do a bit more research on that one.

    I would fully expect the local SSD's to be set up in RAID 1, RAID 5, RAID 6, RAID 10, or RAID 50, and thus provide the same level of local drive-failure fault tolerance as any single SAN RAIDset. FusionIO, OCZ Z-Drive, and other PCIe SSD's do present a challenge here, in that a very cursory review appears to show that they're limited to software RAID at this time.

    I'll certainly accept a hesitation to put clustered indexes on SSD's early in an organization's use of them. A reasonable "step in carefully" path would be tempdb first, nonclustered indexes second, clustered indexes and heaps third, non-tempdb databases and log files last.
