best practice disk arrays

  • It is my understanding that an ideal disk drive arrangement for SQL2K is a three-channel RAID controller with:

    Channel A: holding an operating system partition and a program files partition, including the SQL2K install

    Channel B: RAID 5 holding the transaction log(s)

    Channel C: RAID 0 for tempdb

    If this is the case, then why can't I find a major server vendor with these options available in a single server?

    Many thanks,

  • Most offer a controller for the backplane and a controller for the main storage area. You can then add another controller and attach a storage box. If you need this level of performance, i.e. you are reaching the limit of the channel, then you will be using a lot of disks, and that number of disks will always be external.

    Furthermore, the generally accepted config is:

    RAID 1 OS and Binaries

    RAID 1 Log File

    RAID 10 Data

    The latter can also hold tempdb, or tempdb can go on a new channel of its own, but again use RAID 10 if you are doing a lot of writes.
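
    As a rough sketch of how that separation maps onto the database files, assuming E:\ is the RAID 10 data array and F:\ is the RAID 1 log array (the drive letters, sizes, and the Sales database are just example names, not anything from this thread), the database would be created with its data and log pointed at the two arrays:

        -- Hypothetical layout: E:\ = RAID 10 data array, F:\ = RAID 1 log array
        CREATE DATABASE Sales
        ON PRIMARY
        (
            NAME = Sales_Data,
            FILENAME = 'E:\SQLData\Sales_Data.mdf',  -- data file on the RAID 10 set
            SIZE = 500MB,
            FILEGROWTH = 100MB
        )
        LOG ON
        (
            NAME = Sales_Log,
            FILENAME = 'F:\SQLLogs\Sales_Log.ldf',   -- log on its own RAID 1 mirror
            SIZE = 200MB,
            FILEGROWTH = 50MB
        )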

    Simon Sabin
    SQL Server MVP
    Co-author of SQL Server 2000 XML Distilled
    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088
    http://sqlblogcasts.com/blogs/simons

  • Not sure I'd call that ideal. I don't think you want to use RAID 5 for logs, which are primarily writes; it makes more sense to go with RAID 1. I've seen those recommendations for tempdb, but unless you're REALLY using tempdb I don't think it's worth it; better to add those drives to your data set.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • Thanks for your helpful replies. I have taken, or am taking, MS Courses 2072 and 2073 via NewHorizons.com, and what I put in the original post is exactly what the courseware and instructors are suggesting. As I am new to this, I wanted to get some more opinions.

  • I have always been told by vendors that it is better to have completely separate RAID controllers; if you only have one, then using channels is the next best thing.

    Log and database files get the best performance with good redundancy on RAID 10, but it is costly. Logs and databases should be on separate drives, away from each other and from the system files.

    RAID 0 for tempdb is good for performance, and since a restart of the service rebuilds it anyway, it doesn't matter if it is lost. However, if the array loses a drive you will effectively lose your server, since SQL Server does rely on tempdb; that means an impact on uptime.
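
    If you do put tempdb on its own array, moving it is simple; a minimal sketch, assuming the default logical file names tempdev and templog and a hypothetical T:\ drive for the dedicated array (the files are recreated at the new location the next time the service restarts):

        USE master
        GO
        -- Repoint tempdb's data and log files at the dedicated array (example T:\ drive)
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf')
        GO
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf')
        GO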

    As for system files and the base SQL install (not data files), put them on a RAID 1 array, as they see few changes you need to worry about and do not impact performance in a major way. Note: if you have apps on the box or processes that use the temp folder, change the temp folder's path to a more suitable location for best performance.

    Now, as for multi-channel cards for arrays, DELL has a PERC SCSI card available in their servers that supports multiple channels; we have at least two, maybe four, servers like this.

  • Okay, I agree with everything here, but what configuration is best when using a fiber-attached SAN?

    I've been working (fighting) with one of the development groups that built a big DB server (interesting how the DBA gets asked last). I recommended at least 2 LUNs on the SAN for the DB, plus the quorum drive of course.

    They basically gave me one 36 GB drive for everything. I'm not too happy, but I am being totally ignored.

    Should I be concerned? How have others set things up, and how do you recommend setting up SAN DASD?

    KlK, MCSE

  • As a general rule of thumb (before getting too complex).....

    The more performance you want - The more spindles (physical drives) you need!

    Also, with only a single drive, you have NO fault tolerance!!!


  • kknudson, tell them that's fine so long as they don't mind reduced database performance and the possibility of downtime. That will get their attention, and it's true, too.

  • Interesting! What about SQL 7: 3 x 18 GB drives, 1 controller, RAID 5, 2 logical drives, C: (4 GB) for the OS and D: (30 GB) for SQL. Is this bad practice?

    Far away is close at hand in the images of elsewhere.
    Anon.

  • I would suggest that BAD practice is to have no fault tolerance. Once you have fault tolerance in your system, any further improvements you can make are normally a trade-off between performance and budget.

    You have fault tolerance by having your installation on a RAID 5 array. Although it is not the fastest configuration for database performance, I would not call it BAD practice, just not optimal.

  • Sorry, I assumed all SANs are RAID'd so that part of it is taken care of. Our SAN is all RAID 5 or 10.

    Although I still don't like the DB and the Logs on the "same" drive.

    But I was wondering if anyone has done any performance testing with a SAN and come up with configuration recommendations.

    KlK, MCSE

  • A fiber-attached SAN is going to perform comparably to any locally attached storage.

    Most clustered solutions can only operate by utilising SANs, since the storage has to be shared, and they work fine using them.

    I wouldn't be too worried about data on a SAN. Check the write-back caching though: you need to ensure that the SAN will not be caching any unwritten transactions that could be lost in a power outage. Some SANs do have battery backup to ensure that the cache is committed in the case of power loss. Check yours!!

  • The thing you must watch out for is the fabric between the server and the SAN, especially in the case of fibre channel. I've got a fibre optic SAN on my development machine configured as RAID 5; however, I only have a 1 Gbit/s connection to it via a direct fibre-over-copper cable. To get the performance benefits you should have a fibre switch and fibre connections to your server. My experience is that this sort of storage works as well as SCSI RAID arrays. I have heard bad things, though, about using SQL with things like Snap servers where the storage is on the network...

    HTH, Simon

  • We're looking at SANs here, so it will be interesting to see how they perform.

    Our standard for the data center:

    1. RAID 1 - OS + SQL OS + pagefile

    2. RAID 1 - Logs

    3. RAID 5 - Data + backups.

    That's the basics and it works. RAID 5 is worse than RAID 1 for writes, but you have to test and make a tradeoff. The vast majority of systems I've worked with have way more reads than writes, and RAID 5 works well. Test and check to see if you are disk queuing to find a bottleneck.
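
    Beyond watching the PhysicalDisk queue length counters in Perfmon, a quick way to see which file is stalling is fn_virtualfilestats; a minimal sketch, assuming SQL 2000, a database called MyDatabase, and file ids 1 (data) and 2 (log) as examples (check sp_helpfile for the real ids):

        -- Cumulative reads, writes, and I/O stall time per file since the instance started
        DECLARE @dbid int
        SET @dbid = DB_ID('MyDatabase')   -- MyDatabase is an example name
        SELECT FileId, NumberReads, NumberWrites, IoStallMS
        FROM ::fn_virtualfilestats(@dbid, 1)   -- file id 1 = primary data file
        UNION ALL
        SELECT FileId, NumberReads, NumberWrites, IoStallMS
        FROM ::fn_virtualfilestats(@dbid, 2)   -- file id 2 = log file
        -- A large IoStallMS relative to the read/write counts points at the array holding that file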

    If you make heavy use of tempdb (you have to check; not all apps do), then separating tempdb out to another RAID array is a good idea.

    I like the logical drive separation so the db growth doesn't impact the OS. Still have to watch it, but at least the OS won't crash.
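
    If the data file does have to share a drive with anything else, capping its growth is one way to make sure it can't fill that drive; a minimal sketch, assuming a Sales database with a Sales_Data file and a 20 GB cap (all example values):

        -- Cap the data file so uncontrolled growth can't fill the shared drive
        ALTER DATABASE Sales
            MODIFY FILE (NAME = Sales_Data, MAXSIZE = 20GB)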

    Steve Jones

    sjones@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/sjones

  • I like that.

    I have a question though, and I guess it could be related. I've worked with a lot of EMC fiber systems using their own EMC fibre channel cards, and I guess I just assumed that SAN fiber arrays would be comparable. Is it just the network connection that slows them down? Why wouldn't they perform comparably, in a nutshell?
