A SAN Primer

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/hscott/asanprimer.asp

  • Excellent article - one of those technologies I've never had any hands-on experience with. I'd read about it, wondered even more, and hoped that if I were asked about it in a meeting with a customer and their IT support people, I wouldn't sound silly... Now I think I'll at least be able to have a meaningful discussion. Thank you!

  • Experience has shown (for me) that where the SQL Servers are connected to a company SAN used for many other servers, performance is seriously lacking compared to DAS. Contention from shared disks has proved to be a problem, although it took much digging before the physical setup of the disks was admitted.

    Performance counters are very difficult, as there tends to be a reliance on the SAN provider to supply the data, and they appear reluctant to ever accept that there may be an issue.

    I have monitored trends, but the counter values themselves are worthless compared to their real values with DAS, which makes it difficult, for example, to calculate the I/O limits. On a shared 5-spindle RAID 5 I can record values in the thousands for I/O and in the hundreds for the queues. I know these values are meaningless and should only be taken as an indication, but what would be a baseline, and what does a disk queue value of 268 actually represent?
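
    One thing that can be compared without the SAN team's help is latency per I/O from inside SQL Server, since it doesn't depend on knowing what actually sits behind the LUN. A rough sketch (assumes SQL 2005's sys.dm_io_virtual_file_stats; SQL 2000 has the older ::fn_virtualfilestats function):

    [code]
    -- Cumulative per-file I/O stall times since the last service restart.
    -- Take two snapshots some time apart and diff them for a rate.
    SELECT  DB_NAME(vfs.database_id)                                   AS database_name,
            vfs.file_id,
            1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
            1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms,
            vfs.num_of_reads,
            vfs.num_of_writes
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY database_name, vfs.file_id;
    [/code]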

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Storage setups (in a SAN) are an art unto themselves. The usual way to get around contention for disk resources is to zone the SAN. This is akin to setting up a VLAN (short story: a private network for certain servers).

    You can usually get better performance with direct attached storage, up to a point. Multi-terabyte data warehouse databases and the like may underperform on DAS, where on a properly set up SAN they will shine. Plus, you can't beat the redundancy of a properly set up SAN for overcoming hardware failures.

  • Hi all,

    I enjoyed the article. The information here is accurate. I am in the middle of a SAN implementation; we have purchased and taken delivery of our SAN. We have 14 x 146GB (15K) FC drives and 14 x 250GB (10K) SATA drives.

    In planning our SQL SAN implementation we were advised that for SQL, it was better not to just create a huge LUN and divvy up the space into logical drives. We will be creating mirrored drives for both the database and log files. The SATA space is where we will do snapshot and D2D backups of our data. The SATA area will also provide some sandbox functionality for our IT and programming department.

    We were told that mirrored sets were the way to go for performance. Any comments?

  • I got the best performance by creating multiple mirrorsets instead of one big RAID 10 set (striped mirrors). I created a filegroup made up of a file on each Windows drive (one per mirrorset).

    I found that the Average Disk Queue Length did best when Windows was throwing I/O at multiple Windows drives. The RAID array handled the I/O fine with either one big drive or multiple small ones, but Windows choked on a single large drive (Avg. Disk Queue Lengths approaching 900 - the recommended target is 5). It looks like the Windows I/O subsystem is the bottleneck (which any Unix admin will tell you).
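
    For anyone setting this up, a minimal sketch of what that looks like in T-SQL (database name, drive letters, and sizes are just examples; each drive letter is assumed to be its own two-disk mirrorset):

    [code]
    -- One data file per Windows drive, all in a single user filegroup,
    -- so SQL Server spreads allocations across the files (proportional fill).
    CREATE DATABASE SalesDB
    ON PRIMARY
        ( NAME = SalesDB_sys,   FILENAME = 'J:\MSSQL\Data\SalesDB_sys.mdf',   SIZE = 200MB ),
    FILEGROUP DataFG
        ( NAME = SalesDB_data1, FILENAME = 'K:\MSSQL\Data\SalesDB_data1.ndf', SIZE = 20GB ),
        ( NAME = SalesDB_data2, FILENAME = 'L:\MSSQL\Data\SalesDB_data2.ndf', SIZE = 20GB ),
        ( NAME = SalesDB_data3, FILENAME = 'M:\MSSQL\Data\SalesDB_data3.ndf', SIZE = 20GB )
    LOG ON
        ( NAME = SalesDB_log,   FILENAME = 'I:\MSSQL\Log\SalesDB_log.ldf',    SIZE = 4GB );

    -- Make the multi-file filegroup the default so new objects land on it.
    ALTER DATABASE SalesDB MODIFY FILEGROUP DataFG DEFAULT;
    [/code]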

  • Chris,

    That advice and experience matches what we were told by the vendor (IBM). They recommended two mirrored sets for each SQL Server: one for the database, one for the logs. Mirroring for redundancy and fault tolerance.

    We had also looked at the RAID 10 option and decided against it.

    Thanks for the input.

  • I actually went further. Here's the original setup:

    1 Mirrorset - Transaction Logs (2 disks). Windows Drive I:

    1 RAID10 - Data - MDF and NDF file (8 disks). Windows Drive J:

     

    I was getting bad performance, with huge Average Disk Queue Lengths, so I split up the disks like this:

    1 Mirrorset - Transaction Logs (2 disks). Windows Drive I:

    1 Mirrorset - Data - one MDF and one NDF file (2 disks). Windows Drive J:

    1 Mirrorset - Data - one NDF file (2 disks). Windows Drive K:

    1 Mirrorset - Data - one NDF file (2 disks). Windows Drive L:

    1 Mirrorset - Data - one NDF file (2 disks). Windows Drive M:

    (All NDF files were in one filegroup, so all tables, indexes, etc. were 'striped' across those four files.) The other option is to place objects like tables on specific files (in their own filegroup) and their non-clustered indexes on other filegroups - see the sketch below.

    Windows performed much better, with Average Disk Queue Lengths (for J, K, L, and M combined) around 20, as opposed to Avg. Disk Queue Lengths of up to 900 when the same physical disks were in a RAID 10 set (and one Windows drive letter, J).
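
    For completeness, a rough sketch of the 'other option' mentioned above - separate filegroups for a table and its non-clustered indexes (continuing the made-up SalesDB example; all names, paths, and sizes are invented):

    [code]
    -- Filegroup for table (clustered index) data.
    ALTER DATABASE SalesDB ADD FILEGROUP Tables_FG;
    ALTER DATABASE SalesDB ADD FILE
        ( NAME = Sales_tables1, FILENAME = 'K:\MSSQL\Data\Sales_tables1.ndf', SIZE = 10GB )
    TO FILEGROUP Tables_FG;

    -- Filegroup for non-clustered indexes.
    ALTER DATABASE SalesDB ADD FILEGROUP Indexes_FG;
    ALTER DATABASE SalesDB ADD FILE
        ( NAME = Sales_ix1, FILENAME = 'L:\MSSQL\Data\Sales_ix1.ndf', SIZE = 5GB )
    TO FILEGROUP Indexes_FG;

    -- The table (and its clustered index) on one filegroup...
    CREATE TABLE dbo.Orders
    (
        OrderID   int      NOT NULL,
        OrderDate datetime NOT NULL,
        CustID    int      NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
    ) ON Tables_FG;

    -- ...and its non-clustered indexes on another.
    CREATE NONCLUSTERED INDEX IX_Orders_CustID
        ON dbo.Orders (CustID) ON Indexes_FG;
    [/code]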

     

    BTW, stuff like the above is where it is good for DBAs to have a solid understanding of SAN architecture, or at least a good storage/systems engineer to work with. My background was systems engineering before becoming a DBA, and I think it has served me well.

  • Excellent article, thanks for the advice!

  • I have to agree that, in my experience, DAS is a better solution in some cases. I recently worked on a project where the databases ranged up to 1.5TB in size. Most of the work was specialized data mining processing. When we were on a 4-way, 8GB box with SAN drives, a subset of our standard queries took 4+ hours to run, even though we were the only user of the SAN during the test period.

    I then pushed to create a 2-way, 4GB test box with a lot of DAS. We went with RAID 10 storage for tables and indexes and RAID 1 for log files. TempDB had its own RAID 10 volume, using multiple files to avoid hotspots. The indexes and tables had their own filegroups comprised of multiple files. After the data was loaded onto that box and all of the tables and indexes were put on the appropriate filegroups, we reran our test. It ran in less than 40 minutes.
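
    In case it helps anyone building something similar, the tempdb piece of that layout looks roughly like this (paths and sizes are invented, and the number of files is just a starting point):

    [code]
    -- Spread tempdb over several equally sized data files on its dedicated
    -- RAID 10 volume to reduce allocation-page (PFS/SGAM) contention.
    -- (Relocating the existing tempdev/templog files is a separate
    -- MODIFY FILE ... FILENAME change and takes effect after a service restart.)
    ALTER DATABASE tempdb ADD FILE
        ( NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 4GB );
    ALTER DATABASE tempdb ADD FILE
        ( NAME = tempdev3, FILENAME = 'T:\TempDB\tempdb3.ndf', SIZE = 4GB );
    ALTER DATABASE tempdb ADD FILE
        ( NAME = tempdev4, FILENAME = 'T:\TempDB\tempdb4.ndf', SIZE = 4GB );
    [/code]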

    Both environments were retested, and we verified that indexes and the like were the same and optimized. The conclusion I came to was that if the administration overhead of the DAS boxes was worth the performance increase, then DAS was the way to go. On the other hand, if ease of backups and the ability to make quick copies of production was necessary, then the SAN had a lot to offer.

  • There are a couple of points I think need to be clarified.

     

    In the article, two of the three types of storage are treated the same.

     

    SAN is Storage Area Network.

    NAS is Network Attached Storage. 

     

    They are not the same.

     

    A NAS uses the same network as regular Ethernet traffic.  Disk I/O is contending with Google searches, business applications, and downloads from sites we never, ever visit. A SAN uses an entirely separate dedicated network based on Fibre Channel to do all I/O. There is no non-storage I/O contention. 

     

    The Storage Networking Industry Association (SNIA) is a very good source for SAN and NAS information (http://www.snia.org). Particularly helpful is their Technical Tutorials page at http://www.snia.org/education/tutorials/

     

    Do you need to “build the physical arrays” (i.e. mapping a set of drives to act together, sometimes known as ‘binding’) or not?

     

    It is entirely dependent on the SAN vendor. There are vendors whose systems do not require any construction of physical arrays, either by hand or behind the scenes, and do so without added complexity or difficulty in troubleshooting.

     

    In fact, being able to treat all of the drives in a SAN as one huge block of storage, with a different RAID level available for any given logical drive, means that any logical drive seen by any host can maximize the number of drives used for any given I/O.

     

    This also lowers the technical requirement for huge, static caches (big bucks that could be spent on applications, not infrastructure) because every request will inherently be spread across the maximum possible number of drives & will minimize I/O latency. More spindles participating is very, very, very good.   The SPC-1 public benchmarks prove this point.  There is no correlation between cache size and SPC-1 IOPS; on the other hand, there is a strong correlation between number of spindles and SPC-1 IOPS.

     

    In the end, the whole point of SAN/NAS storage is to be able to flexibly and securely put the data on drives at the fastest reasonable speed for the lowest total cost of owning and managing the system from birth to death. If the total cost (time and money) of a database over its life were not a concern, why would we DBAs have to think about using bigint vs. tinyint? Every penny and every minute you do not spend on storage could be better spent on pizza and beer.

     

  • RAIB (Redundant Array of Inexpensive Beers)!

  • When looking at SQL Server setup, it might be worth a visit to Microsoft's site for their recommended SQL layouts. I've previously used two 14-disk RAID 10 arrays for data, a 12-disk RAID 10 for backups, a 6-disk RAID 0 for tempdb, and either separate RAID 1 sets for transaction logs or a 6-disk RAID 10. I usually placed the log for tempdb on a separate drive. I suppose it's a matter of scale: a single disk can only support so many I/Os, and no matter how you look at it, for high-performance OLTP systems you need spindles.

    My real gripe about the provisioning of SANs is the tendency not to provide dedicated spindles for the SQL Server. When the spindles are shared, the I/O is split, and it is very easy to swamp the array and cause big problems for SQL Server.

    I agree a well-set-up SAN can perform - well, I'm told it can. Perhaps we should look at backup throughput; it is one of the easiest ways to compare a set of disks. How quickly does your backup complete?
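
    If anyone wants to run that comparison, the backup history in msdb is enough to work out raw throughput without involving the SAN team at all - roughly:

    [code]
    -- Backup throughput in MB/sec from the backup history (full backups only).
    SELECT  bs.database_name,
            bs.backup_start_date,
            CAST(bs.backup_size / 1048576.0 AS decimal(12,1))             AS backup_mb,
            DATEDIFF(second, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
            CAST(bs.backup_size / 1048576.0
                 / NULLIF(DATEDIFF(second, bs.backup_start_date, bs.backup_finish_date), 0)
                 AS decimal(12,1))                                        AS mb_per_sec
    FROM    msdb.dbo.backupset AS bs
    WHERE   bs.type = 'D'
    ORDER BY bs.backup_finish_date DESC;
    [/code]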

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • great information everyone....thanks!!!! 

  • In terms of performance, also keep in mind that not all SANs are created equal. A former employer switched from one brand to another (Xiotech to EMC), and immediately we noticed a 20% - 25% improvement in our I/O throughput. The increase was large enough to be very noticeable to our users.

    Of course, the improvement was anything but free. Both there and at my current employer, we calculate the cost of the EMC storage to be $220 per GB! Certainly not cheap. But in a little more than six years' combined usage, neither company has ever experienced data loss due to system failure. That, in my opinion, is the real reason to use a good SAN.

    /*****************

    If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek

    *****************/
