A SAN Primer
Posted Saturday, June 4, 2005 1:20 PM | SSC-Addicted
Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/hscott/asanprimer.asp


Post #187655
Posted Wednesday, June 22, 2005 7:30 AM | Ten Centuries
Excellent article - this is one of those technologies I've had no hands-on experience with. I've read about it, wondered even more, and hoped that if I were asked about it during a meeting with a customer and their IT support people, I wouldn't sound silly. Now I think I'll at least be able to have a meaningful discussion. Thank you!


Post #192904
Posted Wednesday, June 22, 2005 7:37 AM | SSCrazy

Experience has shown (for me) that where SQL Servers are connected to a company SAN used by many other servers, performance is seriously lacking compared to DAS. Contention on the shared disks has proved to be a problem, although it took much digging before the physical layout of the disks was admitted.

Performance counters are very difficult, as there tends to be a reliance on the SAN provider to supply this data, and they appear reluctant to ever accept that there may be an issue.

I have monitored trends, but the counter values themselves are valueless compared to their real values with DAS, which makes it difficult, for example, to calculate the I/O limits. On a shared 5-spindle RAID 5 I can record I/O values in the thousands and queue values in the hundreds. I know these values are meaningless in isolation and should only be taken as an indication, but what would be a baseline, and what does a disk queue value of 268 actually represent?
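One way to sidestep vendor-supplied counters is to measure I/O waits from inside SQL Server, which reflect what the engine actually stalls on regardless of what the SAN layer reports. A minimal sketch using ::fn_virtualfilestats (present in SQL Server 2000 and 2005); the database ID 5 here is an illustrative assumption - look yours up first with SELECT DB_ID('YourDatabase'):

    -- Cumulative I/O statistics for every file in database ID 5
    -- (-1 for the second parameter returns all files in the database)
    SELECT  DbId,
            FileId,
            NumberReads,
            NumberWrites,
            IoStallMS,  -- total milliseconds spent waiting on I/O for this file
            IoStallMS / NULLIF(NumberReads + NumberWrites, 0) AS AvgStallMSPerIO
    FROM    ::fn_virtualfilestats(5, -1)

Sampling this twice, some minutes apart, and differencing the values gives a stall-per-I/O figure that can be baselined over time, independent of the Avg. Disk Queue Length counter.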



The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
Post #192910
Posted Wednesday, June 22, 2005 8:06 AM | SSC Eights!

Storage setups (in a SAN) are an art unto themselves. The usual way to get around contention for disk resources is to zone the SAN, which is akin to setting up a VLAN (short story: a private network for certain servers).

You can usually get better performance with direct attached storage, up to a point. Multi-terabyte data warehouse databases may underperform on DAS, whereas on a properly set up SAN they will shine. Plus, you can't beat the redundancy of a properly set up SAN for surviving hardware failures.




Post #192953
Posted Wednesday, June 22, 2005 8:41 AM | Forum Newbie
Hi all,

I enjoyed the article. The information here is accurate. I am in the early stages of a SAN implementation. We have purchased and taken delivery of our SAN. We have 14 x 146GB (15K) FC drives and 14 x 250GB (10K) SATA drives.

In planning our SQL SAN implementation, we were advised that for SQL it was better not to just create one huge LUN and divvy the space up into logical drives. We will be creating mirrored drives for both the database and the log files. The SATA space is where we will do snapshot and disk-to-disk (D2D) backups of our data; it will also provide some sandbox capacity for our IT and programming departments.

We were told that mirrored sets were the way to go for performance. Any comments?
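For what it's worth, the file placement that advice implies looks something like this at the SQL Server level. A minimal sketch, assuming hypothetical drive letters (E: one mirrored pair for data, F: another for logs) and made-up database, file names, and sizes:

    -- Data and log files on separate mirrored sets
    CREATE DATABASE Sales
    ON PRIMARY
        ( NAME = Sales_Data,
          FILENAME = 'E:\SQLData\Sales_Data.mdf',
          SIZE = 20480MB,
          FILEGROWTH = 1024MB )
    LOG ON
        ( NAME = Sales_Log,
          FILENAME = 'F:\SQLLogs\Sales_Log.ldf',
          SIZE = 5120MB,
          FILEGROWTH = 512MB )

Keeping the log on its own spindles matters because log writes are sequential; mixing them with random data-file I/O on the same physical disks defeats that access pattern.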
Post #192991
Posted Wednesday, June 22, 2005 8:49 AM | SSC Eights!

I got the best performance by creating multiple mirrorsets instead of one big RAID 10 set (striped mirrors). I created a filegroup made up of a file on each Windows drive (one per mirrorset).

I found that the Average Disk Queue Length did best when Windows was throwing I/O at multiple Windows drives. The RAID array handled the I/O fine with either one big drive or multiple small ones, but Windows choked on a single large drive (Avg. Disk Queue Lengths approaching 900, where the recommended target is 5). It looks like the Windows I/O subsystem was the bottleneck (as any Unix admin will tell you).
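A minimal sketch of that file-per-drive arrangement, assuming a hypothetical database named Sales and drive letters J: and K:, each backed by its own mirrorset (the names and sizes are illustrative):

    -- One data file per Windows drive; SQL Server's proportional fill
    -- spreads extent allocations across equally sized files in a filegroup
    ALTER DATABASE Sales ADD FILEGROUP DataFG

    ALTER DATABASE Sales
        ADD FILE ( NAME = Sales_Data1,
                   FILENAME = 'J:\SQLData\Sales_Data1.ndf',
                   SIZE = 4096MB )
        TO FILEGROUP DataFG

    ALTER DATABASE Sales
        ADD FILE ( NAME = Sales_Data2,
                   FILENAME = 'K:\SQLData\Sales_Data2.ndf',
                   SIZE = 4096MB )
        TO FILEGROUP DataFG

Because allocations round-robin across the files, Windows sees the load as I/O against several drives rather than one, which is exactly what helped the queue lengths here.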




Post #193000
Posted Wednesday, June 22, 2005 8:56 AM | Forum Newbie
Chris,

That advice and experience match what we were told by the vendor (IBM). They recommended two mirrored sets for each SQL Server: one for the database, one for the logs. Mirror for redundancy and fault tolerance.

We had also looked at the RAID 10 option and decided against it.

Thanks for the input.
Post #193006
Posted Wednesday, June 22, 2005 9:04 AM | SSC Eights!

I actually went further. Here's the original setup:

1 Mirrorset - Transaction Logs (2 disks). Windows Drive I:

1 RAID10 - Data - MDF and NDF file (8 disks). Windows Drive J:

 

I was getting bad performance, with huge Average Disk Queue Lengths, so I split up the disks like this:

1 Mirrorset - Transaction Logs (2 disks). Windows Drive I:

1 Mirrorset - Data - One MDF and One NDF file  (2 disks). Windows Drive J:

1 Mirrorset - Data - One NDF file  (2 disks). Windows Drive K:

1 Mirrorset - Data - One NDF file  (2 disks). Windows Drive L:

1 Mirrorset - Data - One NDF file  (2 disks). Windows Drive M:

(All the NDF files were in one filegroup, so all tables, indexes, etc. were 'striped' across those four files.) The other option is to place objects like tables on specific files (in their own filegroup) and their non-clustered indexes on other filegroups, as sketched below.
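A minimal sketch of that second option, assuming two hypothetical filegroups (TablesFG and IndexesFG) have already been created and populated with files as shown earlier; the table and index names are made up:

    -- Base table (and its clustered index) on one filegroup...
    CREATE TABLE dbo.Orders
    (
        OrderID    int      NOT NULL PRIMARY KEY CLUSTERED,
        CustomerID int      NOT NULL,
        OrderDate  datetime NOT NULL
    ) ON TablesFG

    -- ...and its non-clustered indexes on another, so index seeks and
    -- bookmark lookups hit different physical spindles
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        ON IndexesFG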

Windows performed much better, with Average Disk Queue Lengths (for J:, K:, L:, and M: combined) around 20, as opposed to Avg. Disk Queue Lengths of up to 900 when the same physical disks were in a RAID 10 set (and one Windows drive letter, J:).

 

BTW, stuff like the above is why it's good for DBAs to have a solid understanding of SAN architecture, or at least to have a good storage/systems engineer to work with. My background was systems engineering before becoming a DBA, and I think it has served me well.




Post #193015
Posted Wednesday, June 22, 2005 11:05 AM | Ten Centuries
Excellent article, thanks for the advice!
Post #193076
Posted Wednesday, June 22, 2005 11:33 AM | SSC Rookie
I have to agree that, in my experience, DAS is a better solution in some cases. I recently worked on a project where database sizes ranged up to 1.5TB, and most of the work was specialized data mining. On a 4-way, 8GB box with SAN drives, a subset of our standard queries took 4+ hours to run, even though we were the only user of the SAN during the test period.

I then pushed to create a 2-way, 4GB test box with a lot of DAS. We went with RAID 10 storage for tables and indexes and RAID 1 for the log files. TempDB had its own RAID 10 volume, using multiple files to avoid hotspots. The indexes and tables had their own filegroups, each comprised of multiple files. After the data was loaded onto that box and all of the tables and indexes were placed on the appropriate filegroups, we reran our test. It ran in less than 40 minutes.
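A minimal sketch of the multiple-tempdb-files part, assuming a hypothetical T:\ volume and four equally sized files (a common rule of thumb at the time was roughly one file per CPU); tempdev is the stock logical name of tempdb's first data file:

    -- tempdb ships with a single data file; adding more, all the same size,
    -- spreads allocation-page contention (the hotspots) across the files
    ALTER DATABASE tempdb
        MODIFY FILE ( NAME = tempdev, SIZE = 2048MB )

    ALTER DATABASE tempdb
        ADD FILE ( NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 2048MB )
    ALTER DATABASE tempdb
        ADD FILE ( NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 2048MB )
    ALTER DATABASE tempdb
        ADD FILE ( NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 2048MB )

Equal sizing matters because SQL Server's proportional-fill algorithm only balances allocations evenly across files of the same size.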

Both environments were retested, and we verified that the indexes and the like were the same and optimized. The conclusion I came to was that if the administrative overhead of the DAS boxes was worth the performance increase, then DAS was the way to go. On the other hand, if ease of backups and the ability to make quick copies of production were necessary, then SAN had a lot to offer.
Post #193089