A SAN Primer
Posted Wednesday, June 22, 2005 12:00 PM
Forum Newbie


There are a couple of points I think need to be clarified.

 

In the article, two of the three types of storage (SAN and NAS) are treated as if they were the same.

 

SAN is Storage Area Network.

NAS is Network Attached Storage. 

 

They are not the same.

 

A NAS uses the same network as regular Ethernet traffic. Disk I/O contends with Google searches, business applications, and downloads from sites we never, ever visit. A SAN uses an entirely separate, dedicated network, based on Fibre Channel, for all I/O, so there is no contention from non-storage traffic.

 

The Storage Networking Industry Association (SNIA) is a very good source for SAN and NAS information (www.snia.org). Particularly helpful is their Technical Tutorials page at http://www.snia.org/education/tutorials/

 

Do you need to “build the physical arrays” (i.e. mapping a set of drives to act together, sometimes known as ‘binding’) or not?

 

It depends entirely on the SAN vendor. Some vendors offer systems that do not require building physical arrays at all, either by hand or behind the scenes, and without added complexity or difficulty in troubleshooting.

 

In fact, treating all of the drives in a SAN as one huge block of storage, with a different RAID level available for any given logical drive, means that every logical drive seen by any host can spread its I/O across the maximum possible number of drives.

 

This also lowers the need for huge, static caches (big bucks that could be spent on applications, not infrastructure), because every request is inherently spread across the maximum possible number of drives, minimizing I/O latency. More spindles participating is very, very, very good. The SPC-1 public benchmarks bear this out: there is no correlation between cache size and SPC-1 IOPS, but there is a strong correlation between the number of spindles and SPC-1 IOPS.
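A back-of-the-envelope sketch of that spindle math (the per-spindle IOPS figure below is an illustrative assumption for a 15k RPM drive, not a number from any benchmark):

```python
# Aggregate random IOPS scales roughly linearly with spindle count,
# independent of cache size, for cache-unfriendly random workloads.
PER_SPINDLE_IOPS = 180  # assumed figure for one 15k RPM drive

def aggregate_iops(spindles, per_spindle=PER_SPINDLE_IOPS):
    """Rough random-I/O capacity of an array, ignoring cache effects."""
    return spindles * per_spindle

for n in (8, 24, 72):
    print(f"{n:3d} spindles -> ~{aggregate_iops(n):,} IOPS")
```

In this simple model, doubling the cache changes nothing, while doubling the spindles doubles the IOPS, which is the pattern the SPC-1 results show.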

 

In the end, the whole point of SAN/NAS storage is to be able to flexibly and securely put the data on drives at the fastest reasonable speed, for the lowest total cost of owning and managing the system from birth to death. If the total cost (time and money) of a database over its lifetime were not a concern, why would we DBAs have to think about using bigint vs. tinyint? Every penny and every minute you do not spend on storage could be better spent on pizza and beer.

 

Post #193100
Posted Wednesday, June 22, 2005 12:08 PM


SSC Eights!

RAIB (Redundant Array of Inexpensive Beers)!


Post #193104
Posted Wednesday, June 22, 2005 1:17 PM
SSCrazy


When planning a SQL Server setup, it is worth visiting Microsoft's guidance on SQL Server storage. I've previously used two 14-disk RAID 10 arrays for data, a 12-disk RAID 10 for backups, a 6-disk RAID 0 for tempdb, and either separate RAID 1 sets for transaction logs or a 6-disk RAID 10. I usually placed the log for tempdb on a separate drive. I suppose it's a matter of scale: a single disk can only support so many I/Os, so no matter how you look at it, high-performance OLTP systems need spindles.
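To make the layout above concrete, here is a quick sketch of the usable capacity it yields. The 73 GB drive size is a hypothetical assumption, not from the post; RAID 10 gives half the raw capacity, RAID 0 all of it:

```python
def raid_usable_gb(disks, level, disk_gb):
    """Usable capacity for the RAID levels used in the layout above."""
    if level == "raid0":
        return disks * disk_gb         # striping only, no redundancy
    if level == "raid1":
        return disk_gb                 # two-disk mirror, one disk usable
    if level == "raid10":
        return (disks // 2) * disk_gb  # mirrored pairs, half usable
    raise ValueError(f"unhandled RAID level: {level}")

DISK_GB = 73  # assumed drive size, not a figure from the post
layout = {
    "data (2 x 14-disk RAID 10)": 2 * raid_usable_gb(14, "raid10", DISK_GB),
    "backups (12-disk RAID 10)":  raid_usable_gb(12, "raid10", DISK_GB),
    "tempdb (6-disk RAID 0)":     raid_usable_gb(6, "raid0", DISK_GB),
    "logs (6-disk RAID 10)":      raid_usable_gb(6, "raid10", DISK_GB),
}
for name, gb in layout.items():
    print(f"{name}: {gb} GB usable")
```

Note how much raw capacity the RAID 10 sets sacrifice for redundancy and write performance; the spindle count, not the usable gigabytes, is what buys the I/O.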

My real gripe about how SANs are provisioned is the tendency not to provide dedicated spindles for the SQL Server. When the spindles are shared, the I/O is split, and it is very easy to swamp the array and cause big problems for SQL Server.

I agree a well set up SAN can perform (well, I'm told it can). Perhaps we should look at backup throughput; it is one of the easiest ways to compare a set of disks. How quickly does your backup complete?



The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
Post #193156
Posted Wednesday, June 22, 2005 2:34 PM


Mr or Mrs. 500

great information everyone....thanks!!!! 
Post #193208
Posted Wednesday, June 22, 2005 3:51 PM


Ten Centuries


In terms of performance, also keep in mind that not all SANs are created equal.  A former employer switched from one brand to another (Xiotech to EMC) and we immediately noticed a 20% - 25% improvement in our I/O throughput.  The increase was large enough to be very noticeable to our users.

Of course, the improvement was anything but free.  Both there and at my current employer we calculate the cost of the EMC storage to be $220 per GB!  Certainly not cheap.  But in a little more than six years of combined usage, neither company has ever experienced data loss due to system failure.  That, in my opinion, is the real reason to use a good SAN.




/*****************

If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek



*****************/
Post #193257
Posted Thursday, June 23, 2005 6:44 AM
SSC-Addicted


Many thanks to all for your posts.  Some thoughts to add (in no particular order):

I have written an additional article based on my experiences designing storage on our array.  It is currently being edited and will be published in the future on this site.

I did not mean to confuse NAS with SAN, but I am not certain that I did, either.  I tried to highlight that the newer (since late 2004) CX series from Dell/EMC features iSCSI, which allows users to share Ethernet and storage traffic on the same network.  I don't believe these would be considered NAS devices.

I do not have a great deal of experience with different SAN vendors.  I did not highlight the brands I currently use in the article because it wasn't a product review and I did not want it to appear as such.  That said, we have installed here a Dell/EMC CX400 (all Fibre), a Dell/EMC CX500 (all Fibre) and an IBM DS4300 (all Fibre).  We use McData switches with the Dell equipment and Brocade switches with the IBM.  All told, I think we have around 12 TB of raw storage.  In a previous job, I was privileged to use a Symmetrix SAN (EMC) with their TimeFinder SQL Integration Module (TSIM).  What a pleasure to "snap" an 80 GB full database backup in 3 seconds!

With respect to DAS, I have previous experience with an IBM SCSI (can't remember the model) and current experience (not altogether pleasant) with the Dell PowerVault 220S (1 configured in a cluster, 1 configured as a split bus).  It's not fair for me to compare the performance with our SAN because the DAS is in our test environment and we have made some deliberate compromises in order to save $$$.

As regards performance, I could not be more pleased with the Dell/EMC line.  Two nights ago we loaded a copy of a 279 GB Oracle database from a FireWire external drive (Maxtor OneTouch II).  The copy took about 68 minutes (and I had to remove one of the HBAs to fit the PCI FireWire card, so there was no load balancing during the copy).  I regularly back up a 200 GB database in about 45 minutes on a different production SQL Server.
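For comparison, the average throughput those numbers imply (simple arithmetic, taking 1 GB = 1024 MB):

```python
def mb_per_sec(size_gb, minutes):
    """Average throughput of a copy or backup, in MB/s."""
    return size_gb * 1024 / (minutes * 60)

print(f"279 GB in 68 min: ~{mb_per_sec(279, 68):.0f} MB/s (FireWire copy)")
print(f"200 GB in 45 min: ~{mb_per_sec(200, 45):.0f} MB/s (SAN backup)")
```

This is the kind of easy like-for-like figure the earlier suggestion about comparing backup completion times gets you.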

Regards,

Hugh Scott




Post #193438
Posted Friday, June 24, 2005 6:33 AM
SSC-Enthusiastic


Great article!

Thanks




Post #193889
Posted Friday, June 6, 2008 8:03 AM
SSCommitted

How often are you doing SAN snapshots (both full database and transaction log snapshots, right?) and how often are you backing up to tape, after the SAN snapshots occur? Thanks.

Chris
Post #512918