
Should you buy a SAN?

By Andy Warren,

Thinking of implementing a SAN? Not sure what a SAN is, or how it works with SQL? I won't say I have all the answers, but I do have some recent experience that you might find interesting. Keep in mind I don't claim to be a SAN expert! References to storage will be primarily Dell hardware because that is what I use. There are plenty of similar options from other vendors and I don't have the experience to say one is better than another; we just happen to use Dell as our main vendor.

Probably a good place to start is with a brief (and simplified) overview of the types of storage:

  • Internal. One or more drives that actually are contained within the server. Usually this limits your drive count to five or six due to space constraints. Cheapest solution. Redundancy via RAID.
  • External - Simple. Consists of a container that holds one to fifteen (more or less) drives. Attaches to the server using SCSI and appears to the server as internal storage. A good example is the Dell PowerVault 220S. Cost is reasonable. Redundancy via RAID.
  • External - Network Attached Storage (NAS). NAS is really a dumb server with storage built in. You plug it in, put it on the network, and you've got more storage. Appears to the server as a network resource (you'd map a drive to it to use it). SQL can use this only if you enable trace flag 1807. Cost is reasonable. Redundancy via RAID.
  • External - Storage Area Network (SAN). It's not quite a server, but it does have a built in OS. Storage appears as internal storage to the server, but the SAN can support multiple servers. Highly redundant, RAID plus redundant hardware (power supplies, fans, etc). Cost is high. Example is the Dell CX400 (really an EMC product). Greater complexity to install and manage.
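The trace flag mentioned for NAS above deserves a quick illustration. A minimal sketch, assuming SQL Server 2000; the server and share names are hypothetical:

```sql
-- SQL Server refuses to put database files on a network path by default;
-- trace flag 1807 lifts that restriction so files can live on a NAS share.
-- Use with care - this is supported only in limited scenarios.
DBCC TRACEON (1807, -1)   -- -1 applies the flag server-wide

-- With the flag on, a database can be created on a UNC path, e.g.:
-- CREATE DATABASE NasTest
-- ON (NAME = NasTest_dat, FILENAME = '\\nasserver\share\NasTest.mdf')
```

Remember that trace flags set this way don't survive a restart unless added to the startup parameters.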

Pricing depends on who you buy from, of course, and on your pricing tier. Acquiring a terabyte of storage might cost you $20-25,000 at the low end for external or NAS storage, and six figures for a SAN. In very broad terms, if you need to add space to your file server, NAS or plain attached SCSI will work fine. If you need performance and high redundancy, then think about a SAN.

The idea of a SAN is to give you a "pool" of storage. Need more space? Just add more drives. It's not quite that simple, since drives need an enclosure (a box that holds them and provides power, cooling, and connectivity), and each enclosure holds 10-15 drives. If the enclosure is full, you add another enclosure plus drives, but to the SAN it just looks like more storage. To give you an example from my world, we went with the Dell CX400, starting with two enclosures, each holding 15 drives. It's expandable to four enclosures, and beyond that it's possible to do an in-place upgrade to the higher-capacity CX600. On the CX400 the maximum size of a logical unit (LUN) is 16 drives, so if I have a LUN configured with 5 drives and I need more space (or just more spindles), I can add drives to the LUN and then, depending on how the LUN is configured, expand the RAID set to stripe across all the drives.
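To put some illustrative numbers on that expansion (my assumption here is RAID 5, where one drive's worth of capacity goes to parity; the article doesn't specify the RAID level):

```sql
-- Usable capacity of a RAID 5 LUN is (drives - 1) * drive size.
DECLARE @drives int, @gb_per_drive int
SET @drives = 5
SET @gb_per_drive = 36
SELECT (@drives - 1) * @gb_per_drive AS UsableGB   -- 5 drives: 144GB usable

-- Expand the same LUN toward the 16-drive maximum:
SET @drives = 10
SELECT (@drives - 1) * @gb_per_drive AS UsableGB   -- 10 drives: 324GB usable
```

Doubling the drive count more than doubles usable space, and you pick up five extra spindles of I/O capacity at the same time.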

Redundancy is a big deal in the SAN world. Just about every component is redundant and hot swappable. Done correctly, this means not just the SAN itself, but also the switches used to connect to it and the host bus adapters (HBAs) on the server - the PCI cards that handle the connection to the SAN. Cluster a couple of servers and you have cables going every which way! Performance is also a big deal. Newer SANs use 2Gb Fibre Channel and have on-board cache. With a large cache, drive performance isn't as critical (in theory!), but you should still buy disks with the highest capacity and speed you can afford, and keep in mind that spindles still matter.

SANs also offer some capabilities you don't usually see with other storage types. Different vendors may use different names, but the three main types are snapshot, clone, and mirror. A snapshot is a virtual copy of a drive that completes in seconds. A clone is a true physical copy, and the time it takes depends on the amount of data and the drive speed. Mirroring is synchronous replication at the disk level, typically done to a remote site as far as 60km away.

Configuring the SAN initially isn't simple. Even for the simplest install you ideally need two HBAs per server, two switches (Brocade and McData are common vendors - expect to pay around $1000 per port), additional software to handle failover if any link fails, and switch zoning, plus setting up LUNs, setting up RAID, and don't forget the time it takes just to get it all into a rack. I was lucky enough to attend a four-day class on the CX400 - it's not rocket science, but I don't think I'd want to try to do it from just the book either. Once we returned home we still had a Dell/EMC tech do the actual install while we watched - worth every penny.

Once we got the SAN running, everything works the same. SQL sees the storage and it just works. The SAN itself should require very little administration once it's running (or so we hope).
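"SQL sees the storage" is literal: SAN LUNs presented to Windows show up as ordinary local drive letters, so SQL Server treats them like any internal disk. A quick sketch (the F: and G: paths are hypothetical SAN LUNs):

```sql
-- xp_fixeddrives (undocumented but widely used) lists each fixed drive
-- and its free space in MB; SAN LUNs appear here just like local disks.
EXEC master.dbo.xp_fixeddrives

-- Files on a SAN LUN are created exactly as they would be on internal storage:
-- CREATE DATABASE Sales
-- ON (NAME = Sales_dat, FILENAME = 'F:\MSSQL\Data\Sales.mdf')
-- LOG ON (NAME = Sales_log, FILENAME = 'G:\MSSQL\Log\Sales.ldf')
```

No trace flags or special configuration are needed on the SQL side; only NAS storage requires the trace flag mentioned earlier.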

So if you're thinking about buying more storage and you've decided a SAN is the right way to go, I've got a couple suggestions that might help:

  • Once you pick a vendor, go to the class before you purchase if you can. We didn't have a choice of vendor (I'm not complaining, Dell has been good to work with), but I hated buying something when I didn't fully understand its capabilities and limitations. Definitely your network dude needs to go; try to get the DBA in as well. You don't need to master it all, but you do need to know enough to make recommendations about what goes where. Classes aren't cheap, but neither is the SAN. Don't be penny wise.
  • Cost is always an issue. We had a choice of 36GB/10k RPM, 36GB/15k, or 73GB/10k drives. I would have preferred the 73GB drives, but the price was prohibitive if I still wanted 30 drives, so we went with the 36GB/15k. Maybe I could have gotten away with 15 of the 73GB drives, but fewer spindles didn't feel right - a performance decrease wasn't something I could risk.
  • Plan for the space, power, and cooling you'll need. Even if you have sufficient cooling, you may need to rearrange equipment or add a fan to eliminate hot spots. Right now we're seeing a 15 degree difference from one end of our server room to the other. Put in two racks containing six servers (two of them 8-proc boxes) plus 30 drives and you generate a LOT of heat. If you don't have a standalone temperature warning system, get one. If you lose AC, the temperature is going to climb in a hurry. The servers should shut down if they get too hot, but who wants to risk it?
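The arithmetic behind that drive decision is worth spelling out (raw, pre-RAID capacity; illustrative only):

```sql
-- Raw capacity of the two configurations under consideration:
SELECT 30 * 36 AS GB_30x36,   -- 30 spindles: 1080GB raw
       15 * 73 AS GB_15x73    -- 15 spindles: 1095GB raw
-- Nearly identical raw capacity, but half the spindles means roughly half
-- the aggregate random I/O throughput - the reason fewer, larger drives
-- felt like a performance risk.
```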

I'll be glad to answer questions about the Dell equipment or do a follow up going into more detail if there is enough interest.
