Notes about Server SSDs

Note: This isn’t a recommendation or endorsement, just notes from some research I’ve done. Please contact your vendor before making purchasing decisions!

I’ve had an SSD (solid state drive) in my laptop for a while now, and there are times when the performance is downright impressive. SSDs have been making some inroads into the world of servers too; I just haven’t had much time (or need) to investigate, but I’ve definitely been curious. Kendal Van Dyke did some testing with drives from FusionIO, and I’ll admit to hardware envy!

They are still expensive – more on that in a minute – but in the small to mid sized market I think they are a very interesting option. It’s not uncommon for a server to reach the point where IO is the bottleneck even after tuning, and adding significant IO is expensive and complex for businesses that just want it to work and rarely have a data center. It’s interesting to look at the idea of recommending an SSD instead of direct attached or SAN storage. The SSD is pretty close to plug and play: no significant new skills needed, no extra power needed, etc. I realize there are other places where these make sense too, but for small businesses, being able to generate IO equivalent to a small SAN without hiring SAN expertise is compelling.

That’s the high level; what about the details?

I set up a call with FusionIO to talk about pricing and implementation. I haven’t actually implemented one yet; this is just the research to help me understand the process and find out things I might not expect – I don’t want to surprise the clients. It was a very useful call, and I’m sharing my notes here:

  • SSDs are consumable, which is definitely different from the way we think of conventional hard drives. Fusion says their drives are rated for 2.5 TB of writes per day to have a life of 10+ years; if you write more than that, the write life decreases. It was interesting to hear that when the write life is used up, the drive still operates as read only.
  • The form factor on these cards is mostly half height, they require an x4 or x8 PCI Express slot, and they run on 64 bit systems only
  • The driver can use up to 2 GB of memory (something to plan for!) to cache the lookup table for the card.
  • The drive isn’t bootable
  • The drive has built-in redundancy and error checking, but you can only get RAID through software, that is, provided by Windows. The stated reason for this is that a hardware RAID controller would be a bottleneck. I can’t say I’m excited about software RAID, but maybe that’s overly cautious?
  • Placement of the card in the server can affect performance and life; it needs good cooling, and depending on the server an extra fan may be recommended. Some of the cards also require additional power beyond what they get from the slot. Lesson – make sure you tell the vendor which server it will be installed in so you can find out the gotchas.
  • Fusion gives a 5 year warranty for failure, but not for expending the allotted writes. The basic warranty provides replacement (which could be a refurb) in 7 days; for an extra 20% you can get 24 hour replacement.
  • Don’t do a physical defrag of the drive; it isn’t needed. Still do index rebuilds, but from a write management perspective a script that only rebuilds the indexes that need it makes sense on these drives (see the sketch after this list).
  • The cards write alerts/messages to the event log, send SNMP traps, and have a GUI so you can check the status and remaining life.
  • Right now the drives list at $3,200 for an 80 GB drive ($40/GB), and that per-GB pricing stays in effect all the way up through the current maximum drive size of 640 GB. You may get better pricing through Dell, HP, or another reseller.
  • List IO is 100k IOPS for reads only and 80k IOPS for a 75% read / 25% write mix. Even if real world performance is a little less, that’s impressive!
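
To put the “only rebuild what needs it” idea in concrete terms, here is a minimal sketch of the kind of script I have in mind. It assumes SQL Server 2005 or later and uses sys.dm_db_index_physical_stats; the 30% fragmentation threshold and the 1,000-page floor are placeholder numbers, not anything FusionIO suggested.

    -- Sketch: rebuild only the indexes that are heavily fragmented, to limit write volume.
    -- The thresholds below are illustrative, not recommendations.
    DECLARE @schema sysname, @table sysname, @index sysname, @sql nvarchar(max);

    DECLARE frag_cursor CURSOR LOCAL STATIC FOR
        SELECT OBJECT_SCHEMA_NAME(ips.object_id),
               OBJECT_NAME(ips.object_id),
               i.name
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
            ON i.object_id = ips.object_id
           AND i.index_id  = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 30   -- only the ones that need it
          AND ips.page_count > 1000                   -- skip tiny indexes
          AND i.name IS NOT NULL;                     -- skip heaps

    OPEN frag_cursor;
    FETCH NEXT FROM frag_cursor INTO @schema, @table, @index;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'ALTER INDEX ' + QUOTENAME(@index)
                 + N' ON ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
                 + N' REBUILD;';
        EXEC sp_executesql @sql;
        FETCH NEXT FROM frag_cursor INTO @schema, @table, @index;
    END

    CLOSE frag_cursor;
    DEALLOCATE frag_cursor;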

On a per-GB basis it’s crazy expensive. On a per-IO basis I think the math starts to make more sense: at list numbers, $3,200 for roughly 80,000 IOPS works out to about $0.04 per IOPS. That’s especially true if you factor in the costs likely to hit a small business – more power, more rack space, training someone to manage external storage, etc. It’s still a good chunk of cash though.

The best/easiest model would be to put everything – TempDB, logs, databases – on a single SSD and use the existing drives for backups. If cost is an issue, and it will be at these prices, then you have to do the work to figure out how to make the most of a smaller SSD. Maybe that’s moving TempDB; maybe TempDB usage is low and you’d do better to put the database on it, or maybe just a filegroup containing the hot objects. When you get to a filegroup strategy, moving the filegroup is easy enough; it’s deciding what to put in it that takes the real work. As I think about that, I wonder how often the part-of-a-server/part-of-a-db plan works with limited space. Do you wind up using only a percentage of the available IO because you can’t fit more data on the drive?
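
To make those placement options a little more concrete, here is a rough sketch of what moving TempDB, or a hot filegroup, onto the SSD might look like. The F: drive letter, the MyDB database, and the dbo.Orders index are all hypothetical names; this shows the shape of the work, not a tested script.

    -- Sketch: point TempDB at the SSD (takes effect after the SQL Server service restarts).
    -- The logical names tempdev/templog are the defaults; check yours first.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\SSD\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SSD\templog.ldf');

    -- Sketch: a filegroup on the SSD for the hot objects, then move an existing
    -- nonclustered index onto it with a rebuild.
    ALTER DATABASE MyDB ADD FILEGROUP HotData;
    ALTER DATABASE MyDB
        ADD FILE (NAME = HotData1, FILENAME = 'F:\SSD\MyDB_HotData1.ndf', SIZE = 10GB)
        TO FILEGROUP HotData;

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        WITH (DROP_EXISTING = ON)   -- assumes this index already exists; the rebuild moves it
        ON HotData;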

I’m curious to see how SSD use evolves. Will it make us as DBAs write conscious to a far greater degree? Short of using mirroring, I tend to use maintenance plans and just rebuild it all; it’s easy, and it’s easy for the next person to understand. But when the writes really count, even off peak, do we change that strategy, or does it turn out not to matter because we’re still under the threshold for the projected life expectancy? Compression starts to be even more interesting too.
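
One way to start on that question is to measure how much SQL Server is actually writing today. Here is a minimal sketch using sys.dm_io_virtual_file_stats; the counters reset when the instance restarts, so divide by days of uptime for a crude daily number to compare against the 2.5 TB/day rating.

    -- Sketch: bytes written per database file since the last restart.
    SELECT DB_NAME(vfs.database_id)                        AS database_name,
           mf.physical_name,
           vfs.num_of_bytes_written / 1024.0 / 1024 / 1024 AS gb_written_since_restart
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id
    ORDER BY vfs.num_of_bytes_written DESC;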

I guess the big point for me is the redundancy side. At a starting price of $3,200 we’re probably not going to keep an extra drive on hand, and you can’t just run to CompUSA for a replacement. Plan A seems to be to switch everything back over to a regular drive and wait on the replacement; Plan B (required in most decent sized environments) is to go the software RAID route and double the drive cost.

Interesting stuff.
