SQLServerCentral Editorial

How will SSDs change SQL Server storage arrays?

Solid State Drives (SSDs) and other flash-based devices are physically more robust, quieter and faster than hard disk drives. Even with these advantages, they have spent years on the margins of transactional database systems, due to their prohibitive cost and dubious reliability.

The relatively poor growth of the SSD market to date has not been due to a lack of interest from DBAs. They would like to see a reduction in the complexity of the storage arrays that underpin their SQL Server systems, and which often make it very difficult to track down the true cause of a performance problem. They would also like more performance at lower cost, and to be less reliant on SAN engineers. Glenn Berry's forthcoming SQL Server Hardware book, on which I'm currently working, hints that SSDs will, over the next few years, redefine the storage space.

RAM capacity has increased unremittingly over the years, and its cost has decreased enough to allow us to be lavish in its use for SQL Server, to help minimize disk I/O. CPU speed has increased to the point where many systems have substantial spare capacity, which can often be used to implement data compression and backup compression, to help reduce I/O pressure.
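As a minimal sketch of both techniques in T-SQL (the table name, database name and backup path below are purely illustrative, and both features are edition-dependent in SQL Server 2008 and later):

-- Rebuild a table with PAGE compression, trading spare CPU for reduced I/O
ALTER TABLE dbo.OrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Take a compressed backup, reducing both backup I/O and backup storage
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB.bak'
WITH COMPRESSION;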

While disk capacity has improved greatly, disk speed has not, and this poses a great problem: most large, busy OLTP systems end up running into I/O bottlenecks. When a database workload is characterised by a large number of small, random I/O requests, the main factor limiting how quickly that data is returned is disk latency: the time it takes the head to move physically across the disk to find the data (seek time), plus the time it takes for the platter to rotate until the required data passes under the head (rotational latency).
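As a rough, back-of-the-envelope illustration (the figures assume a 15,000 RPM drive with an average seek time of around 3.5 ms):

\[
t_{\text{rotation}} = \tfrac{1}{2} \times \frac{60\,\text{s}}{15{,}000} \approx 2\,\text{ms},
\qquad
t_{\text{seek}} \approx 3.5\,\text{ms}
\]
\[
\text{IOPS per spindle} \approx \frac{1}{t_{\text{seek}} + t_{\text{rotation}}} = \frac{1}{5.5\,\text{ms}} \approx 180
\]

In other words, a single spindle can service only a couple of hundred small random reads per second, regardless of how large its capacity is.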

This disk latency limitation led to the proliferation of vast SAN (or DAS)-based storage arrays, allowing data to be striped across numerous disks and so greatly enhancing I/O throughput. However, in trying to fix the latency problem, SANs have become costly, complex and fault-prone. They are generally shared by many databases, which adds even more complexity and often results in disappointing performance for the cost.

Since SSDs are so fast, with no seek or rotational latency, it seems that they are about to make vast storage arrays obsolete. An interesting, recently-updated whitepaper by James Morle, Sane SAN 2010, shows how SSDs have closed the gap between the speed of access to data in RAM and the speed of access to data on disk. He argues that unless your system needs to cope with in excess of 10,000 I/Os per second (500 MB/s in terms of bandwidth), your storage array is probably too complex anyway, and he suggests a simpler solution, based on SSD storage.

If there are any pioneering DBAs out there already using SSDs, or the astounding Fusion-io drive, in a database system, or running high-I/O systems on relatively simple storage, then I'd love to hear your stories.

Cheers,

Tony.
