Solid State Disks and SQL Server

  • Comments posted to this topic are about the item Solid State Disks and SQL Server

  • I'm surprised your article didn't mention anything about first testing whether database IO is actually the bottleneck in your environment. We tested a Fusion-io card, and whilst it had similar IO results, the speed improvement in the application was virtually zero.

    I'd suggest that anyone looking at SSD drives for databases first do performance monitoring on their existing hardware, to find out whether they'd be adding unnecessary risk and waste for something they don't require. (A starting-point query is sketched below.)
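
    As a rough starting point for that kind of check, here is a minimal sketch (not from the original poster) that computes average read/write stalls per database file from the documented sys.dm_io_virtual_file_stats DMF; consistently high stall values are one sign that IO latency really is the bottleneck:

    ```sql
    -- Average IO stall (latency) per database file since SQL Server last started.
    -- High avg_read_stall_ms / avg_write_stall_ms values suggest an IO bottleneck.
    SELECT DB_NAME(vfs.database_id)                           AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
           vfs.num_of_writes,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_stall_ms DESC;
    ```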

  • Nice article sir, thorough analysis at the I/O level, and well explained. 🙂

    Atlantis Interactive - SQL Server Tools
    My blog
    Why I wrote a SQL Query Analyzer clone

  • The fastest way to improve database performance is to put tempdb on a RAM disk (free download at dataram.com). That works even if you have to reduce the RAM available to the database itself. tempdb is created from scratch every time SQL Server starts, so put the temp files in the root directory of the RAM drive (a sketch of the move is below).

    I had an HP DL385 G1 with 4GB RAM, 2x 2.2GHz Opterons, and 6x 72GB 15K U320 disks in a RAID 10, running Windows 2008 x64 and SQL Server Standard x64. I gave SQL Server 2.5GB of RAM and put tempdb on a 512MB RAM drive, leaving 1GB for Windows (plus the pagefile). This performed very well for a 22GB database. In production I have more RAM (25GB) but still take some from SQL Server and use it for a RAM drive.

    I strongly recommend Quest Spotlight on SQL Server for further optimization. I bought it before investing in new hardware, but there is also a 30-day trial version.
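
    For anyone trying this, a minimal sketch of the tempdb move follows, assuming the RAM disk is mounted as drive R: (the drive letter and file names are illustrative, not from the setup above). The new paths take effect the next time the SQL Server service restarts:

    ```sql
    -- Point tempdb's data and log files at the RAM disk (R: is an assumed drive letter).
    -- SQL Server recreates tempdb at these locations on the next service restart.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = N'R:\tempdb.mdf');

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = N'R:\templog.ldf');
    ```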

  • I'm surprised you didn't set the sector size to 8KB. That's the page size for SQL Server and would have been a better choice than 4KB.

    On my production servers, I usually format the volumes that don't require compression (i.e. non-backup volumes) and are used exclusively for SQL Server with a 64KB allocation unit. This keeps the MFT nice and small, ensures minimal fragmentation, and fits well with SQL Server's 64KB extent size: allocating an extent maps directly to a single allocation unit on disk.
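
    As an aside, one way to verify how a volume was actually formatted, from T-SQL, is a quick sketch like the following (it assumes xp_cmdshell is enabled and E: is the volume in question). "Bytes Per Cluster" in the output is the NTFS allocation unit size (65536 for a 64KB format), and "Bytes Per Sector" shows the underlying 512-byte or 4KB sector size:

    ```sql
    -- Assumes xp_cmdshell is enabled and E: is the SQL Server data volume.
    -- 'Bytes Per Cluster' = NTFS allocation unit size (65536 for 64KB).
    -- 'Bytes Per Sector'  = physical sector size (512 or 4096).
    EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo E:';
    ```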

  • We use Fusion-io for tempdb too. Beautiful! But why all the messing around with PCI expansion and so on? If you need more space than you can squeeze into a box, why not get a solid-state SAN? Then all you have to do is find enough space for HBA cards to give you enough bandwidth.

    Now, I don't have one of these, so I can't recommend them or comment on their suitability for SQL Server, but they do look the biz and I'd love to give one a test drive.

    http://www.ramsan.com

  • A solid-state SAN won't come close - maximum bandwidth per slot is 4Gbps (500MB/s). Plus you have the added latency from the fabric switches, etc. But the real SAN-SSD killer is cost: it's prohibitive for a dedicated IO subsystem.

  • What do you mean, per slot? Per port, yes, but you can get dual-ported HBAs, which will double that, so two dual-ported cards give you 2GB/s. And that's if you stay on Fibre Channel. What about InfiniBand? That's 10 or 20Gb/s, isn't it?

    Well, I've never poked around the price of these things, but if you need 4TB of storage at that level, I would expect the hoops you'd have to jump through to get enough Fusion-io cards into a single system would probably make a SAN fairly cost-effective, and certainly simpler and easier to maintain.

    I guess you also get into the traditional SAN vs. DAS arguments about storage efficiency, shareability, and so on.

    I'd still like one for christmas 🙂

  • Look at the price of a regular SAN with regular disks: it still can't compete on price! I did a price/performance comparison of a single MSA70 (25 disks) against an EVA3000, and the MSA70 was 8x cheaper per MB/s. And that's a really cheap EVA. As you move to bigger SANs, the cost per MB/s gets higher still; they're not designed to deliver high-performance service levels to an individual server.

  • Great article. Thanks for sharing!

  • brad.c-774062: Since we do a lot of large random IO, there was no way a standard disk array could keep up with the SSD. We monitor every aspect of our IO system down to the database file level. I agree: without testing, how do you know whether your particular workload will see an improvement?

    Kim Lesden: With a tempdb of 120GB, a RAM disk wasn't an option.

    ian-707364: I'm not sure what you mean by setting the sector size to 8KB; sector sizes on a disk are either 512 bytes or 4KB on most modern disk systems. If you are talking about formatting the file system, I always format with a 64KB block size.

    Ken Gaul: As others have pointed out, it's all about cost. An SSD SAN would be 10x the cost of a similarly sized Fusion-io solution with breakout boxes.

    Thanks for the feedback!

    -wes

  • "There is no other way to do this other than copy all the data to a new database that is created on the new 4KB sector size."

    What's a reference for this?

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Jonathan,

    http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/SQLIOBasicsCh2.doc

    A SQL Server database can be restored or attached on a system that has a smaller sector size. To help guarantee data integrity, the larger sector size should be evenly divisible by the smaller sector size. For example a 4 KB source restored/attached to a 1 KB or 512-byte destination is an evenly divisible size. Restoring or attaching to a smaller sector size such as 1536 bytes does not fill the “evenly divisible” requirement and immediately requires sector rewrite operations to occur.

    SQL Server is not designed to dynamically upgrade the database to the larger sector sizes. SQL Server disallows restoring or attaching a database on a system with a larger sector size; it generates an error message and prevents the restore or attach operation. Enabling a database to function by using a smaller formatted sector size than the actual sector size violates the WAL protocol because the sector size variation guarantees the log records will not be correctly aligned with the physical sector size and log records will be rewritten.

    At the time of publication, some current subsystems report sector sizes larger than 512 bytes but most do not. The larger sector sizes involve newer technology and require API changes at the operating system level. Future changes in Microsoft® Windows and SQL Server will correctly support the larger sector sizes and allow for dynamic adjustment of sector sizes.

    You can go from 4KB to 512 bytes, but not the other way. This specifically affects the transaction log as well.
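
    For anyone checking their own systems: SQL Server notes sector-size issues in the error log, so one quick (if rough) check is to search the current log for sector-related messages. A sketch follows; xp_readerrorlog is undocumented and the exact message text varies by version, so treat this as a convenience rather than an API:

    ```sql
    -- Search the current error log (file 0, SQL Server log type 1)
    -- for entries mentioning 'sector'; undocumented but widely used.
    EXEC master..xp_readerrorlog 0, 1, N'sector';
    ```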

  • I think Brent Ozar tested the HP version of this on tempdb a few weeks back. HP rebrands these, but I think they only sell the blade server version under the HP brand, and they are very expensive.

    A few years ago I experimented with RAID 10 on a new BI server we built. I created a few RAID 10 volumes; performance was good, but not that much better, and I constantly ran out of space, and the people who controlled the budget for this server constantly complained about buying new drives. I finally blew it away in favor of a few RAID 5 and RAID 1 volumes. If I ever do it again, I would opt for multiple RAID 1 volumes and split the data/index and log files across them.

    I would like to see a test comparing SSDs with an MSA70 full of RAID 1 volumes instead of RAID 10.

    One nice thing about SSDs is that they use a lot less power; I compared a consumer SSD to a consumer hard drive, and it used something like 1/10 the power. Intel just started producing flash memory on its new 25nm process, and the first products should come out in the fourth quarter. Later this year and early next year we should see a huge drop in SSD prices. Toshiba is also experimenting with sub-25nm manufacturing, but its largest customer is Apple. Over the next few years I think there will be a huge ROI case to be made for SSDs on the electricity savings alone.

  • Thank you for such an informative post. I've been involved in initial benchmarking of more standard enterprise SATA SSDs on a local RAID controller, and while the results are impressive, the gains are largely confined to random workloads. Our test set was also very limited: primarily 8KB and 64KB random reads and 8KB random writes (64KB random writes were merely double an equivalent 600GB 15K RAID 1 pair).

    Question: Did you test degraded mode performance by, for instance, pulling one Fusion-IO card out (simulating total failure)?
