SSD PCI Express vs. 15k drives

  • Greetings fellow DBAs --

    Just started a new DBA gig, and have unfortunately inherited a textbook example of how NOT to use SQL Server. The client is running 2008R2 Enterprise, with a db size of ~100GB. Due to massive performance problems they have already moved to SSD on PCI Express, and enabled Read Committed Snapshot Isolation (RCSI) because of frequent deadlocks.
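
    For reference, turning RCSI on is a one-liner, though it needs exclusive access to the database while the switch happens (the database name below is a placeholder, not the client's actual db):

        -- Enable Read Committed Snapshot Isolation.
        -- WITH ROLLBACK IMMEDIATE rolls back open transactions and
        -- disconnects other sessions so the ALTER can complete.
        -- "MyAppDb" is a placeholder for the actual database name.
        ALTER DATABASE MyAppDb
            SET READ_COMMITTED_SNAPSHOT ON
            WITH ROLLBACK IMMEDIATE;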

    The SQL environment is not run in-house, but is hosted. They are getting quotes on moving to a new hosting provider, and the specs I saw mentioned internal 15k magnetic drives. I warned them that this would likely be a mistake. Now they want to see a benchmark that would quantify the potential performance hit of making this kind of switch. I realize an apples-to-apples comparison like this may not exist, so I'll phrase it another way.
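
    In the meantime, I'm capturing our current I/O latency as a baseline to compare against whatever the new provider delivers. A minimal sketch using the standard file-stats DMV (numbers accumulate since the last instance restart, so a restart resets them):

        -- Average read/write stall per I/O, per database file.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.physical_name,
               1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
               1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
          ON vfs.database_id = mf.database_id
         AND vfs.file_id = mf.file_id
        ORDER BY avg_read_stall_ms DESC;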

    Is there a way to configure 15k drives, either internal to the server or as DAS, that would perform in the ballpark of PCIe SSDs?

    Thanks very much in advance for any advice.

  • Wondering how to go about moving this post to the SQL 2008 general forum. Moderator(s)?

    Thanks--

    sqlnyc

  • I found a benchmark test here: http://www.8088.net/blog/index.php/2011/01/24/iops-comparison-solid-state-drive-ssd-15000-7200-5900-5600-rpm/?lang=en#chart

    I haven't done 15k vs. SSD in a database environment, but on every machine I've ever tested, SSDs and platters just aren't comparable. Eliminating seek time on random (non-sequential) I/O alone puts them in different classes, before you even consider any other factor.
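
    Rough numbers to make that concrete (back-of-envelope estimates, not a measured benchmark):

        15k drive:  avg rotational latency = (60,000 ms / 15,000 rpm) / 2 = 2 ms
                    + ~3-4 ms average seek  =>  ~5-6 ms per random I/O
                    =>  roughly 170-200 random IOPS per spindle
        PCIe SSD:   no seek, no rotation; tens of thousands of random
                    IOPS per device is typical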

    Collateral advantages, like lower power consumption, and collateral disadvantages, like limited write endurance, probably don't matter as much in a hosted environment, since you won't be paying their electric/cooling costs, and won't pay directly for drive replacement if part of a RAID array fails. And endurance and reliability on modern enterprise-grade SSDs are pretty good, much better than 10 years ago, so that may not even be a factor.

    But in terms of I/O, it's kind of like asking "which is faster, a Schwinn or a Porsche 911?" Not the same breed of vehicle.

    Of course, code refactoring (and/or normalization) is usually the right answer to a slow database. But if hardware is the critical factor, hopefully the link above will help.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared - thanks so very much for the advice and the link. Exactly what I was looking for; despite searching often, I couldn't find anything like it.

    Much obliged --

    Best wishes,

    SQLNYC

  • Note also that replacing a failed PCIe SSD is going to require shutting the server down. Replacing hot-swap drives (SSD or spindle) does not.

    PCIe SSDs need to be in a software RAID or a SQL Server HA/DR setup to have serious failure tolerance; SAS and SATA drives generally use hardware RAID on modern servers.
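
    On 2008R2, the HA/DR side of that could be database mirroring. A minimal sketch of the final pairing step, assuming the endpoints are already created and the mirror copy is restored WITH NORECOVERY (server and database names are placeholders):

        -- Run on the mirror server first:
        ALTER DATABASE MyAppDb
            SET PARTNER = 'TCP://principal.example.com:5022';

        -- Then on the principal server:
        ALTER DATABASE MyAppDb
            SET PARTNER = 'TCP://mirror.example.com:5022';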

  • I wouldn't recommend PCIe SSDs, because they're just a pain. My advice would be to run 6 x 512GB SSDs in RAID-10. The thing to remember about SSDs in an array is that there's no TRIM support. TRIM is usually not a big deal in a database environment, because most data doesn't get deleted. You would also want to over-provision the drives, say by using only ~300GB of each 512GB drive. I would only recommend Intel or Samsung SSDs; don't buy any other brand. My favorites are the Samsung 830s.
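
    For sizing, the RAID-10 arithmetic (using the numbers above) works out like this:

        6 drives x 512 GB          = 3,072 GB raw
        RAID-10 (mirrored pairs)   = 3,072 / 2 = 1,536 GB usable
        over-provisioned to
        ~300 GB per drive          = (6 x 300) / 2 = ~900 GB usable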
