Solid State Disks and SQL Server

  • Get a quote for 5 TB on the DSI3600 or a comparable TMS RamSan. Eight Fusion-io 640GB Duos get you 5.1 TB of raw space and will retail at about 124k (another 10k for chassis if you need them), and will get you roughly 5 GB/sec of throughput and ~800k IO/sec. I doubt TMS or DSI can touch that for the price.
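
    For what it's worth, the arithmetic behind those figures works out as below; the per-card split is just the quoted aggregate divided by eight, not a vendor spec:

    ```python
    # Back-of-the-envelope check on the numbers above. Per-card figures are
    # just the quoted aggregate divided by eight -- illustrative, not specs.
    cards = 8
    capacity_gb = 640                  # raw capacity per 640GB Duo
    agg_throughput_gb_s = 5.0          # quoted aggregate throughput
    agg_iops = 800_000                 # quoted aggregate IO/sec

    print(f"raw space: {cards * capacity_gb / 1000:.1f} TB")        # ~5.1 TB
    print(f"per card : {agg_throughput_gb_s / cards * 1000:.0f} MB/s, "
          f"{agg_iops // cards:,} IOPS")
    ```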

  • magarity kerns (4/8/2010)


    Has anyone here tried SQL Server on something like the RAM drives from Texas Memory Systems? They aren't persistent when the machine is off like SSDs are, but they're supposed to be even faster.

    Hi magarity, YES I have! Read all about it on my blog! Once you have worked with them... 😉

    www.henkvandervalk.com

  • Great article!

    Just some personal observations:

    1. On larger queries SQL Server read-ahead kicks in and it reads 512 KB blocks at a time. In my observations disks are much better at long sequential reads like this. They still don't come close to matching the performance of SSDs, but their performance improves markedly because all modern disks use sector skewing, which lets the head move from one track to the next during a sequential read without incurring any significant latency (the sectors are offset so that by the time the head has moved one track, the next sector is directly under it).

    2. One of the most critical factors for OLTP throughput is the speed at which log writes can be completed, because until they are reported as complete the transaction commit is not reported back to the database/application. The average size of log writes is very much determined by the site. If a site is running small "bank account" type updates, the average log write could be as small as 512 bytes; larger transactions will write larger transaction log blocks (in 512-byte increments). The interesting point is that the maximum log write is not 64 KB, as is written in many articles: if you use ProcMon you will see 61,440 bytes (120 x 512 bytes) being written as the maximum size. Some SSDs don't like 61,440-byte writes much (some SANs don't like them much either, actually).

    In testing, therefore, one should use something like FileMon or ProcMon to check the average size of log writes and use that value, and then also test the speed of 61,440-byte writes and 512 KB reads, in addition to the "standard" sizes Wes has tested above.
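
    A rough sketch of that kind of check, assuming a Process Monitor CSV export with the default columns and a Detail field like "Offset: 0, Length: 61,440, ..." (the file name and the .ldf filter below are placeholders for your own capture):

    ```python
    # Summarise transaction-log write sizes from a Process Monitor CSV export.
    # Assumes the default export columns; "procmon_log.csv" and the .ldf filter
    # are placeholders for your own capture.
    import csv, re, statistics

    sizes = []
    with open("procmon_log.csv", newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if row["Operation"] == "WriteFile" and row["Path"].lower().endswith(".ldf"):
                m = re.search(r"Length:\s*([\d,]+)", row["Detail"])
                if m:
                    sizes.append(int(m.group(1).replace(",", "")))

    if sizes:
        print(f"{len(sizes)} log writes, average {statistics.mean(sizes):,.0f} bytes, "
              f"max {max(sizes):,} bytes")   # the max should top out at 61,440
    ```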

    Brett Hawton

    Idera Product Architect

  • brett.hawton (4/9/2010)


    Great article!

    Just some personal observations:

    1. On larger queries SQL Server read-ahead kicks in and it reads 512 KB blocks at a time. In my observations disks are much better at long sequential reads like this. They still don't come close to matching the performance of SSDs, but their performance improves markedly because all modern disks use sector skewing, which lets the head move from one track to the next during a sequential read without incurring any significant latency (the sectors are offset so that by the time the head has moved one track, the next sector is directly under it).

    2. One of the most critical factors for OLTP throughput is the speed at which log writes can be completed, because until they are reported as complete the transaction commit is not reported back to the database/application. The average size of log writes is very much determined by the site. If a site is running small "bank account" type updates, the average log write could be as small as 512 bytes; larger transactions will write larger transaction log blocks (in 512-byte increments). The interesting point is that the maximum log write is not 64 KB, as is written in many articles: if you use ProcMon you will see 61,440 bytes (120 x 512 bytes) being written as the maximum size. Some SSDs don't like 61,440-byte writes much (some SANs don't like them much either, actually).

    In testing, therefore, one should use something like FileMon or ProcMon to check the average size of log writes and use that value, and then also test the speed of 61,440-byte writes and 512 KB reads, in addition to the "standard" sizes Wes has tested above.

    Brett Hawton

    Idera Product Architect

    Great reply, Brett!

    If you are using Standard Edition like I am, you don't get the full benefit of SQL Server's read-ahead feature.

    If you are running more than one database per set of disks you don't get the advantage of track-to-track reads either. If you can configure your disk arrays to isolate IO patterns, then you can take advantage of what disk drives do well: long sequential reads. At that point you are at the mercy of the RAID HBA you are using and whether it has enough bandwidth to sustain what the disks are putting out. Basically, if your disk configuration isn't optimal, swapping to SSD will fix most of it.

    On the log side, you are right about seeing the 61,440-byte write, but it should still be a sector-aligned write. The log file is written in sector-sized increments, as small as a single sector and as large as it needs to be. If you sector-align your partition it shouldn't be an issue.
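
    If you want to double-check that, a quick sketch: read the partition's starting offset (for example with "wmic partition get Name, StartingOffset") and confirm it divides evenly by your sector and stripe sizes. The two offsets below are just the common cases:

    ```python
    # Is the partition's starting offset an even multiple of the sector/stripe size?
    # Get the real offset from "wmic partition get Name, StartingOffset"; the two
    # values below are just the common cases.
    OLD_DEFAULT = 63 * 512        # 32,256 bytes -- the pre-Vista default, misaligned
    NEW_DEFAULT = 1_048_576       # 1 MB -- the Vista/Windows Server 2008+ default

    for name, offset in (("old 63-sector offset", OLD_DEFAULT),
                         ("1 MB offset", NEW_DEFAULT)):
        checks = {b: offset % b == 0 for b in (4_096, 65_536)}
        print(f"{name:22}: aligned to 4 KB = {checks[4096]}, 64 KB = {checks[65536]}")
    ```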

    Lastly, SQLIO won't do a 512-byte test; the smallest it will go is 1 KB, which for log simulation is kind of a bummer.

    I actually use ProcMon quite a bit to see exactly what is happening under the covers. That's where you will see SQL Server using async overlapped IO, which Idera still doesn't do for all its IO operations; you could get a bit of a bump by doing so, especially multi-threaded to a single backup file.

    Thanks!

    Wes

  • Hi Wes,

    Certainly, on Idera's SQLSafe backup and restore product we switched to async IO a while back.

    We also added (and patented) the use of a PID controller (http://en.wikipedia.org/wiki/PID_controller) in our IntelliCompress technology, which uses a feedback loop to re-select the right compression ratio several times a second during a backup/log offload. What this does is always pick the right compression ratio based on server use, server speed, and IO reads and writes. Typically with backup products you have to select a fixed compression ratio, but each of your servers runs at a different speed and has varying loads during the day (during each instant, actually), so whatever fixed compression ratio you select is pretty much always wrong. Fixed compression ratios are so pre-recession 🙂

    Feedback loops never make this mistake, and they have proven so good at balancing varying, opposing factors (such as available processor power, effective IO read speed, and effective IO write speed) that in a number of applications they have been outlawed because of the advantage they confer (Formula 1 launch control being a particularly good example).
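
    For anyone curious what that kind of feedback loop looks like in code, here is a bare-bones PID controller sketch - purely illustrative, not the IntelliCompress implementation; the gains and the setpoint are made up:

    ```python
    # Bare-bones PID controller -- an illustration of the feedback-loop idea only,
    # not Idera's implementation. Gains and setpoint are placeholders.
    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measured, dt):
            """Return a correction based on how far 'measured' is from the setpoint."""
            error = self.setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Sampled several times a second during a backup: map the correction onto the
    # compression level (raise it when I/O is the bottleneck, lower it when CPU is),
    # clamped to the codec's valid range.
    controller = PID(kp=0.05, ki=0.01, kd=0.0, setpoint=200.0)   # e.g. a target MB/s
    correction = controller.update(measured=150.0, dt=0.25)
    ```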

    Brett Hawton

    Idera Product Architect

  • Wesley Brown (4/9/2010)


    If you are using Standard Edition like I am, you don't get the full benefit of SQL Server's read-ahead feature.

    Well, that is true to some extent - but SQL Server still tries to issue large I/O requests wherever it can (for scatter/gather as well as sequential operations).

    For read-ahead, each large read request might be as small as 32 pages (256KB) or as large as 128 pages (1MB) on Standard Edition. Enterprise Edition raises the limit to 1,024 pages (8MB). Read-ahead is used extensively for data and index access, and even on some RID/KEY lookups and loop join operations - look for the WITH (UN)ORDERED PREFETCH iterator tag in the query plan.
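
    Expressed in bytes (just a sanity check - SQL Server data pages are 8 KB):

    ```python
    # SQL Server data pages are 8 KB, so the read-ahead limits above work out to:
    PAGE_KB = 8
    for pages in (32, 128, 1024):
        kb = pages * PAGE_KB
        print(f"{pages:>4} pages = {kb:>4} KB ({kb / 1024:g} MB)")
    ```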

    Many other operations in SQL Server generate large I/O too: log file initialization (512KB), bulk insert (multiples of 128KB), backup (1MB), restore (64KB), DBCC (varies)...

    I'm not trying to be smart or contradict you here (far from it), but large I/Os are pretty common in SQL Server, and it would have been nice to see some test results at those sizes, just to give the spinning magnets a fair go 🙂

    Enjoyed the article very much, thanks!

    Paul

  • Paul White NZ (4/11/2010)


    Wesley Brown (4/9/2010)


    If you are using Standard Edition like I am, you don't get the full benefit of SQL Server's read-ahead feature.

    Well, that is true to some extent - but SQL Server still tries to issue large I/O requests wherever it can (for scatter/gather as well as sequential operations).

    For read-ahead, each large read request might be as small as 32 pages (256KB) or as large as 128 pages (1MB) on Standard Edition. Enterprise Edition raises the limit to 1,024 pages (8MB). Read-ahead is used extensively for data and index access, and even on some RID/KEY lookups and loop join operations - look for the WITH (UN)ORDERED PREFETCH iterator tag in the query plan.

    Many other operations in SQL Server generate large I/O too: log file initialization (512KB), bulk insert (multiples of 128KB), backup (1MB), restore (64KB), DBCC (varies)...

    I'm not trying to be smart or contradict you here (far from it), but large I/Os are pretty common in SQL Server, and it would have been nice to see some test results at those sizes, just to give the spinning magnets a fair go 🙂

    Enjoyed the article very much, thanks!

    Paul

    Hey Paul!

    No, you aren't being "smart" at all! I gave a little one-liner and you expounded on it. I agree 100%: if you can do sequential reads, spinning disks still have a lot to offer. The root of the problem is that most people either don't or can't separate IO patterns on their disk arrays. Moving to SSDs negates some of that handicap.

    The other part of my short statement was to say that not everyone runs the Enterprise Edition of SQL Server; I should have been clearer.

    Thanks again!

  • brett.hawton (4/9/2010)


    Great article!

    Just some personal observations:

    1. On larger queries SQL Server read-ahead kicks in and it reads 512 KB blocks at a time. In my observations disks are much better at long sequential reads like this. They still don't come close to matching the performance of SSDs, but their performance improves markedly because all modern disks use sector skewing, which lets the head move from one track to the next during a sequential read without incurring any significant latency (the sectors are offset so that by the time the head has moved one track, the next sector is directly under it).

    Brett Hawton

    Idera Product Architect

    Your statement presumes that data is laid out on disk sequentially. I have never had a client that did anything rigorous to make sure their data was sequentially laid out on disk. There are many things that play into this and many places to go wrong: everything from file growth increments, to SQL Server settings (such as the startup flag that allows a 2MB allocation size), to data loading methodologies (think Fast Track here), to index maintenance. I would venture to say that less than 5% of all data on SQL Server is sequentially laid out on physical media. Some clients do actually have index maintenance set up (although usually incorrectly), and that is about it. SSDs completely eliminate the need for this...

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on Google's mail service

  • Blast from the past, eh?

  • IMO this article should be titled: Fusion IO and SQL Server.

    It is not helpful with regard to the biggest issues solid state drives have.

    I would have liked more details on the longevity of the drives, a real test of SLC vs MLC, and TRIM support for RAID (software OR hardware).

    Editing the article didn't bring it up to date. Come on SQLServerCentral / Red Gate, why did this make it into today's mailing?! You guys are better than that.

  • I could see using SSDs for log files rather than data storage. I would also have increased the RAM in the systems, since SQL Server is RAM intensive. The combination should give the best performance while reducing the total cost.

    Most of the performance issues I have found are related to poor file distribution: log and data on the same drive, etc. While data writes don't suffer as much, TLOG operations do, and that really has a negative effect on performance. Using a fast-access drive for TLOGs should really help with overall performance.

    Raymond Laubert
    Exceptional DBA of 2009 Finalist
    MCT, MCDBA, MCITP:SQL 2005 Admin,
    MCSE, OCP:10g

  • ryabel,

    When this article was written, Fusion-io was pretty much the main player in the enterprise space. I have covered several of the topics you are looking for on my blog, SQLServerIO, in the Fundamentals of Storage Systems series.

    I agree that the article could have a better title. Other than the title, was there anything else wrong with the article?

    -wes

  • Ray Laubert,

    If you have multiple log files on a single drive, that effectively makes it random I/O and a good candidate for solid state. If, however, you have spindles dedicated to a single log, performance is quite good since it is all sequential I/O.

  • Wesley Brown (3/9/2012)


    ...

    I agree that the article could have a better title. Other than the title, was there anything else wrong with the article?

    -wes

    I would have to disagree with the statement "Using SQLIO, or any benchmark tool, to get the maximum performance numbers from your system isn't that helpful...". I find an extensive SQLIO test set, over a variety of outstanding-IO values and a variety of IO block sizes, to be very valuable (a sketch of that kind of sweep follows the points below). Note that I recommend 10-20 minute tests of each block size, and I'd also recommend using up as much RAM as possible on the target server.

    First, it gives you an idea of the maximum performance you can expect in each condition, and of how the conditions relate to each other.

    Second, it lets you test various configurations (RAID levels, # of drives, stripe size settings, FusionIO options, etc.) and see how they affect each individual IO case. This gives you more information for tuning each particular system.

    Third, this technique highlights bugs, limitations, edge cases, and oddities in your particular setup. I once saw a system experience an unusually and inexplicably low write rate in certain cases, regardless of the spindle configuration or whether the drives were solid state or spinning platter. More "holistic" techniques would show consolidated numbers, and with those alone you would have a very difficult time figuring out that only one or two specific IO cases have abnormal results.
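
    A minimal sketch of that kind of sweep, driving SQLIO from a small script - the test-file path, duration, thread count, and switch values below are placeholders, so verify them against your copy of sqlio.exe before running:

    ```python
    # Sweep SQLIO over block sizes and outstanding-I/O depths. The path, duration,
    # thread count and switches are placeholders -- verify against your sqlio.exe.
    import itertools, subprocess

    TEST_FILE = r"T:\sqlio_test.dat"
    BLOCK_KB  = (1, 8, 60, 64, 256, 512)   # 60 KB covers the 61,440-byte log write case
    DEPTHS    = (1, 2, 8, 32)              # outstanding I/Os per thread
    DURATION  = 600                        # seconds; 10-20 minutes per case as suggested above

    for kind, pattern in (("W", "sequential"), ("R", "sequential"), ("R", "random")):
        for kb, depth in itertools.product(BLOCK_KB, DEPTHS):
            cmd = ["sqlio", f"-k{kind}", f"-s{DURATION}", f"-f{pattern}",
                   f"-b{kb}", f"-o{depth}", "-t2", "-LS", TEST_FILE]
            print(">", " ".join(cmd))
            subprocess.run(cmd, check=True)
    ```

    (As Wes notes above, SQLIO's smallest block size is 1 KB, so true 512-byte log writes can't be simulated with it.)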

  • I also have two production SQL Servers on HP DL385 G7s that run arrays of Intel 160GB drives in RAID 10. I'm using the built-in 410i controller with 512MB BBWC, and they just fly.

    Even during maintenance, reporting time, etc., the disks are never more than 5% busy, and IOs complete in less than 1-2 ms. I cannot bog them down.

    I bought consumer drives (but Intel, due to their good reputation with RAID cards) and planned on replacing them as they failed, since the price was about 10% of Fusion-io's, but I haven't had a single failure of any kind yet.

    I've been running them hard in production for 2 years, and monitoring the writes to the drives with HP's management tools, and haven't had a single one blink yet.

    Sometimes it's a heck of a lot cheaper to use multiply-redundant cheap drives than enterprise SSDs.
