• mkolek99 (6/10/2013)


    They released directCache (http://www.fusionio.com/data-sheets/directcache/) not so long ago. I see a lot of potential for that product, given the overall effectiveness of transparent caching for accelerating a wide variety of applications. Keep in mind that directCache uses FusionIO drives as a write-through cache, so you'll want to avoid caching log files and possibly the tempdb system database, since their data changes too frequently for cached pages to be worth much. You can cache multiple volumes to a single caching device, and the most frequently accessed data is kept in the cache to service read requests. All writes always persist through to the backend storage system.
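
    To figure out which volumes to leave out of a write-through cache, one quick way is to rank files by write volume using SQL Server's sys.dm_io_virtual_file_stats DMV. A minimal sketch in standard T-SQL, no vendor tooling assumed:

        -- Rank database files by write volume to spot poor write-through
        -- cache candidates (log files and tempdb typically dominate writes).
        SELECT  DB_NAME(vfs.database_id) AS database_name,
                mf.physical_name,
                mf.type_desc,                          -- ROWS or LOG
                vfs.num_of_writes,
                vfs.num_of_bytes_written / 1048576 AS mb_written
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN    sys.master_files AS mf
                ON  mf.database_id = vfs.database_id
                AND mf.file_id     = vfs.file_id
        ORDER BY vfs.num_of_bytes_written DESC;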

    Note that you cannot use a single device for both cache and local storage. For example, if you want to house tempdb and a caching drive on the same host, you'll need more than one card.

    Most of our clients who have adopted FusionIO technology have used the cards to house the tempdb system database. You can always mirror two FusionIO drives inside a server for high availability, and Windows will read from both halves of the mirror to boost read performance. If you're using a Duo card, make sure to stripe the two drives rather than span them. Whenever possible, install two cards for HA. Also note that SQL Server 2012 supports local tempdb in failover cluster setups.
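
    For reference, pointing tempdb at a local FusionIO volume is a standard T-SQL change. The F:\ path below is just a placeholder for wherever the card is mounted, and tempdev/templog are the default logical file names on a stock install:

        -- Re-point tempdb's files at the FusionIO volume (F:\ is a placeholder);
        -- SQL Server recreates the files there on the next service restart.
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'F:\tempdb\tempdb.mdf');

        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'F:\tempdb\templog.ldf');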

    The FusionIO card driver requires physical memory to operate, and the amount depends on the block sizes being written to the drive. Applications like SQL Server are predictable in this respect, as they typically use the same block size for their transactions. If your database is 1TB in size, my understanding is that the total memory footprint for the driver is roughly 1.5GB.
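
    If that ratio holds, it works out to roughly 1.5MB of RAM per GB of data, so a 500GB database would put the driver's footprint somewhere around 750MB. Worth budgeting for alongside SQL Server's own max server memory setting on hosts that are already memory-constrained.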

    It’s important to follow the setup directions. Down-format the drive below its rated capacity to give the groomer process free space to operate in; otherwise, over time, after many writes to the device, you will see performance degrade, and only a reformat will bring it back. If, for instance, the 785GB drive is down-formatted to 700GB, you should never have an issue with the groomer. For extremely write-intensive workloads you can also use an SLC drive instead of MLC, but in most cases down-formatting is more than enough.
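
    To put that example in percentage terms: formatting the 785GB drive down to 700GB permanently reserves about 85GB, roughly 11% of raw capacity, for the groomer to work in, which is in line with the over-provisioning rules of thumb commonly quoted for flash devices.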

    The quickest health check for performance issues is the average disk seconds per transfer (Avg. Disk sec/Transfer in PerfMon) on the ioDrive, which should average under 1ms. It’s important to trend these numbers over time to ensure the cards keep performing as expected.
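
    If you'd rather trend latency from inside SQL Server than from PerfMon, here's a sketch against sys.dm_io_virtual_file_stats. Keep in mind its counters are cumulative since the last service restart, so for a real trend you'd snapshot and diff them over time:

        -- Average I/O latency (ms) per file since the last restart; files on
        -- the ioDrive should stay under roughly 1ms.
        SELECT  DB_NAME(vfs.database_id) AS database_name,
                mf.physical_name,
                1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
                1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN    sys.master_files AS mf
                ON  mf.database_id = vfs.database_id
                AND mf.file_id     = vfs.file_id
        ORDER BY avg_write_ms DESC;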

    Assuming the setup directions are followed, the technology is great. The amount of I/O it removes from shared storage lets the SAN or DAS service all other requests faster. The performance and, most importantly, the reliability of these cards often justify the cost. Our clients who have used them as local storage have been satisfied; the response times can't be beat, since the card plugs directly into the board and there's no legacy SATA or SAS protocol or switch fabric to traverse.

    Thanks very much for the feedback, mkolek99. I've read a bit about directCache and it looks good. You've given some comprehensive advice on using it, which is great for me and other users.

    Thanks for the practical advice on setup configurations too.

    Much appreciated.

    Tim
