Server-side caching is one of those neat technology concepts that has captured my attention and imagination for the past few years. The idea of locally serving blocks of data has gone through a number of architectural iterations, with the initial primary offering being a way to shift commonly read “hot” blocks closer to the server workload. VMware View’s CBRC (Content-Based Read Cache) does this using the host’s RAM, but the concept of server-side caching is the same – as read/write IO flows through a workload, the local cache (be it PCIe flash, SSD, or RAM) holds on to the data and serves it locally if the blocks are requested a second time. This is a sort of “slam dunk” when we’re talking about virtual desktops with a common workload profile, but it can still be quite impressive for disparate server workloads.
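To make the “serve it locally the second time” idea concrete, here is a toy read cache in Python. This is purely my own illustration (the class, names, and LRU eviction policy are my assumptions, not anything FlashSoft or CBRC actually does internally): reads fall through to the slow array on a miss, and repeat reads of a hot block are answered from the local cache.

```python
from collections import OrderedDict

class BlockReadCache:
    """Toy server-side read cache (hypothetical sketch, not vendor code)."""

    def __init__(self, capacity, backend_read):
        self.capacity = capacity          # max cached blocks (stand-in for SSD/RAM size)
        self.backend_read = backend_read  # fallback path to the slow SAN/array
        self.cache = OrderedDict()        # LRU order: least recently used first
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:
            self.hits += 1
            self.cache.move_to_end(lba)   # mark block as recently used
            return self.cache[lba]
        self.misses += 1
        data = self.backend_read(lba)     # expensive trip to the array
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data

# A repeated "hot" read is served locally the second time around.
cache = BlockReadCache(capacity=2, backend_read=lambda lba: b"block-%d" % lba)
cache.read(7)
cache.read(7)
print(cache.hits, cache.misses)
```

The first `read(7)` misses and pulls the block from the backend; the second is a local hit, which is exactly the win server-side caching is after.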
Note: All travel and incidentals were paid for by Gestalt IT to attend Storage Field Day 3. No other compensation was given.
Various Architectures – One Approach
I’ve seen many different ways to implement flash in a server, and so has the FlashSoft team. They’ve thrown out a lot of methods that both their engineers and customers have found distasteful – including guest agents (bleh!) and gateway virtual appliances. FlashSoft instead leverages a kernel-level loadable module in VMware, which requires a flash device in the host for caching and gives it the bonus advantage of supporting both VMFS and NFS storage at a very granular extent level. The architecture supports all of the features an enterprise should expect – HA, vMotion, Snapshots, Clones, Storage vMotion, etc. – and can be managed via a vSphere Client plugin.
Log Structured Cache
The process of garbage collection involves reading and rewriting data to the flash memory. This means that a new write from the host will first require a read of the whole block, a write of the parts of the block which still include valid data, and then a write of the new data. (source)
With FlashSoft, all writes are collected, sequentialized (in the case of random writes), and ultimately written to fill all pages within a block on the SSD, sidestepping that read-modify-write penalty. The product also claims a memory footprint of only about 140 MB to hold the metadata cache, and less than 3% of the CPU (I assume of one core).
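A minimal sketch of that sequentialization idea, under my own assumptions (toy block geometry, invented class and field names – this is not FlashSoft’s code): incoming random writes are appended to an in-memory log, and the log is flushed to the SSD only as a full block of pages, so the device never has to read and rewrite a partially valid block.

```python
PAGES_PER_BLOCK = 4  # toy geometry; real SSD blocks hold far more pages

class LogStructuredWriteCache:
    """Sketch of write sequentialization (illustrative only)."""

    def __init__(self, ssd):
        self.ssd = ssd       # list of fully written blocks (stand-in for the SSD)
        self.log = []        # pending (lba, data) pairs, in arrival order
        self.metadata = {}   # lba -> (block_index, page_index) mapping

    def write(self, lba, data):
        self.log.append((lba, data))
        if len(self.log) == PAGES_PER_BLOCK:
            self._flush_block()

    def _flush_block(self):
        block_index = len(self.ssd)
        # One sequential write of a complete block -- no read-modify-write.
        self.ssd.append([data for _, data in self.log])
        for page_index, (lba, _) in enumerate(self.log):
            self.metadata[lba] = (block_index, page_index)
        self.log.clear()

ssd = []
cache = LogStructuredWriteCache(ssd)
for lba in (90, 3, 41, 7):   # scattered, random LBAs...
    cache.write(lba, b"data")
print(len(ssd))              # ...land on the SSD as one sequential block write
```

Four random-address writes end up as a single full-block write, with the metadata map remembering where each logical block now lives – which is also why that metadata needs protecting, as the next section covers.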
FlashSoft Snapshots Anyone?
These probably aren’t the snapshots you’re thinking of, but are instead a mechanism that FlashSoft uses to get you back up and running in the event of a host server failure. Periodically, the software writes a “snapshot” of the metadata to the SSD. If you’ll recall from above, the metadata cache is stored in the host’s memory. So, if a host needs to recover from a failure, the FlashSoft software can reach out to a snapshot on the SSD, load the metadata from that point, and then find the missing writes to map into the metadata cache. This allows the system to get back into a hot, usable state within milliseconds.
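The recovery flow described above can be sketched in a few lines. Everything here is my own hypothetical model (names, snapshot format, and log layout are assumptions, not FlashSoft internals): the in-RAM metadata map is periodically checkpointed to the SSD, and after a crash the software loads the last checkpoint and replays only the log entries written since it.

```python
class MetadataRecovery:
    """Sketch of snapshot-based metadata recovery (hypothetical model)."""

    def __init__(self):
        self.metadata = {}       # in-RAM map: lba -> position in the SSD log
        self.ssd_log = []        # persistent write log on the SSD (survives a crash)
        self.snapshot = ({}, 0)  # (metadata copy, log position at snapshot time)

    def write(self, lba, data):
        self.ssd_log.append((lba, data))
        self.metadata[lba] = len(self.ssd_log) - 1

    def take_snapshot(self):
        # Periodic checkpoint of the metadata map to the SSD.
        self.snapshot = (dict(self.metadata), len(self.ssd_log))

    def recover(self):
        """Simulate a host restart: RAM metadata is lost, SSD contents survive."""
        meta, position = self.snapshot
        self.metadata = dict(meta)
        # Replay only the writes that landed after the snapshot was taken.
        for pos in range(position, len(self.ssd_log)):
            lba, _ = self.ssd_log[pos]
            self.metadata[lba] = pos

c = MetadataRecovery()
c.write(1, b"a")
c.take_snapshot()
c.write(2, b"b")   # this write lands after the checkpoint
c.recover()        # rebuild: load snapshot, then replay the missing write
```

Because only the post-snapshot tail of the log needs replaying, the map comes back quickly rather than requiring a full scan of the SSD – consistent with the “hot again within milliseconds” claim.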
You’ve now seen a good amount of data on the FlashSoft version 3.1 solution, but based on our offline discussion, more is definitely on the way! It’s my opinion that having server-side flash (PCIe or SSD depending on use case) will become as common as a NIC or CPU in future server design – it will just be a question of what type and how much.
The price point of the SanDisk FlashSoft solution is quite reasonable among the various ways to drive performance from a local cache, and it offers engineers and architects yet another way to help shift that IO bottleneck away from storage. This may be especially crucial in a virtual desktop situation, but could also really help reduce stress on the SAN for traditional server environments in a number of areas – such as the OS and any common applications.
Feel free to check out Enrico Signoretti‘s post “When flash and cache are synonyms!” or Ilja Coolen‘s post “Flash Cache Acceleration with Pernixdata & SANDisk FlashSoft“. I’m also adding the video presentation below, which reviewed some impressive performance testing using FlashSoft.