vSphere 5.5 Improvements Part 5 – vSphere Flash Read Cache (vFlash)

One of the semi-secret projects, originally known as vFlash, has finally seen the light of day in the form of vSphere Flash Read Cache. The new name certainly has a lot less of the “cool factor” that the vFlash moniker had, but I suppose the VMware product marketing team is paid by the word to name things. For brevity, I will just cheat and call it vFlash in this post. 🙂

First, let me set the tone on vFlash. I personally love the idea of server-side caching and have publicly stated as much, but I think vFlash is a very early release and still needs a lot of work to be competitive in the ecosystem. It reeks of “good enough.” The general premise is that you install local flash devices (SSDs) in your vSphere hosts, which can then be used either by your virtual machines as a write-through read cache or by your vSphere hosts as a memory swap cache.
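To make the write-through part concrete, here's a toy Python model (not VMware code, just an illustration, and the class name is mine): every write lands on the backing datastore synchronously and also refreshes the cached copy, so the flash device only ever holds clean data and can be lost without losing writes.

```python
class WriteThroughCache:
    """Toy model of a write-through read cache: reads are served from
    flash when possible; writes go to the backing store *and* update
    the cache, so the cache never holds dirty data."""

    def __init__(self, backing):
        self.backing = backing  # dict standing in for the datastore
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]
        self.cache[block] = data  # populate the cache on a miss
        return data

    def write(self, block, data):
        self.backing[block] = data  # the write always lands on the datastore
        self.cache[block] = data    # keep the cached copy consistent
```

The upside of this design is safety (an SSD failure can't eat your data); the downside is that writes see no acceleration at all, which is part of why I call it an early release.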

Flash Read Cache Resource and Cluster Creation

So, what’s wrong with that? Answer: the process to implement and configure vFlash is ridiculously tedious!

Each vSphere host must be configured with capacity as a Flash Read Cache Resource using the VFFS file system (for those paying attention, that’s not the same as VMFS, which isn’t optimized for flash). Once a host is configured, you can then create a Flash Read Cache Cluster from the host resources provisioned in the previous step.

An example of the cluster creation is below.


At this point, you’ve only created the logical pool of flash that will act as the foundation for the next steps.
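If you prefer the command line, the configured resources can also be inspected from the host via ESXCLI’s `storage vflash` namespace. Treat the snippet below as a sketch — these are the read-only listing operations, and exact output varies by build:

```shell
# List flash devices eligible for (or consumed by) the flash resource on this host
esxcli storage vflash device list

# List the loaded virtual flash cache modules (e.g. vfc)
esxcli storage vflash module list

# List the per-VMDK caches currently carved out of the resource
esxcli storage vflash cache list
```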

Host Swap Cache

If you’ve ever used the “Host Cache” feature in vSphere 5.1, you should be familiar with this host swap cache concept. When a vSphere host runs out of physical memory and would normally swap virtual machine memory to disk, it can first consume space on an SSD as a stop-gap measure. You have the option of continuing down this path with vSphere 5.5 via the use of Virtual Flash Host Swap Cache, which consumes space on the Virtual Flash Cluster.

The configuration is again on a per-host basis and requires enabling the cache and setting a size variable.


Virtual Machine Flash Read Cache

To me, this is the meat of vFlash: the use of your Flash Read Cache by your virtual machines as a write-through cache. In order to consume your Flash Read Cache, you must edit the settings on every virtual machine you wish to accelerate and specify how much space to consume.


Wait, what? Really? Can’t the hypervisor just figure this out for me, or at least make a best guess with some sort of LFU (Least Frequently Used) algorithm? Am I just spoiled by my PernixData install?
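For what it’s worth, the eviction policy I’m wishing for is simple enough to sketch. Here’s a minimal LFU cache in Python — purely illustrative, this is not how the hypervisor’s cache works, and the class name is mine:

```python
from collections import defaultdict


class LFUCache:
    """Minimal LFU eviction sketch: when the cache is full, evict the
    entry that has been accessed the fewest times."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.freq = defaultdict(int)  # access count per cached key

    def get(self, key):
        if key not in self.data:
            return None
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the least-frequently-used entry to make room
            victim = min(self.data, key=lambda k: self.freq[k])
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] += 1
```

Something along these lines, sized automatically against the pooled flash resource, is what I’d expect a hypervisor-level cache to do for me instead of asking for a per-VMDK reservation.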

Ugh. 🙁

vCenter Server Performance Statistics

I’ll conclude with a pretty cool feature – the fact that your vFlash Cache IOPS, latency, and throughput can all be seen via counters in the performance statistics within vCenter. You can also see even more detailed information via ESXCLI. This should be helpful if you’re trying to measure the overall effect of your flash investment on workloads.
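For reference, the detailed counters live under the same `storage vflash` namespace. Something along these lines pulls per-cache statistics — cache names and flags may differ slightly by build, so treat this as a sketch:

```shell
# Find the cache file backing each accelerated VMDK...
esxcli storage vflash cache list

# ...then pull detailed statistics (IOPS, latency, hit rate) for one of them
esxcli storage vflash cache stats get -m vfc -c <cache-name>
```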