Yup, you’ve heard correctly. PernixData’s FVP solution is now officially at version 1.0 and is generally available (GA). You might find that surprising, as the code was first released into the wild as a beta program a little over nine months ago. However, the team over at PernixData very wisely chose to leverage many excellent resources for feedback, roadmap influence, and bug testing. By resources, I mean the virtualization community as a whole – VMUG members, vExperts, and those at the VMware Partner Exchange event – the very people who are already known to be quite passionate and energetic when it comes to slick tech that promises to alleviate so many pain points.
If you’re just now hearing about PernixData FVP, I invite you to take a glance at my introductory article entitled “An Early Look at PernixData’s Server-Side Flash Virtualization Platform” written a few months back. I’m really stoked about the earth-shattering impact of server-side caching solutions. Logically, the compute layer is extremely well served by having strong locality with high-performance flash in many different use cases. While each vendor has a bit of a unique perspective on how to accomplish the safe handling of data, especially as it relates to write-back caching, the vast benefits of physical locality remain the same.
Satyam Vaghani was also on site at Storage Field Day 3 to present PernixData FVP to me and the other delegates. The recorded presentations are an excellent source of knowledge from the guy who both invented VMFS and was the VMware Storage CTO!
The Nuts and Bolts
FVP pools server-side flash, be it PCIe cards or SSDs, into a logical cluster that transparently serves read and write requests for vSphere workloads (virtual machines). The VMs are completely unaware that their storage IO path is being accelerated (this is a good thing). As data is written to the flash cluster in write-through mode, meaning data hits the flash layer but is only acknowledged after landing on the storage array, that data can later be read directly from the flash cluster. This avoids any need to hit the storage array for the same data a second time, which frees up the storage array to handle other IO requests and reduces round trip latency to the virtual machine.
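To make the write-through path a bit more concrete, here is a minimal Python sketch of the general technique. It is purely illustrative (the class and its structure are my own invention, not PernixData’s code): reads are served from flash when possible, while writes are only acknowledged after the array has them.

```python
# Illustrative write-through cache sketch. This is a toy model of the
# general technique, not PernixData's implementation.

class WriteThroughCache:
    """Flash tier that absorbs reads; writes still land on the array."""

    def __init__(self, flash, array):
        self.flash = flash  # fast local tier (any dict-like store)
        self.array = array  # slow backing array (any dict-like store)

    def write(self, block, data):
        self.flash[block] = data  # populate flash for future reads
        self.array[block] = data  # write-through: the write is only
                                  # "done" once the array has it

    def read(self, block):
        if block in self.flash:   # cache hit: the array is never touched
            return self.flash[block]
        data = self.array[block]  # cache miss: fetch from the array once...
        self.flash[block] = data  # ...then serve it from flash next time
        return data
```

The key property is in read(): once a block has been written (or read) once, subsequent reads never generate array IO, which is exactly where the round trip savings come from.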
Additionally, when used as a write-back cache, the FVP flash cluster protects itself against failure by replicating writes to different vSphere hosts (flash nodes) in the cluster. Instead of waiting for the storage array to commit the IO, the flash cluster acknowledges the write. This can provide an order of magnitude of latency reduction, or more, and has the added benefit of removing storage array latency from the virtual machine’s IO path entirely. The flash cluster can flush the data to the storage array later and “soak up” the latency that would normally penalize the virtual machine. Typically, latency to the array is caused either by disk wait states (the disks are busy) or by the head units / storage processors being busy with throughput or queued IO.
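Extending the same toy model to write-back mode shows why peer replication matters: the write is acknowledged before the array has it, so a copy must live on at least one other host in case the local flash (or the whole host) dies. Again, a hypothetical sketch, with names and structure of my own making:

```python
# Illustrative write-back cache sketch with peer replication. Again, a
# toy model of the general technique, not PernixData's implementation.

class WriteBackCache:
    """Flash tier that acks writes locally, replicates to peer hosts,
    and destages to the array in the background."""

    def __init__(self, flash, array, peers):
        self.flash = flash  # local flash tier (dict-like)
        self.array = array  # backing storage array (dict-like)
        self.peers = peers  # flash tiers on other vSphere hosts
        self.dirty = set()  # blocks not yet flushed to the array

    def write(self, block, data):
        self.flash[block] = data
        for peer in self.peers:  # replicate so a failed host cannot
            peer[block] = data   # lose a not-yet-destaged write
        self.dirty.add(block)
        # The write is acknowledged to the VM at this point, so array
        # latency never appears on the VM's IO path.

    def flush(self):
        """Background destager: drain dirty blocks to the array."""
        for block in list(self.dirty):
            self.array[block] = self.flash[block]
            self.dirty.discard(block)
```

The flush() loop is what “soaks up” array latency: it runs in the background, so a busy array slows down destaging rather than the virtual machine.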
Wahl Network Lab
While not a full-on production data center, I will share that the Wahl Network Lab has been running PernixData FVP since around March of this year. It has slowly transitioned from a place where I only put workloads I didn’t mind losing, to hosting more sensitive data, to now holding anything that I care about (production data) without fear.
In fact, my entire vCloud Director lab that I created for the TrainSignal vCloud Director 5.1 Essentials course was secretly running on top of FVP. On the back end, all of my virtual machines were served by 3x 7200 RPM 1TB SATA drives in RAID 5 – meaning only two disks’ worth of capacity for data – with a single Kingston HyperX SSD in each host. The results?
I take FVP for granted. I tend to forget that it’s even on and rarely look at it. To me, that’s the ultimate compliment. There’s no tuning, fiddling, or worrying about it. It just works – and I’m still running the beta code!
Beyond my own lab, Eric Shanks has posted some very detailed benchmark and configuration details over at The IT Hollow – definitely worth a look.
Thoughts
PernixData is leveraging the channel for those looking to sink their teeth into their server-side flash solution, with enterprise pricing starting at $7,500 per host (unlimited VMs). I think the price point may fluctuate a bit, depending on the value users get out of the deployment, and I also noticed that FVP will have pricing for SMB and service providers. I would imagine the difference for most folks will be the number of VMs per host – an SMB may opt for smaller hosts with fewer VMs, while an enterprise or large commercial account may be able to acquire some monster hosts that pack 50+ VMs on them.
A big congratulations to the PernixData team! If you can get your hands on some PCIe flash or SSDs, I’d strongly suggest giving FVP a whirl using their 60-day trial.
And for those curious, yes, I have a goofy name for my FVP cluster. See below. 🙂