Man Alive! PernixData Unleashes FVP 1.5

Manage Tab

As the name implies, the Manage tab lets me configure the flash devices and datastores (or VMs) that will take part in my flash cluster. I can add or remove flash devices as they are cycled in and out, and change which datastores or VMs should be accelerated. Keep in mind that I can configure per-VM settings for both write-through and write-back, or simply add any VM that lives on a particular block datastore. I can also blacklist a VM on a datastore to prevent it from being accelerated. The options are quite granular.

Here’s an example where I’ve already added a datastore called NAS1-PernixData. I’ve labeled the datastore this way so that I know which lab machines I’m accelerating, but it’s certainly not required – you can use any existing block-storage-backed datastore. If I wanted to add additional VMs, I’d just click the Add VMs… link, choose which ones I want to accelerate, and then pick the write policy.

Accelerating a new VM workload

Because I’ve chosen Write Back above, I can also pick a write redundancy value: essentially the local copy plus 0, 1, or 2 additional copies. Because I value my data, I’ll go ahead and pick Local flash and 2 network flash devices. Note that a network device is a flash device located on another host; it protects the VM from a host failure, whereas relying solely on local flash devices would make the local host a single point of failure.
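To make the redundancy policy concrete, here’s a minimal sketch of how a write-back acknowledgment with N network copies might behave. This is my own hypothetical illustration – the class, host names, and method signatures are invented for clarity and are not FVP’s actual implementation:

```python
class WriteBackCache:
    """Hypothetical sketch: a write is acknowledged only after the local
    flash device and the configured number of peer copies hold it."""

    def __init__(self, local_host, peers, redundancy):
        # redundancy: 0, 1, or 2 network flash copies (the write redundancy policy)
        if redundancy > len(peers):
            raise ValueError("not enough peer hosts for requested redundancy")
        self.local_host = local_host
        self.write_back_peers = peers[:redundancy]
        self.flash = {local_host: [], **{p: [] for p in self.write_back_peers}}
        self.dirty = []  # writes acknowledged on flash but not yet on the array

    def write(self, block):
        # commit to local flash plus each peer's flash before acknowledging
        for host in [self.local_host, *self.write_back_peers]:
            self.flash[host].append(block)
        self.dirty.append(block)  # the array flush happens asynchronously
        return len(self.write_back_peers) + 1  # total copies held

cache = WriteBackCache("esx01", peers=["esx02", "esx03"], redundancy=2)
assert cache.write("block-42") == 3  # local + 2 network flash copies
```

The key idea is that the acknowledgment waits on all N+1 flash copies, so losing any single host never loses the only copy of an unflushed write.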

Other Web Client Objects

FVP isn’t limited to just the cluster options. Details on your flash cluster are also included in the vSphere host, cluster, and VM views. At the cluster level, you get an overall summary of the number of FVP clusters, devices, and flash capacity available as shown here:

FVP information on a vSphere cluster

Drilling down to a VM gives you real-time details on IOPS, throughput, latency, and flash hit rate:

A real time look at the FVP performance for a VM

Choice of FVP Cluster Networks

FVP 1.5 offers further granularity when selecting the flash cluster network. This network is used by the flash cluster to provide write redundancy and to let VMs pull data from a network flash device, which is often an order of magnitude faster than a higher-latency storage array. FVP defaults to using your vMotion network, but if that doesn’t make sense for your environment, FVP 1.5 offers the ability to select a unique network on each vSphere host, as shown here:

Selecting new networks for your FVP flash cluster

You can also use a single network for all hosts, if one exists on every host, or return to the default of using the vMotion network:

Choices for your network configuration

Having the flexibility to change the acceleration traffic network is key to enabling some really nifty options for converged infrastructure or special upstream network environments.

Peer Node Transparency for Write-Back

Let’s dig deeper into a feature I touched on a little earlier in this write-up. FVP 1.5 lets you easily see which hosts are providing local or remote flash devices for your VM workload. The local host is shown as the Primary, while remote hosts are shown as Write Back Peers. In my 3-host lab, every host is either a Primary or a Write Back Peer, but in environments with 4 or more hosts this will not be the case.

Primary and Peer hosts for this VM workload

There are also two new states revealed with FVP 1.5 that we’ll cover in the next section.

New States for Failure Indication

Failure happens – vSphere hosts can purple screen, flash devices fail, and someone can accidentally spill Jolt cola all over your servers (remember Jolt cola?). FVP has two new states for revealing what’s going on behind the scenes when a failure occurs: Sync Required and Sync in Progress.

Having a VM in write-back mode means that data is acknowledged at the flash pool layer before it is ultimately flushed down to the storage array. Thus, the storage array is always a little bit behind on any current write activity. In a situation where all of the hosts that contain write-back data have failed – how many that takes depends on your write redundancy policy of 0, 1, or 2 peer hosts – FVP will place the VM in a Sync Required status and prevent vSphere High Availability (HA) from powering on the VM. Otherwise, the VM would come up out of sync with the latest writes that were issued to FVP.

When one or more hosts containing a flash device that holds the missing writes come back online, FVP switches the VM to a Sync in Progress status and syncs the data back to the storage array. This status includes an ETA for when the sync will complete. Once the writes have been synchronized to the storage array, FVP will allow vSphere HA to power on the VM.
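The failure-handling flow above can be sketched as a small state machine. The state names mirror what FVP exposes, but the class, methods, and host names are my own hypothetical illustration, not the product’s internals:

```python
class VMFlashState:
    """Sketch of the write-back failure states described above."""

    def __init__(self, dirty_writes, holding_hosts):
        self.dirty = dirty_writes          # writes acked on flash, not yet on the array
        self.holders = set(holding_hosts)  # hosts whose flash holds those writes
        self.failed = set()
        self.status = "Healthy"

    def host_failed(self, host):
        self.failed.add(host)
        if self.holders <= self.failed and self.dirty:
            # every copy of the unflushed writes is offline:
            # block HA power-on until the data can be synced
            self.status = "Sync Required"

    def host_recovered(self, host):
        self.failed.discard(host)
        if host in self.holders and self.status == "Sync Required":
            self.status = "Sync in Progress"
            self.dirty = []          # flush the pending writes to the storage array
            self.status = "Healthy"  # HA may now power on the VM

vm = VMFlashState(dirty_writes=["w1"], holding_hosts={"esx01", "esx02"})
vm.host_failed("esx01")
print(vm.status)  # Healthy: esx02 still holds a copy
vm.host_failed("esx02")
print(vm.status)  # Sync Required: no surviving copy of the unflushed writes
vm.host_recovered("esx02")
print(vm.status)  # Healthy again once the sync completes
```

Note that losing only one of the two holders leaves the VM healthy – this is exactly why the write redundancy policy matters.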

Thoughts

I’m really quite impressed with all of the new features included in PernixData’s FVP 1.5 – especially the dedication to integrating with the vSphere 5.5 Web Client. If you want to try out FVP on your own, they offer a free trial so you can kick the tires in your lab or work environment.