SimpliVity Cranks The Hyperconverged OmniCube Up Another Notch

Hyperconverged is a rather hot space in the data center market, with a healthy amount of offerings for the SMB, commercial, or enterprise consumer to choose from. The term “hyperconverged” refers to a modular platform that provides the four major food groups – storage, network, compute, and hypervisor – rolled up into a single, scalable platform. The ultimate goal is to avoid needing silos of skills to design, build, and maintain various segments of the virtualization stack, while at the same time empowering the business to support a more on-demand growth model.

I first talked about SimpliVity back at VMware Partner Exchange, getting a really great in-depth look at the product in a very hands-on manner. The company has some exciting tech that moves well beyond the perspective of blending hardware together. Although they have the speeds and feeds safely secured, the real DNA magic of their OmniStack (the IP bits under the covers) is focused on global federation, data optimization, and physical abstraction. I recently caught up with the team to discuss their 2nd generation release, which hit the wire on August 19th.

Managing Logical Resources

As you might expect from a hyperconverged offering, the OmniCube solution logically pools the storage resources to present a single pool of capacity to the virtual workloads. The back end physical disks perform the necessary heavy lifting to ensure that the data is properly replicated to avoid a single point of failure, and as such, the solution starts at a minimum of two OmniCube servers. Each OmniCube server contains a small service VM, known as the SVT (Virtual Controller), that handles the secret sauce and presents the global pool of storage. The SimpliVity solution integrates directly into the vSphere client as a plugin and is managed by means of a tab, just as you would configure other resources in vCenter.

omnicube-federated

Looking at the screenshot above, you’ll notice that each vSphere data center, along with a hook into Amazon AWS, is represented as part of the overall Federation of sites. There’s no real focus on physical constructs, like traditional LUNs, because the entire hardware layer has been abstracted away and is handled by the OmniStack solution. Focus is placed on more important matters, such as the throughput and replication between sites, along with the various policies being used to protect virtual machines.

SimpliVity’s Data Center Management

Beyond looking at the overall Federation, there’s also a really sharp set of data available at the Data Center level. In the example below, we took a look at the Barcelona Data Center and can see four very important metrics: logical capacity, physical capacity, virtual machine details, and performance.

1024-DC[2]

One of the key points that SimpliVity made clear is that the solution was natively designed for in-line deduplication at all levels: DRAM, Flash, HDD, and Cloud targets. There is no post-process job to run to squeeze out those duplicated blocks. The feature is just a part of the SimpliVity underlying data architecture (OmniStack).

The OmniCube Accelerator

omnicube-dve

Each OmniCube server is equipped with a PCIe module, called the OmniCube Accelerator, for handling in-line deduplication and compression – the folks at SimpliVity refer to this as the Data Virtualization Engine (DVE). In effect, the heavy lifting is offloaded to a dedicated piece of hardware, so there’s no need to carve resources away from the server’s CPU or RAM.

When a write is issued by a virtual machine to the NFS datastore, the SVT captures the write and ensures that it is sent to both a local and a remote OmniCube Accelerator. This protects the data from any single point of failure and allows the Accelerator to acknowledge the write back to the virtual machine. The write is then serialized and coalesced by the Accelerator before being written to disk, which solves the I/O blender issue (where many virtual machines all hit disk in a random manner). Another huge advantage of using a PCIe flash device is its much longer life expectancy compared to an SSD.
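The write path above can be sketched conceptually. This is a toy Python model under my own assumptions – all class and function names are illustrative, not SimpliVity’s actual implementation:

```python
# Toy model of the described write path: mirror each guest write to a
# local and a remote accelerator, acknowledge, then serialize on flush.
from collections import deque

class Accelerator:
    """Stand-in for the PCIe OmniCube Accelerator (hypothetical names)."""
    def __init__(self):
        self.nvram = deque()      # protected staging area for inbound writes
        self.disk = []            # backing spindles

    def stage(self, write):
        self.nvram.append(write)  # the write is now power-protected

    def flush(self):
        # Coalesce the random guest writes into one sequential run,
        # sidestepping the "I/O blender" of many VMs hitting disk at once.
        batch = sorted(self.nvram, key=lambda w: w["offset"])
        self.disk.extend(batch)
        self.nvram.clear()

def handle_guest_write(write, local: Accelerator, remote: Accelerator) -> str:
    local.stage(write)            # copy 1: local accelerator
    remote.stage(write)           # copy 2: remote OmniCube's accelerator
    return "ACK"                  # both copies safe; acknowledge the VM

local, remote = Accelerator(), Accelerator()
for off in (4096, 0, 8192):      # out-of-order guest writes
    handle_guest_write({"offset": off, "data": b"..."}, local, remote)
local.flush()
print([w["offset"] for w in local.disk])  # sequentialized: [0, 4096, 8192]
```

The key ordering detail is that the acknowledgment happens once two accelerators hold the write, not after it lands on disk.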

Deduplication

Deduplication information is spread across all nodes in the Federation. This means you get the savings of deduplication across all of your nodes, at a 4K or 8K block size, regardless of physical locality. It also feeds into superior copy and backup policy efficiency: there is no need for a traditional pointer-based snapshot system – the data is already deduplicated – so each copy or backup can stand on its own without pointers back to a historical parent object or block.
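The idea boils down to content addressing: fingerprint each fixed-size block and store any given fingerprint only once. A toy Python sketch, with SHA-256 standing in for whatever fingerprinting OmniStack actually uses and all names being my own:

```python
import hashlib

BLOCK_SIZE = 4096  # 4K blocks, per the block sizes mentioned above

class DedupStore:
    """Toy content-addressed store: identical blocks are kept once."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> block bytes (physical storage)
        self.refs = {}     # fingerprint -> reference count

    def write(self, data: bytes) -> list:
        """Split data into fixed-size blocks, storing only net-new ones.
        Returns the fingerprints that describe the object."""
        fingerprints = []
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:          # net-new block: store it
                self.blocks[fp] = block
            self.refs[fp] = self.refs.get(fp, 0) + 1
            fingerprints.append(fp)
        return fingerprints

store = DedupStore()
vm_a = store.write(b"A" * 8192)   # two identical 4K blocks -> one stored
vm_b = store.write(b"A" * 4096)   # a duplicate of an existing block
print(len(store.blocks))          # 1 physical block backs both "VMs"
```

A backup in this model is just another list of fingerprints, which is why no parent-pointer snapshot chain is needed.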

Each server is equipped with 4 SSDs and 8 HDDs

Deduplication provides a lot of benefits beyond just saving capacity. It reduces the amount of data that needs to be written to disk, read from disk, or sent across WAN links for site-to-site replication, and it can potentially increase performance through cached data (think N virtual machines all needing a common set of deduplicated blocks in cache). It also lowers your expenses if you decide to set up an OmniCube target in the cloud, such as Amazon S3, because per-GB costs in that space add up quickly.

Global Federation

Now that I had spent a reasonable amount of time poking around the OmniCube solution at a single site, it was time to expand into a multi-site solution. For this demo, we had two destinations available: San Francisco and London. Because the Federation of OmniCubes acts as a single logical entity, the idea of protecting virtual machines for backups or Disaster Recovery (DR) is crazy kinds of simple. The gist is this:

  1. You create a policy rule to determine how often a VM is backed up, how long to retain the backup, and where to send it
  2. You then apply those policies to any VMs you wish
  3. The policy enforces the rules

Backups

Below is an example where we are creating a backup policy rule called “Silver” that will take backups of a VM every 5 minutes, retain them for 4 hours, and store them in San Francisco. The backups can either be crash consistent or application consistent using VMware Tools quiescence: you choose what makes sense for your workload. You could have other rules called “Gold” for longer retention or storage over in London, or perhaps “Bronze” for a lower frequency (such as every hour).

omnicube-backup-policy
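Conceptually, a rule like “Silver” is just a small data structure: frequency, retention, destination, and consistency type. A minimal Python sketch – the field names and the “Bronze” retention value are my own illustrative choices, not SimpliVity’s schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupRule:
    frequency_min: int    # how often the VM is backed up
    retention_min: int    # how long each backup is kept
    destination: str      # which Federation site stores the backup
    app_consistent: bool  # quiesce via VMware Tools, or crash consistent

policies = {
    # The "Silver" values come from the example above; "Bronze" is made up.
    "Silver": BackupRule(frequency_min=5, retention_min=4 * 60,
                         destination="San Francisco", app_consistent=False),
    "Bronze": BackupRule(frequency_min=60, retention_min=24 * 60,
                         destination="San Francisco", app_consistent=False),
}

# Step 2 of the gist: apply a rule to any VM you wish.
vm_policy = {"web-01": policies["Silver"]}
print(vm_policy["web-01"].destination)  # -> San Francisco
```

Step 3 – enforcement – would then just be a scheduler firing a backup every `frequency_min` minutes and expiring copies past `retention_min`.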

Migrations

You can also leverage the OmniStack technology to migrate an offline VM over to another site. It’s really just a matter of choosing the destination data center and storage.

omnicube-move-vm1

All of the SimpliVity tasks show up in the Tasks list. Below you can see the VM moving over to another data center, along with a history list of all the backups and the data centers they live on. Because only the net-new deduplicated blocks must traverse the WAN, the operation is far lighter on throughput than a full copy would be, and the task completes rather quickly. Of course, keep in mind that this is a lab / demo environment and the change rate is most likely lower than in your production server virtualization environment. :)
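That WAN efficiency falls out of deduplication naturally: if each site tracks which block fingerprints it already holds, a migration only ships the blocks the destination is missing. A hedged sketch (my own names, not the actual replication protocol):

```python
def blocks_to_send(vm_fingerprints: list, destination_has: set) -> list:
    """Return only the fingerprints the destination site does not hold;
    everything else is already present and needs no WAN transfer."""
    return [fp for fp in vm_fingerprints if fp not in destination_has]

vm = ["aa", "bb", "cc", "aa"]       # a VM as an ordered list of fingerprints
london = {"aa", "cc"}               # blocks London already stores
print(blocks_to_send(vm, london))   # only ["bb"] traverses the WAN
```

The lower the change rate, the smaller that missing-block set – which is exactly why the lab demo completes so quickly.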

omnicube-move-vm

New Models

SimpliVity has also revved up their product offering to give three different choices, with a focus on use case:

  • CN2000 – intended for smaller environments, such as a SMB or ROBO (remote office / branch office).
  • CN3000 – this is the flagship model that has always been offered, but it has been beefed up to keep pace with the modern hardware options that exist in the market today.
  • CN5000 – the new big boy on the block, this “ultra high end” model is meant to house the monster VMs and VBCA (virtualized business critical applications) that keep the company running.

The full sheet with technical specs is below. Note that all of the products are 2U in size and come with all of the OmniStack goodness baked in – there’s none of the runaround with a la carte feature purchases.

omnicube-offerings

Thoughts

SimpliVity, which was founded back in 2009, looks to be on fire from both a technical hiring perspective (they picked up my good friend Gabriel Chapman and many others) and from a technology perspective. I’m especially impressed with their ability to abstract and federate the hardware and data bits across the solution, as much of their IP can help eliminate the need for a hodgepodge of different products that are commonly used to solve a variety of problems.

This becomes especially apparent in a ROBO type use case, as bandwidth and hardware footprint space are traditionally quite limited – making a small, modular, and WAN-friendly solution quite appetizing. Imagine having a handful of satellite offices to supply services to – such as the standard active directory and application servers – while also focusing on a backup and DR strategy. These are typically lights out operations, meaning no staff is on site, and any troubleshooting and remediation usually falls to whichever business person happens to be near the server closet and holds the keys – or means a long trip for you. Simplifying the hardware footprint and potentially eliminating the need for WAN accelerators, a SAN infrastructure (either FC or iSCSI), and backup appliances means less complexity and thus less to worry about.

It also has ramifications for support, maintenance, and hardware refresh cycles. There is no need to co-term a pile of equipment from either a maintenance contract or support perspective. At scale, this can feed into a pretty attractive reduction of overhead around keeping track of what has been deployed and making sure that it’s all properly under support. The OmniCube equipment would be tethered into the vSphere environment – this would become a point for management control, policy distribution, and monitoring. I think there’s a lot of value in having such a holistic look at the organization, especially when the ROBO environment is folded into the same control plane as the data center, which further reduces skill silos and black holes of control.