I’ve always had a fascination with hyperconverged infrastructure, as it seems to target a lot of pain points in today’s data center. The term “hyperconverged” loosely describes a platform that tucks all of the various components (storage, server, network, hypervisor) into a single box. Traditionally, the goal is to eliminate as much complexity as possible while allowing for a more scale-out style of architecture.
Note: I’ve written about the idea of hyperconverged infrastructure as an Ambassador on Thwack, and it generated some really interesting commentary.
While at VMware Partner Exchange, I managed to lock eyes with an interesting offering in this space from a company named SimpliVity. They had a sharp-looking 2U rack mount server, named the OmniCube, in a variety of site configurations for demonstration. Knowing that a brief demo in the crowded Solutions Exchange wouldn’t be enough to satisfy my hunger for a deep dive, I scheduled a more “hands on” demonstration to get the skinny on what makes this offering unique and desirable for virtualization administrators.
Managing Logical Resources
As you might expect from a hyperconverged offering, the OmniCube solution logically pools the storage resources to present a single pool of capacity to the virtual workloads. The back-end physical disks perform the necessary heavy lifting to ensure that the data is properly replicated to avoid a single point of failure; as such, the solution starts at a minimum of two OmniCube servers. Each OmniCube server contains a small service VM, called the Virtual Controller, that handles the secret sauce. The SimpliVity solution integrates directly into the vSphere client as a plugin and is essentially managed by means of a tab, much as you would configure other resources in vCenter.
If you look at the screenshot above, you’ll notice that the focus is on total logical capacity and consumption at a per-VM level. The goal is to get away from managing individual servers and instead focus on the Federation of servers. While it is possible to get host-level statistics, the idea is that you shouldn’t have to.
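To put some rough numbers on the pooling concept: with data replicated across nodes to survive a failure, usable capacity is roughly the raw total divided by the replication factor. Here’s a quick back-of-napkin sketch in Python (the 2x replication factor and node sizes are my assumptions for illustration, not SimpliVity specifics):

```python
# Illustrative only: assumes a simple 2x replication scheme, where every
# block lives on two OmniCube nodes to avoid a single point of failure.
REPLICATION_FACTOR = 2  # assumed for this sketch; actual protection may differ

def usable_capacity_tb(node_raw_tb: float, node_count: int) -> float:
    """Usable logical capacity of a pooled federation under N-way replication."""
    if node_count < 2:
        raise ValueError("A federation needs at least two nodes for redundancy.")
    return (node_raw_tb * node_count) / REPLICATION_FACTOR

# Example: two nodes with 10 TB raw each -> 10 TB usable,
# before deduplication and compression work their magic.
print(usable_capacity_tb(10, 2))  # 10.0
```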
SimpliVity’s Underlying Data Architecture
You may also have noticed the statistics for deduplication, compression, and efficiency. One of the key points SimpliVity made clear is that the solution was natively designed for inline deduplication at all levels: DRAM, flash, HDD, and cloud targets. There is no post-process job to run to squeeze out duplicated blocks; the feature is simply part of SimpliVity’s underlying data architecture, called OmniStack.
This is one of the areas I find most interesting about the solution. Each OmniCube server is equipped with a PCIe module, called the OmniCube Accelerator, for handling inline deduplication and compression. In effect, the heavy lifting is offloaded to a dedicated piece of hardware, which avoids stealing resources from the server’s CPU or RAM. Additionally, the deduplication metadata is spread across all nodes in the Federation. This means you get deduplication savings across all of your nodes, regardless of physical locality, at a 4K or 8K block size.
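If you want a mental model for how inline deduplication works in general, here’s a toy Python sketch of the technique: fingerprint each fixed-size block before it ever hits disk, and only write blocks you haven’t seen. To be clear, this is my simplification running on the CPU, not OmniStack’s actual implementation on the Accelerator card:

```python
import hashlib
import os

BLOCK_SIZE = 4096  # 4K blocks, matching the granularity mentioned above

# Fingerprint index: fingerprint -> stored block. In the real product this
# metadata is distributed across the Federation; a dict keeps the idea simple.
block_index = {}

def write_inline_deduped(data: bytes) -> int:
    """Split a write into fixed-size blocks; store only blocks not yet seen.

    Returns the number of blocks that actually had to be written.
    """
    new_blocks = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in block_index:  # net-new data: write it
            block_index[fingerprint] = block
            new_blocks += 1
        # Duplicate data: record a reference only; nothing new hits disk.
    return new_blocks

# Writing the same 1 MB payload twice: the second write stores nothing new.
payload = os.urandom(1024 * 1024)     # 256 (almost certainly) unique 4K blocks
print(write_inline_deduped(payload))  # 256
print(write_inline_deduped(payload))  # 0
```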

Deduplication provides a lot of benefits beyond just saving capacity. It reduces the amount of data that must be written to disk, read from disk, and sent across WAN links for site-to-site replication, and it can potentially improve performance for cached data (think N virtual machines all needing a common set of deduplicated blocks in cache). It also lowers your expenses if you decide to set up an OmniCube target in the cloud, such as Amazon EC2, where the cost per GB is quite steep.
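A quick back-of-napkin calculation shows why this matters for a cloud target (the 10:1 efficiency ratio and the per-GB price here are hypothetical, purely for illustration):

```python
# Hypothetical numbers for illustration only.
logical_gb = 1000          # what the VMs think they wrote
efficiency_ratio = 10      # assumed combined dedupe + compression ratio
unique_gb = logical_gb / efficiency_ratio

price_per_gb = 0.10        # assumed monthly $/GB at a cloud target
print(f"Stored: {unique_gb:.0f} GB -> ${unique_gb * price_per_gb:.2f}/mo "
      f"instead of ${logical_gb * price_per_gb:.2f}/mo")
# Stored: 100 GB -> $10.00/mo instead of $100.00/mo
```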
Global Federation
Now that I had spent a reasonable amount of time poking around the OmniCube solution at a single site, it was time to expand into a multi-site solution. For this demo, we had two destinations available: San Francisco and London. Because the Federation of OmniCubes acts like a single logical entity, the idea of protecting virtual machines for backups or Disaster Recovery (DR) is crazy simple. The gist is this:
- You create a policy rule to determine how often a VM is backed up, how long to retain the backup, and where to send it
- You then apply those policies to any VMs you wish
- The policy enforces the rules
Backups
Below is an example where we create a backup policy rule called “Silver” that takes a backup of a VM every 5 minutes, retains the backups for 4 hours, and stores them in San Francisco. The backups can be either crash consistent or application consistent using VMware Tools quiescence: you choose what makes sense for your workload. You could have other rules called “Gold” for longer retention or for storage over in London, or perhaps “Bronze” for a lower frequency (such as hourly).
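If it helps to think of a policy in data terms, a rule like “Silver” boils down to a handful of fields. Here’s a hypothetical sketch; the field names are mine, not SimpliVity’s API:

```python
from dataclasses import dataclass

@dataclass
class BackupRule:
    """One rule in a backup policy: how often, how long, and where."""
    name: str
    frequency_minutes: int        # how often a backup is taken
    retention_minutes: int        # how long each backup is kept
    destination: str              # which data center stores the backup
    app_consistent: bool = False  # quiesce via VMware Tools if True

# The "Silver" rule described above, plus hypothetical Gold/Bronze variants.
silver = BackupRule("Silver", frequency_minutes=5, retention_minutes=4 * 60,
                    destination="San Francisco")
gold = BackupRule("Gold", frequency_minutes=5, retention_minutes=7 * 24 * 60,
                  destination="London", app_consistent=True)
bronze = BackupRule("Bronze", frequency_minutes=60, retention_minutes=4 * 60,
                    destination="San Francisco")
```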
Migrations
You can also leverage the OmniStack technology to migrate an offline VM over to another site. It’s really just a matter of choosing the destination data center and storage.
All of the SimpliVity tasks show up in the Tasks list. Below you can see the VM moving over to another data center, along with a history of all the backups and which data center they live in. Because only the net-new deduplicated blocks must traverse the WAN, the operation consumes far less bandwidth than a full copy would, and the task completes rather quickly. Of course, keep in mind that this is a lab/demo environment, and the change rate is most likely lower than in your production server virtualization environment. 🙂
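Conceptually, “only net-new blocks traverse the WAN” means the source ships only the fingerprints the destination doesn’t already hold. A rough sketch of that idea (my simplification, not the actual OmniStack protocol):

```python
def blocks_to_send(source_fingerprints: set, destination_fingerprints: set) -> set:
    """Fingerprints present at the source but missing at the destination."""
    return source_fingerprints - destination_fingerprints

# If the destination already holds most of the VM's blocks (say, from prior
# backups or sibling VMs), the migration only moves the difference.
src = {"a1", "b2", "c3", "d4"}
dst = {"a1", "b2", "c3"}
print(blocks_to_send(src, dst))  # {'d4'} -> only one block crosses the WAN
```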
Thoughts
SimpliVity was very open about their product and answered all of my questions in great detail. While I may have stumbled upon a few items that will be revealed in the future, I think a very large chunk of what a virtualization enterprise needs is baked into the OmniCube offering today.
So – what do you think? Is SimpliVity on the right track? Have you worked with their solution and want to share your experiences?