VMware Announces Software Defined Infrastructure with EVO:RAIL

Wow, the world of converged infrastructure sure is on fire. It seems that everyone is looking to stuff all of the data center food groups into an appliance-like node for simpler data center architecture models. Further validating this idea, VMware has entered the ring with their hyper-converged infrastructure offering called EVO (check out the landing page, here). For a moment, however, let’s take a step back and look at the various tiers of convergence that are available to data center customers today:

  1. Custom Builds – These are the traditional, build-your-own style data centers, in which engineers or architects in the organization take the silo components – network, storage, compute, and management – and cobble them together using a combination of experience and know-how.
  2. Converged Infrastructure – Offerings that combine the components from Custom Builds into an offering, either as a product (such as Vblock) or as a reference architecture (such as FlexPod). The benefit is that most of the architecture challenges have been solved for the consumer, and in the case of Vblock, is offered directly from the factory as a set of cabinets fully racked and stacked.
  3. Hyper-Converged Infrastructure – This is usually a clean-slate approach that uses COTS (commodity off-the-shelf) components to build a node that contains the compute power, storage, and upstream network interfaces. Nodes are pieced together into a seamless fabric using dedicated or shared network interfaces, varying from InfiniBand to traditional Ethernet, to look and feel like a single, logical entity. There is still a need to plumb these nodes into the physical or underlay network, typically a top-of-rack (ToR) or end-of-row (EoR) grid that connects up into a leaf-spine or three-tier topology.

Or, in VMware’s words, see below:

Eenie meenie miney mo, catch a data center by its toe.

EVO looks to play in that third space, the Hyper-Converged Infrastructure offering, which eliminates a vast number of architecture decisions around how to build a pool of resources for applications to consume. Incumbents in this space include Nutanix’s Virtual Computing Platform and SimpliVity’s OmniCube. By the way, both companies I just mentioned have presented at Tech Field Day events in the past.

Exposing the EVO


Get ready for some new acronyms. I’ve already spilled the beans on Software Defined Infrastructure in the headline, but now we also have the Hyper-Converged Infrastructure Appliance, or HCIA. Under the covers, VMware has taken a COTS approach and layered on VMware vSphere alongside Virtual SAN (VSAN). This provides all of the software bits necessary to put together an appliance offering that VMware plans to support end-to-end with a “one support call” model. EVO is sold as a single SKU – this includes the hardware, software, and SnS (support/maintenance) – to make the procurement model less painful. We all know procurement will never be completely painless for most enterprises. 😉

Each HCIA (again, that’s the Hyper-Converged Infrastructure Appliance for those playing at home) is a 2U chassis with 4 nodes inside. The version 1.0 release will allow for 4 HCIAs to be put together, resulting in 8 RU of rack space and 16 nodes worth of EVO. That’s half the size allowed by the vSphere 5.5 cluster maximum, so I would imagine that the number will grow at some point beyond the 1.0 release. At least, let’s hope so.
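To put those 1.0 scaling limits in perspective, here is a quick back-of-the-napkin sketch in Python. The per-appliance figures come straight from the description above, and the 32-node figure is the vSphere 5.5 cluster maximum; the variable names are mine.

```python
# Back-of-the-napkin math for EVO:RAIL 1.0 scaling limits.
NODES_PER_HCIA = 4            # four nodes in each appliance
RU_PER_HCIA = 2               # each appliance is a 2U chassis
MAX_HCIAS_V1 = 4              # version 1.0 limit
VSPHERE_55_CLUSTER_MAX = 32   # maximum hosts per vSphere 5.5 cluster

max_nodes = NODES_PER_HCIA * MAX_HCIAS_V1   # 16 nodes
rack_space = RU_PER_HCIA * MAX_HCIAS_V1     # 8 RU
print(f"{max_nodes} nodes in {rack_space} RU "
      f"({max_nodes / VSPHERE_55_CLUSTER_MAX:.0%} of the cluster maximum)")
```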

EVO Use Cases

If you’re curious what VMware is targeting for EVO, it boils down to just about everything. Here’s a slide that could have easily been renamed to “all your base are belong to us” and not been far off the mark.

All your data center are belong to us.

I did notice that business critical applications, which are typically big SQL / Oracle / SAP / BI and so on, are missing from the list. This is an area that Nutanix often touts as an advantage with their 8000 series nodes. I’d say there’s still a gap here, but one that may not make a huge difference from day one for most customers depending on their targeted deployment strategy and road map.

Simplicity with the EVO:RAIL Engine

Although the components within EVO:RAIL are the vSphere bits you know and love, there’s an additional piece that turns EVO:RAIL into a product: the EVO:RAIL Engine. It’s essentially a front-end interface into the product and uses … HTML5! At long last, no more Flex or Java required – thank you, VMware. 🙂

You run through a wizard of sorts to stand up the solution using one of three options:

  1. Just Go! – With Just Go!, EVO:RAIL automatically configures the IP addresses and hostnames that you specified when you ordered EVO:RAIL. Configure your ToR switch and click the Just Go! button. All you have to create are two passwords.
  2. Customize Me! – When you customize EVO:RAIL, all required configuration parameters are supplied for you by default, except for the ESXi and vCenter Server passwords. Customize Me! lets you easily change the defaults.
  3. Upload Configuration File – With Upload Configuration File, you select and upload an existing JSON configuration file.

The wizard asks questions about the host names, networking configuration (VLANs, IPs, etc.), what passwords to use, and other things that I’m sure you can imagine being required. In fact, here’s a screenshot of the wizard, or you can watch one of the several videos that VMware has cooked up.

EVO:RAIL likes sunsets, long walks on the beach, and complex passwords
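For the Upload Configuration File path, the real JSON schema is defined by VMware and isn’t shown in this post, but conceptually it captures the same answers the wizard asks for. Here’s a purely hypothetical sketch in Python – every field name below is an assumption for illustration, not the actual EVO:RAIL file format.

```python
import json

# Hypothetical EVO:RAIL-style configuration. Field names are illustrative
# only -- the real schema is defined by VMware and is not reproduced here.
config = {
    "hosts": [
        {"hostname": f"esxi-{i:02d}", "mgmt_ip": f"192.168.10.{10 + i}"}
        for i in range(1, 5)  # four nodes per appliance
    ],
    "networks": {
        "management": {"vlan": 10, "netmask": "255.255.255.0", "gateway": "192.168.10.1"},
        "vmotion":    {"vlan": 20, "ip_pool": ["192.168.20.11", "192.168.20.14"]},
        "vsan":       {"vlan": 30, "ip_pool": ["192.168.30.11", "192.168.30.14"]},
    },
    "vcenter": {"hostname": "vcenter01", "ip": "192.168.10.5"},
    # ESXi and vCenter Server passwords would be supplied here or at deployment time.
}

print(json.dumps(config, indent=2))
```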

After completing the wizard, you get a snazzy little build process indicator that shows a high level workflow around what the engine is doing.

Building your SDDC, including vCenter, with a friendly wizard.

Once completed, you get a very happy completion screen that lets you log into EVO:RAIL’s management interface.

Excited EVO is excited.

Once logged in, you are presented with a dashboard that contains data on the virtual machines, the health of the system, configuration items, various tasks, and the ability to build more virtual machines. Notice that the configuration screen also includes build versions of vCenter, ESXi, and EVO:RAIL, along with the ability to license the product and push offline updates (important for those without an internet-facing connection) to the EVO.

Very simple management over deployed versions of vSphere and EVO

Building virtual machines is also done through EVO’s interface, with more friendly graphics to walk you through the process.

Naming, size, networking, and security settings are all available when building a VM

Folks worried about day-to-day operations of the EVO:RAIL can use the Health tab to see how all of the nodes within the HCIAs are doing.

A holistic look at the nodes within the HCIAs of EVO:RAIL

Hardware Components

The EVO:RAIL comes with pre-defined hardware components listed below:

  • Per HCIA
    • 24 hot-plug 2.5″ drives
    • Dual PSUs ~1600W
  • Per Node
    • Dual-socket Intel E5-2620 v2 (6 cores each)
    • Up to 192 GB of RAM
    • 1 x PCI-E expansion slot: disk controller with pass-through capabilities (Virtual SAN requirement)
    • 1 x 146 GB SAS 10K-RPM HDD or 32 GB SATADOM (ESXi boot)
    • 1 x SSD up to 400 GB (Virtual SAN requirement for read/write cache)
    • 3 x 1.2 TB SAS 10K-RPM HDD (Virtual SAN data store)
    • 2 x Network – 10 GbE RJ45 or SFP+
    • 1 x Management RJ45 – 100/1000 NIC
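For a rough sense of raw Virtual SAN capacity, here’s a quick sketch using the per-node disk counts above. These are raw figures only; usable capacity depends on the Virtual SAN failures-to-tolerate policy and overhead, which aren’t factored in here.

```python
# Rough raw-capacity math per HCIA, from the per-node disk counts above.
NODES_PER_HCIA = 4
HDD_PER_NODE, HDD_TB = 3, 1.2   # 3 x 1.2 TB SAS HDDs per node (Virtual SAN data store)
SSD_PER_NODE, SSD_TB = 1, 0.4   # up to 400 GB SSD per node (read/write cache)

raw_capacity_tb = NODES_PER_HCIA * HDD_PER_NODE * HDD_TB   # 14.4 TB raw
raw_cache_tb = NODES_PER_HCIA * SSD_PER_NODE * SSD_TB      # 1.6 TB cache
print(f"Per appliance: {raw_capacity_tb:.1f} TB raw capacity, "
      f"{raw_cache_tb:.1f} TB flash cache (before Virtual SAN policy overhead)")
```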

Adding a new HCIA involves cabling the appliance and then letting EVO:RAIL detect and connect. The rest is handled for you. You can only add one appliance at a time in release version 1.0.

Green checkmarks are good, right?

Network Layout

The virtual switch is configured for 2 vmnics (vmnic0 and vmnic1) with pretty much all traffic using vmnic0. The only thing that uses vmnic1 is the VSAN traffic. Specifically:

  • Management – Active / Standby
  • vMotion – Active / Standby
  • Virtual SAN – Standby / Active
  • EVO:RAIL Management – Active / Standby
  • VMs – Active / Standby

Here’s another way to look at it:

Dark blue is Active, light blue is Standby
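If it helps to see the same layout as data, here is a small sketch in Python. The port group names are paraphrased from the list above, not the exact object names EVO:RAIL creates.

```python
# Uplink layout per the list above; port group names are paraphrased,
# not the exact names EVO:RAIL creates on the virtual switch.
ACTIVE, STANDBY = "active", "standby"

uplinks = {
    "Management":          {"vmnic0": ACTIVE,  "vmnic1": STANDBY},
    "vMotion":             {"vmnic0": ACTIVE,  "vmnic1": STANDBY},
    "Virtual SAN":         {"vmnic0": STANDBY, "vmnic1": ACTIVE},
    "EVO:RAIL Management": {"vmnic0": ACTIVE,  "vmnic1": STANDBY},
    "Virtual Machines":    {"vmnic0": ACTIVE,  "vmnic1": STANDBY},
}

# Only Virtual SAN actively uses vmnic1; everything else rides on vmnic0.
vmnic1_active = [pg for pg, nics in uplinks.items() if nics["vmnic1"] == ACTIVE]
print(vmnic1_active)  # ['Virtual SAN']
```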

This configuration requires that you provide a 10 GbE top-of-rack (ToR) switch for connectivity, as well as the following:

  • IPv4 and IPv6 multicast must be enabled on all ports on the ToR switch. When using multiple ToR switches, ISL multicast traffic for IPv4 and IPv6 must be able to pass between the switches. (EVO uses IPv6 for auto-discovery.)
  • Configure a management VLAN on your ToR switch(es) and set it to allow multicast traffic to pass through.
  • To allow multicast traffic to pass through, you have two options, applied either to all EVO:RAIL ports on your ToR switch or only to the Virtual SAN and management VLANs (if you have VLANs configured):
    • Enable IGMP Snooping on your ToR switch(es) AND enable an IGMP Querier. By default, most switches enable IGMP Snooping but disable the IGMP Querier.
    • Disable IGMP Snooping on your ToR switch(es). This option may lead to additional multicast traffic on your network.

Here’s an example ToR configuration to set up the EVO:RAIL:

I would assume most folks would actually use two ToR switches for redundancy.

Thoughts

I’m quite impressed with the EVO interface for a number of reasons. It uses HTML5, is very simple and friendly to use, and offers a number of ways to stand up the solution through either pre-determined configs or user-driven answers. It will be very interesting to see how the pricing stacks up against other converged and hyper-converged solutions, and which use cases end up shining for customers looking to simplify their IT infrastructure.