Outside of being a quiet member of the Nova compute project and a self-described fledgling Python programmer, I would consider myself a lurker when it comes to OpenStack. I was glad when I saw the release of VOVA a while back, which is the VMware OpenStack Virtual Appliance, as it provided a simple method for getting a foot (or two) wet in the world of open source cloud management. There’s also been a slew of superb blogs written by Scott Lowe that I gravitate towards (highly recommended reading here, folks).
While some seem to think that OpenStack is a “vSphere killer” and often confuse it with a hypervisor, others have been a bit more open-minded and can see ways that the two can work together. The mammoth-sized install base of vSphere, coupled with the number of skilled professionals employed by the enterprise to work with VMware solutions, makes tighter integration between OpenStack and the vSphere food groups (management, network, storage, and compute) a win-win.
To this end, VMware is announcing VMware Integrated OpenStack (VIO), which VMware’s Arvind Soni, Senior Product Manager, and John Shao, Senior R&D Manager, describe as pairing an industry-wide API and tool ecosystem that cloud application developers love [OpenStack] with the industry-leading data center virtualization technologies that enterprise IT already knows how to operate [vSphere]. I’ll walk through the integration evolution and VIO details in this post.
Current Integration Talking Points
vSphere offers all of the technology silos necessary to build out a virtual infrastructure through the core platform and related products. This boils down to using the ESXi hypervisor for compute, VMFS for block storage, NFS for file storage, Virtual SAN for server-side storage, VMware templates or vCAC blueprints for images (I might also lump vApps in here as part of vCloud Director), and the use of NSX-mh (multi-hypervisor) along with an altered version of the Distributed vSwitch (often called the NVS, NSX vSwitch, or NSX Virtual Switch) for networking. These silos roll up into the management platform, vCenter.
Below you’ll see the integration points into the OpenStack counterparts: Nova (compute management), Neutron (networking), Cinder (block level storage), and Glance (images / templates). You’re on your own for object storage (Swift), although some next-gen storage companies (such as Coho Data) are simply hiding their object storage system beneath the covers of file or block storage access.
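To keep the moving parts straight, the service-to-silo mapping above can be captured as a simple lookup table. This is a minimal illustrative sketch of the relationships described in this post, not an actual VIO or OpenStack API:

```python
# Sketch of the OpenStack-to-vSphere integration points described above.
# Purely illustrative data; not a real VIO or OpenStack interface.
INTEGRATION_POINTS = {
    "Nova":    {"role": "compute management", "vsphere": "ESXi clusters via vCenter (vCenter Driver)"},
    "Neutron": {"role": "networking",         "vsphere": "NSX-mh / NSX Virtual Switch"},
    "Cinder":  {"role": "block storage",      "vsphere": "VMDKs on datastores (VMDK Driver)"},
    "Glance":  {"role": "images / templates", "vsphere": "VMware templates / vApps"},
}

def vsphere_backend(service: str) -> str:
    """Return the vSphere-side counterpart for an OpenStack service."""
    try:
        return INTEGRATION_POINTS[service]["vsphere"]
    except KeyError:
        # Swift (object storage) has no native vSphere counterpart,
        # per the "you're on your own" note above.
        raise ValueError(f"No vSphere integration point for {service}")
```

Note the deliberate gap: asking for `Swift` raises an error, mirroring the lack of a native object storage counterpart in the vSphere stack.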
Still, having to learn all of these components is a bit daunting for many infrastructure folks who wish to create an on-premises private cloud for their AppDev teams to easily consume and prosper upon. I’m not saying it’s impossible, but it’s certainly a heckuva lot more challenging than the next-next-finish, wizard-like polish that vSphere offers. After all, there’s a good number of consulting and services firms that make a pretty penny from handling the install, configuration, and initial workflow creation steps for IT shops. And upgrades have been, well, non-existent for quite a few builds.
This is a good point to take a breather and mention Eric Wright’s excellent Pluralsight course on OpenStack entitled Introduction to OpenStack. You are a Pluralsight subscriber, right? 🙂
Keep in mind, also, that Nova (the compute manager) thinks of a cluster within vCenter as a “blob” of compute resources. Nova is not, as many incorrectly surmise, a hypervisor. It is ultimately up to vCenter’s various mechanisms, such as the Distributed Resource Scheduler (DRS) and its varying degrees of configuration and rules, to determine on which ESXi host a workload will run. This is shown a little more clearly below, with a special call-out to the vCenter Driver:
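Put another way, Nova’s scheduling decision stops at the cluster boundary, and host-level placement is DRS’s job. Here’s a hypothetical sketch of that division of labor; the function and class names are my own invention, not the actual Nova or vCenter Driver code:

```python
# Hypothetical sketch: Nova picks a cluster (a "blob" of resources),
# then hands off to vCenter, where DRS picks the actual ESXi host.
# Names are illustrative only.
class Cluster:
    def __init__(self, name, free_ghz, free_gb):
        self.name, self.free_ghz, self.free_gb = name, free_ghz, free_gb

def nova_schedule(clusters, cpu_ghz, ram_gb):
    """Nova-level decision: choose a cluster with enough aggregate capacity."""
    for c in clusters:
        if c.free_ghz >= cpu_ghz and c.free_gb >= ram_gb:
            return c  # Nova's job ends here -- no individual host is chosen
    raise RuntimeError("No cluster has sufficient capacity")

def drs_place(cluster, hosts_in_cluster):
    """vCenter/DRS-level decision: pick the least-loaded host in the cluster."""
    return min(hosts_in_cluster, key=lambda h: h["load"])
```

For example, `nova_schedule` might hand an 8 GHz / 64 GB request to a tenant cluster, after which `drs_place` quietly lands it on the least-loaded ESXi host (and, in real life, keeps rebalancing it afterwards).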
Cinder, the block storage component within OpenStack, also uses a special driver – called the VMDK Driver – to complete its various provisioning and removal tasks. As shown below, the request to create a new volume is passed along to the VMDK Driver and given to vCenter Server. This two-stage process involves creating a shadow VM, which is a type of placeholder, for the initial storage creation. Once the VMDK is ready, it is swapped over to the virtual machine that is intended to use the storage.
This bit of provisioning footsie allows vSphere to maintain control over much of the process, and enables the rich set of features – such as Storage DRS, Storage vMotion, and VAAI (vSphere APIs for Array Integration) – to keep functioning. Requiring users to disable those vSphere features is often a non-starter for pretty much anyone who paid for them.
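The two-stage shadow-VM dance can be sketched as a simple ordered sequence of steps. The step names here are mine, chosen to mirror the description above; the real VMDK driver obviously does far more under the hood:

```python
# Rough sketch of the two-stage VMDK-driver provisioning flow described
# above. Step names are illustrative, not the driver's actual methods.
def provision_volume(size_gb, target_vm, log):
    # Stage 1: vCenter creates a shadow (placeholder) VM that owns the
    # new VMDK while it is being carved out on a datastore.
    shadow_vm = f"shadow-for-{target_vm}"
    log.append(f"create shadow VM {shadow_vm}")
    log.append(f"create {size_gb} GB VMDK attached to {shadow_vm}")
    # Stage 2: once the VMDK is ready, it is swapped over to the VM that
    # will actually consume the storage, so it remains an ordinary vSphere
    # disk that Storage DRS, Storage vMotion, and VAAI can operate on.
    log.append(f"reattach VMDK from {shadow_vm} to {target_vm}")
    log.append(f"destroy shadow VM {shadow_vm}")
    return log
```

The key takeaway is the ordering: the consuming VM never sees a half-built disk, because the placeholder owns the VMDK until it is fully provisioned.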
VMware Integrated OpenStack (VIO)
In a few words, VIO is the combination of the vCloud Suite with OpenStack. In fact, I found it to be very similar in concept to the VOVA (again, that’s the VMware OpenStack Virtual Appliance) that I mentioned at the beginning of this post.
VIO delivers standard, open source OpenStack code along with tools to install and operate OpenStack using an OVA, which is a packaged virtual appliance. The OVA is referred to as the OpenStack Management Service VM and provides the management, installation, and configuration heavy lifting on your behalf.
Let’s start with a high level overview, where OpenStack components (and open source drivers) are highlighted in orange, while vCloud Suite components are blue:
First off, I like that there’s no lock-in for Virtual SAN (VSAN) in this diagram. A VMDK is a VMDK, and you can choose how you wish to present and consume storage with VIO. One piece of the puzzle, NSX, might be a bit tricky to play with today because getting the code for NSX has historically required a white-glove-esque engagement with VMware’s PSO team. I hope that changes in the near future, but in the meantime, I fully plan to get this operational in my home and work lab environments using the NSX 6.1 code and share what I can.
VIO appears as an object in the vSphere Web Client’s Home menu, much like vCenter, vCHS (now vCloud Air), vCenter Orchestrator, and other products do.
Once you’ve deployed the VIO management appliance into your environment, you’ll use it to stand up your OpenStack environment. The initial decision point is to identify a management cluster for dropping all of the OpenStack management components. Nothing earth-shattering here, as I always try to include some sort of management cluster for out-of-band management of cloud resources whenever possible.
VIO will build out a virtual machine with the OpenStack services in your management cluster, along with a load balancer, message queue (MQ), and a database. I’ve been told that these components are able to scale out as required by an enterprise class deployment (which often means quite wide), and have the added bonus of providing a highly available CMP (Cloud Management Platform). VIO then uses the non-management clusters, often referred to as tenant clusters or resource clusters, as pools of resources that are made programmatically available to the AppDev teams (or whoever needs them, really). And again – the same features that vSphere admins enjoy having enabled and operational today – DRS, HA, SDRS, vMotion, etc. – are all supported.
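Based on that description, you could model the control plane roughly like this. The component names follow the post; the instance counts are purely my assumptions about what “scale out” might look like, since VIO doesn’t document exact numbers here:

```python
# Sketch of the VIO management-cluster layout described above.
# Instance counts are hypothetical; the post only says these components
# "scale out" and provide a highly available CMP.
control_plane = {
    "load_balancer":      {"instances": 2},  # HA pair (assumption)
    "openstack_services": {"instances": 2},  # controller service VMs (assumption)
    "message_queue":      {"instances": 3},  # clustered MQ (assumption)
    "database":           {"instances": 2},  # active/passive pair (assumption)
}

def scale_out(plane, component, count):
    """Grow one control-plane component, as a wide enterprise deployment might."""
    plane[component]["instances"] += count
    return plane[component]["instances"]
```

The point of the model is simply that each tier scales independently of the tenant clusters, which remain pure resource pools.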
The Greater Ecosystem
In addition to having VIO as a simplified method for deploying OpenStack and taking advantage of various vSphere integration points, there are also a number of other ecosystem synergies available.
- vCenter Operations Manager, or vCOps, has a management pack that works with OpenStack. The management pack will monitor and alert on the health of OpenStack services, such as the Glance image service stopping because storage has reached capacity. Alternatively, you could use RBAC to drop in some tenant dashboards as a value add, such as health and workload metrics around lines of business that are consuming cloud resources.
- LogInsight OpenStack Content Pack. One of my favorite VMware tools, LogInsight, also comes with dashboards and centralized log analysis features for OpenStack. This includes the various log files that are generated and updated by Nova and Cinder, error rates, API response times, and so on.
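To give a feel for the kind of data LogInsight is chewing on, here’s a toy parser for a nova-api request log line. The sample line and regex reflect the rough shape of OpenStack WSGI request logs as I understand them; treat the exact format as an assumption, since it varies by release and configuration:

```python
import re

# Hypothetical sample of a nova-api request log line (format is an
# assumption based on typical OpenStack WSGI server output).
LINE = ('2014-08-25 10:15:02.123 1234 INFO nova.osapi_compute.wsgi.server '
        '[req-abc] 10.0.0.5 "GET /v2/servers HTTP/1.1" status: 200 '
        'len: 1893 time: 0.1230000')

PATTERN = re.compile(r'"(?P<method>\w+) (?P<path>\S+) HTTP/[\d.]+" '
                     r'status: (?P<status>\d+) len: \d+ time: (?P<time>[\d.]+)')

def parse_api_line(line):
    """Extract method, path, status, and response time from a request log line."""
    m = PATTERN.search(line)
    if not m:
        return None
    return {"method": m.group("method"), "path": m.group("path"),
            "status": int(m.group("status")), "time_s": float(m.group("time"))}
```

Aggregating fields like `status` and `time_s` across millions of lines is exactly the sort of error-rate and API-response-time dashboarding the content pack handles for you.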
I applaud all of the work that VMware is putting into truly integrating their products (and others) into these two excellent tools. This only increases the synergy between tools and boosts the value that can be gleaned from the so-called software-defined data center (SDDC) vision.
Support Statements and Beta Access for VIO
One final note. VMware did comment that they will provide support for both OpenStack and the underlying infrastructure and management tools used throughout the lifecycle. Specifically:
- OpenStack code delivered by VIO
- Installing, building, and configuring OpenStack with VIO
- Operational issues related to monitoring, managing, and diagnosing
- Maintaining the stack, along with upgrades and patches
VMware also plans to test and validate each release of OpenStack code to ensure that it will function as desired on their VIO platform.
You can snag a beta copy of VIO beginning today by way of a private beta program. At this time, I only know that the program will be limited in size and that official announcements on how to join will be released later today at VMworld.