Exploring VMware VSAN Ready Nodes, Per-Socket Pricing, and Design Guides

The buzz over VMware’s Virtual SAN (VSAN) is hot. According to VMware, over 12,000 beta participants have put the VSAN solution through its paces in pre-production or lab environments to gain further understanding, test performance, or see how the solution may (or may not) fit into their data center strategy. And, with the recent announcement that VSAN will scale out to the configuration maximum for a vSphere cluster – 32 nodes – there’s no need to build oddly sized clusters to meet the constraints that were formerly published. For those playing the VSAN config-max game at home, the limit used to be 8 nodes, with 16 nodes expected in the future.

Storage policy-based management (SPBM) with VSAN

With VSAN now reaching General Availability (GA), there were many directions I could take with this post. Ultimately, I chose to review the raw facts and provide details around building and pricing out compute nodes that use VSAN, rather than pontificate on where VSAN sits in the grand scheme of storage performance against other vendors. In my opinion, that is difficult to judge on a product that has only just been released, and it is something I will dig into further for production use cases I encounter (and design around) down the road.

Understanding Virtual Data Services with VSAN

In the grand scheme of Virtual Data Services, VSAN sits in the hypervisor-converged storage pool portion – adjacent to other options such as SAN (Storage Area Network) or NAS (Network Attached Storage) arrays and object storage pools (think Amazon S3 or OpenStack’s Swift). This “new” tier in the stack reflects the fact that the hypervisor is responsible for the placement and presentation of storage, rather than simply presenting someone else’s storage as with the other options. There is no need for a virtual machine to provide access or control data flows, as with the older VSA model.

Virtual SAN fits into the hypervisor-converged storage pool section

One of the touted advantages of VSAN is the ability to use new or existing x86 servers with enterprise-class or commodity off-the-shelf (COTS) hardware. Let’s go over this from a node-building perspective.

Buying or Building Virtual SAN Nodes

There are two main ways to create a VSAN node – you can either buy a pre-configured server built from components architected for VSAN, or build your own node from components listed on the VMware VSAN HCL.

VSAN Ready Nodes

Are you excited to see Supermicro on the slide below? 🙂

Option 1: Buying a VSAN Ready Node

This option will appeal to those looking for a repeatable, modular format provided by a vendor that most likely wraps the solution with support and delivery. I would guess that time to delivery and reliance upon a vetted architecture will factor into the decision to buy a VSAN Ready Node. VMware has published a PDF with recommended configurations that include a system type, CPU, memory, flash device, HDDs, SCSI controller, NIC, and virtual machine quantity. Here’s an example:

An example VSAN config with a Dell R820

The following servers are outlined in the doc:

  • Dell PowerEdge R820
  • Cisco UCS C240
  • Supermicro SuperServer 1018D-73MTF
  • Supermicro SuperServer F627R3-R72B+
  • IBM Lenovo x3650 M4 HD CTO Option D, feature A4M5

Download the PDF: Ready Node and Ready Block Recommended Configurations – http://partnerweb.vmware.com/programs/vsan/VMW-VIRTUAL-SAN-RDY-NODE-RDY-BLK-v1.0.pdf

Build Your Own

The other option is a bit like a build-your-own Grand Slam breakfast from Denny’s (are you hungry now?), as shown below:

Option 2: Building Your Own VSAN Node

We all know the premiums placed on components when you buy from a vendor – especially when packaged together. It may make more sense to build your own nodes to meet a very specific set of requirements or circumvent some constraints. Additionally, there will always be that use case where it makes sense to supplement an existing cluster of compute nodes with the missing components (most likely SSDs and HDDs) to transform it into a VSAN cluster. Perhaps this model will better suit the more scale-out focused folks who want to roll their own data center?

VSAN Sizing and Design

No matter which route you pick – VSAN Ready Node or BYO – it would be wise to review the VMware Virtual SAN Design and Sizing Guide written by Rawlinson Rivera. It contains the handy details needed to correctly size the components for your workloads and ensure they are best served by the VSAN solution. The guide includes details on:

  • Disk Groups
  • Virtual SAN Datastore
  • Objects and Components
  • Number of Failures to Tolerate
  • Design Considerations
  • Size-Calculating Formulas

Some snazzy sizing graphics
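
To make the sizing math a bit more concrete, here is a minimal Python sketch – my own illustration, not something lifted from the guide – of the two numbers you end up juggling: the raw capacity your disk groups pool into the VSAN datastore, and the capacity your VMs actually consume once the Number of Failures to Tolerate (FTT) policy mirrors each object. All host, disk, and VM figures below are hypothetical placeholders.

```python
# Hypothetical VSAN sizing sketch - numbers and function names are my own,
# not taken from the Design and Sizing Guide.

def raw_capacity_gb(hosts, disk_groups_per_host, hdds_per_group, hdd_size_gb):
    """Raw magnetic-disk capacity the disk groups pool into the VSAN datastore."""
    return hosts * disk_groups_per_host * hdds_per_group * hdd_size_gb

def consumed_capacity_gb(vm_count, vmdk_size_gb, ftt):
    """Datastore capacity consumed when each object is mirrored (FTT + 1 copies)."""
    return vm_count * vmdk_size_gb * (ftt + 1)

if __name__ == "__main__":
    raw = raw_capacity_gb(hosts=4, disk_groups_per_host=1, hdds_per_group=6, hdd_size_gb=1000)
    used = consumed_capacity_gb(vm_count=100, vmdk_size_gb=40, ftt=1)
    print(f"Raw VSAN datastore capacity: {raw:,} GB")   # 24,000 GB in this example
    print(f"Consumed at FTT=1:          {used:,} GB")   # 8,000 GB in this example
```

Swap in your own numbers; the takeaway is simply that each increment in FTT adds another full copy of the data, which is exactly what the guide’s formulas are there to help you plan for.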

What Hosts Can Use the Virtual SAN?

There have been several questions about which hosts can consume VSAN storage. It pretty much boils down to this:

  • VSAN is licensed for an entire vSphere cluster
  • Any device (SSD or HDD) used by VSAN cannot be used for other purposes (boot drive, Host Cache, vFRC, other partitions, etc.)
  • Not every host within the VSAN cluster needs to contribute storage to the VSAN datastore (but VMware recommends it)
  • Only hosts within the VSAN cluster can consume storage from the VSAN datastore
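
As a quick illustration of these rules, here is a small Python sketch – hypothetical names and data structures of my own, not a VMware API – that sanity-checks a planned host list: only cluster members may consume the datastore, and a device claimed by VSAN must not be double-booked for boot, Host Cache, or vFRC duty.

```python
# Hypothetical sanity-check sketch - not a VMware API, just the rules above in code.

def can_consume_datastore(host):
    """Only hosts inside the VSAN-enabled cluster may consume the VSAN datastore."""
    return host["in_vsan_cluster"]

def device_conflicts(vsan_claimed_devices, other_device_uses):
    """Devices claimed by VSAN cannot also serve as boot, Host Cache, vFRC, etc."""
    return {dev: use for dev, use in other_device_uses.items() if dev in vsan_claimed_devices}

hosts = [
    {"name": "esx01", "in_vsan_cluster": True},   # contributes storage
    {"name": "esx02", "in_vsan_cluster": True},   # compute-only member, still consumes
    {"name": "esx03", "in_vsan_cluster": False},  # outside the cluster, cannot consume
]

for h in hosts:
    print(h["name"], "can consume VSAN datastore:", can_consume_datastore(h))

conflicts = device_conflicts({"naa.100", "naa.101"}, {"naa.101": "Host Cache", "naa.200": "boot"})
print("Device conflicts:", conflicts)   # {'naa.101': 'Host Cache'}
```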

I would imagine that deploying some sort of virtual machine that presents storage (OpenFiler, FreeNAS, etc.) on top of the VSAN cluster could circumvent the limitation around storage presentation, but would offer sub-par performance and be an unsupported configuration.

Let’s continue along with pricing and thoughts on page 2 …