The world of “software-defined storage” officially gains another player today as Maxta formally launches the company alongside its flagship Maxta Storage Platform (MxSP), which is already Generally Available (GA). Maxta was at VMworld 2013 in San Francisco, although somehow I managed to miss them amid the massive quantity of vendors on the floor. Fortunately, I spent time catching up with Yoram Novick, CEO and Founder, along with Amar Rao, VP of Business Development and OEM Sales, to go deeper on their solution and pick their brains.
The idea is simple: leverage local spinning disk and flash at the compute layer to form a distributed storage target. Sound familiar? The idea is similar to VMware Virtual SAN, the VMware-specific product that builds a storage pool from many local storage devices at the compute layer. However, Maxta focuses on a number of improvements that may turn a head or two.
Or as Steve Thomsen puts it:
Very cool company about to step out of stealth mode called Maxta (sw defined storage). Think VSA meets Tintri/Tegile/Nimble.
— Steve (@sddc_steve) July 11, 2013
Maxta Storage Platform (MxSP)
MxSP is deployed via a virtual appliance to all of the vSphere hosts and is managed directly from the vSphere Client UI rather than an external browser. Virtual machines are provided access to local storage by way of the Maxta Global Namespace. Virtual machine workloads that write to or read from the storage pool are actually chatting with the local MxSP virtual appliance in a sort of client-server architecture, where the MxSP is the server and the virtual workload is the client.
When a virtual workload sends a read request, the local MxSP appliance looks to see if it has the data on the local storage. If it does not have the required bits for a read, it requests the data from an adjacent MxSP on behalf of the virtual machine. In this case, the local MxSP becomes a client and the remote MxSP is the server.
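To make that client/server relationship a bit more concrete, here is a minimal Python sketch of how such a read path could behave. This is purely my own illustration of the concept as described to me; the class and method names are hypothetical and are not Maxta's code.

```python
# Hypothetical sketch of the MxSP read path described above.
# Names and structure are illustrative only, not Maxta's implementation.

class MxspAppliance:
    def __init__(self, local_store, peers):
        self.local_store = local_store   # dict: block_id -> bytes held on this host
        self.peers = peers               # other MxSP appliances in the cluster

    def read(self, block_id):
        """Serve a read request from a local VM (appliance acts as the server)."""
        data = self.local_store.get(block_id)
        if data is not None:
            return data                  # local hit: no network hop needed
        # Local miss: this appliance becomes a client and asks an adjacent
        # MxSP for the data on behalf of the virtual machine.
        for peer in self.peers:
            data = peer.local_store.get(block_id)
            if data is not None:
                return data
        raise KeyError(f"block {block_id} not found in the cluster")

node_b = MxspAppliance({"blk-42": b"guest data"}, peers=[])
node_a = MxspAppliance({}, peers=[node_b])
print(node_a.read("blk-42"))   # misses locally on node_a, served by node_b
```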
You can see the MxSP nodes in the architecture below highlighted in green. This is a bit different from architectures that rely on a “master” appliance, pinned to a specific compute node, to handle all read and write requests, though an appliance-based approach is still not as seamless as chatting directly with the hypervisor.
Each compute node can house a variety of spinning disk or flash devices (SSDs or PCIe flash) to contribute to the Maxta Aggregated Storage Pool. There’s even an option to ditch spinning disk and use all flash devices. The MxSP appliance uses NFS to chat back and forth with the hypervisor, but since that communication is bound to the host there is no need to architect the physical network to accommodate NFS traffic. It also means that nearly all of the annoyances that come with NFS, such as the inability to use multipathing (session trunking) with vSphere, are inconsequential. The MxSP appliance is talking over the internal virtual switch.
Maxta treats each VM as an object rather than worrying about LUNs or other such constructs. Each VM is protected with at least two copies of the data existing somewhere in the cluster, with a focus on keeping one of those copies local for performance reasons. The Maxta solution also offers the ability to perform an “unlimited” number of space-efficient VM snapshots and clones.
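As a thought experiment, the placement policy might look something like the sketch below: keep one copy on the node running the VM and put the remaining copies on the least-utilized peers. Again, this is my own simplified illustration of the idea, not Maxta's actual algorithm.

```python
# Illustrative only: a naive two-copy placement policy that keeps one
# replica on the VM's own host and the rest on the least-utilized peers.

def place_replicas(vm_host, cluster_nodes, copies=2):
    """cluster_nodes: dicts like {"name": "esx01", "used_gb": 400, "capacity_gb": 1000}."""
    placement = [vm_host]                                  # first copy stays local
    remotes = [n for n in cluster_nodes if n["name"] != vm_host]
    remotes.sort(key=lambda n: n["used_gb"] / n["capacity_gb"])
    placement += [n["name"] for n in remotes[: copies - 1]]
    return placement

nodes = [
    {"name": "esx01", "used_gb": 400, "capacity_gb": 1000},
    {"name": "esx02", "used_gb": 700, "capacity_gb": 1000},
    {"name": "esx03", "used_gb": 200, "capacity_gb": 1000},
]
print(place_replicas("esx01", nodes))   # ['esx01', 'esx03']
```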
Scale Out Options
Maxta offers an impressive amount of flexibility for scaling out. They currently state that the Maxta Aggregated Storage Pool can grow to any cluster size supported by the hypervisor; for vSphere, that means 32 nodes. There are also options for exactly how you want to grow:
- Scale with Compute Only servers – these servers do not contribute disk to the Maxta storage pool, but can still consume storage. This is useful for situations where the existing pool provides enough performance to justify adding only more compute.
- Scale with Compute + Storage servers – this is a server that also has storage to contribute to the Maxta storage pool.
As nodes are added to the Maxta cluster, many different decisions are made on the environment’s behalf to determine when to place copies of data on the new node. As you might imagine, a workload placed on the new node would most assuredly have data placed there for local access. But the Maxta cluster might also decide to move an existing copy of data to the new node if the existing nodes were reaching a high-water mark on storage utilization. Either way, the virtualization administrator is not concerned with these choices – they occur in the background.
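If I had to guess at the shape of that background logic, it might resemble the sketch below: when a node crosses a utilization high-water mark, shift some of its data toward the newly added (emptier) node. The threshold and chunk size are assumptions of mine, purely for illustration.

```python
# Hypothetical rebalancing pass, only to illustrate a high-water mark
# driving data movement toward a newly added node. Not Maxta's code.

HIGH_WATER_MARK = 0.80   # assumed threshold: 80% of a node's capacity

def rebalance(nodes, new_node):
    """Move a chunk of data from overloaded nodes onto the new node."""
    moves = []
    for node in nodes:
        if node is new_node:
            continue
        utilization = node["used_gb"] / node["capacity_gb"]
        if utilization >= HIGH_WATER_MARK:
            # Simplified to a fixed-size chunk; a real system would move
            # whole copies of VM objects instead.
            chunk = min(100, node["used_gb"])
            node["used_gb"] -= chunk
            new_node["used_gb"] += chunk
            moves.append((node["name"], new_node["name"], chunk))
    return moves

cluster = [
    {"name": "esx01", "used_gb": 850, "capacity_gb": 1000},
    {"name": "esx02", "used_gb": 400, "capacity_gb": 1000},
]
new = {"name": "esx04", "used_gb": 0, "capacity_gb": 1000}
cluster.append(new)
print(rebalance(cluster, new))   # [('esx01', 'esx04', 100)]
```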
Maxta licenses the product based on storage capacity – meaning the spinning disk capacity – and is not concerned with the number of servers, sockets, VMs, or amount of flash (unless you are only using flash drives for your cluster). This lets you right-size your compute layer to meet your needs.
Thoughts
While we only had a short period to chat, I really enjoyed the conversation with the Maxta team and could tell that Mr. Novick is really passionate about the topic of software-defined storage. The road map includes the ability to use KVM as your hypervisor (in Limited Availability today), with consideration of other hypervisors in the near future as engineering time and customer demand warrant.
Don’t just take my word for it – Marcel van den Berg over at UP2V has a very in-depth write-up on Maxta that is worth checking out.