There are plenty of legacy, monolithic storage arrays still buzzing along, serving IO in a variety of data centers. These arrays were (hopefully) sized with a certain workload and growth in mind, and often continue to enjoy life due to tight budgets or the heavy lifting required to forklift them out for either a new flavor of monolithic array or something with some flash pizzazz. I fondly recall hitting these sorts of caps, where a dual controller system would simply run out of gas and require a head swap, which is a fun way to spend a maintenance weekend.
Apparently I’ve been living under a rock all this time. Avere, one of the presenters at Storage Field Day 4, claims they are adept at hiding the dirty laundry behind the IO stream. And after spending a few hours with them, I’m honestly intrigued. Ron Bianchini, CEO, showcased a really slick “edge-core” NAS model. The core ends up being your existing arrays (some folks have many of them) or cloud storage, while the edge layer (which is the layer Avere provides a solution for) serves double duty: front-end storage virtualization to serve IO coupled with a serious amount of acceleration horsepower to make the data stream scream.
The crazy thing is that the performance acceleration is just one piece of the puzzle – imagine what can be done when remote IO can be served with local speeds. Then, add some software magic icing on top in the form of hit-less migrations (FlashMove) and seamless mirroring (FlashMirror).
NAS Performance Acceleration
To be fair, front-end storage virtualization is certainly nothing new. Hitachi’s VSP (Virtual Storage Platform) and NetApp’s V-Series sit top of mind, with EMC’s VPLEX and IBM’s SVC (SAN Volume Controller) also worth noting. But these systems are typically there to help with scale-out or data migrations, or to offer a journaling mechanism for metro-clustering with dual-active read/write volumes. They don’t focus on boosting performance.
Avere’s FXT series edge filer can front-end many different types of storage, including cloudy offerings: NetApp (NFS), EMC Isilon (NFS), EMC Atmos (NFS), Amazon (S3), Cleversafe (S3), and others as of the Avere AOS 4.0 release. The edge filer sits in the middle of the IO stream – it handles client requests from the servers directly. Each FXT edge filer also has a configurable quantity of compute and storage guts, such as NVRAM, RAM, and SSDs, that are used to accelerate IO moving back and forth between client and storage array. It’s more than a cache, however: a custom Tiered File System (TFS) promotes, demotes, and evicts data across the various tiers in the box.
The edge filer devices work in a scale-out clustered configuration and present a single namespace. Even if you are connected to a node that does not have the data you’re trying to read, another node in the cluster most likely does, and will serve it via a cache-to-cache transaction. By making the design modular, an enterprise can retain their back-end core NAS filer and scale it for capacity, leaving the edge filers from Avere to tackle the performance requirements. I’ve embedded a video at the top of this post in which Mr. Bianchini goes quite deep into exactly how the clustered edge filers handle reads, writes, striping, and replication.
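The video covers Avere’s actual clustering mechanics; as a loose illustration only, here’s a Python sketch of the single-namespace idea, assuming (hypothetically) that each key hashes to an owning node, and that a node receiving a request for data it doesn’t own reads it from the owner’s cache rather than going back to the core. Class names and the placement scheme are my own invention, not Avere’s design.

```python
import hashlib

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}           # this node's local cache tier

class EdgeCluster:
    """Toy model of a single-namespace edge cluster: a request can land
    on any node, and if a different node holds the cached copy, the data
    comes via a cache-to-cache read instead of a trip back to the core."""

    def __init__(self, nodes, core):
        self.nodes = nodes
        self.core = core          # dict standing in for the core filer

    def _owner(self, key):
        # Hypothetical placement: hash the key to pick the owning node.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def read(self, serving_node, key):
        owner = self._owner(key)
        if key in owner.cache:
            source = "local" if owner is serving_node else "cache-to-cache"
        else:
            owner.cache[key] = self.core[key]   # first read warms the owner
            source = "core"
        return owner.cache[key], source
```

The takeaway matches the post: only the first read of a given file pays the round trip to the core; after that, any node in the cluster can satisfy the request at edge speeds.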