Slick Software Tools
While there are certainly a ton of other ways I could geek out over the underlying architecture of the Avere edge clusters, I’m going to muzzle myself on that a bit and focus on what I thought was a very interesting differentiator. To be fair, I sort of expect any next-gen storage system to be light-years faster and significantly more efficient than the more mature market offerings, else it has little chance of survival. 🙂
Avere has two really spiffy software offerings: FlashMove and FlashMirror.
FlashMove
Assuming that you’ve adopted the edge-core model and have deployed an FXT cluster somewhere, you may hit a point where you want to shuffle data between core arrays. This could be a move to a net-new array that you are deploying into your data center as part of a refresh, or simply a migration to balance storage growth and/or archive data. FlashMove allows you to non-disruptively move a volume between core arrays. Avere calls this a “silvering” process: any directory that has been copied over to the destination is silvered, and any directory that has not yet been copied is unsilvered.
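To make the silvering idea a bit more concrete, here’s a minimal sketch of what a background silvering walk could look like, assuming a plain file-system view of the source and destination volumes. The names and structure are mine for illustration, not Avere’s implementation.

```python
# A minimal sketch of the silvering walk, assuming a plain file-system view of
# the source and destination volumes; the names here are mine, not Avere's.
import os
import shutil


def silver_volume(source_root: str, dest_root: str) -> set:
    """Copy each directory from source to destination, marking it silvered."""
    silvered_dirs = set()
    for dirpath, _dirnames, filenames in os.walk(source_root):
        rel = os.path.relpath(dirpath, source_root)
        dest_dir = os.path.join(dest_root, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in filenames:
            shutil.copy2(os.path.join(dirpath, name),
                         os.path.join(dest_dir, name))
        # Once its contents exist on the destination, the directory is
        # considered silvered; new writes to it now go to both arrays.
        silvered_dirs.add(rel)
    return silvered_dirs
```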

When a write is received, the system checks whether the target directory has been silvered. If it is unsilvered – meaning the data has not yet been copied to the destination – the write is sent only to the source array. If the directory has been silvered, the data is written to both the volume on the source array and the volume on the destination array. This avoids a situation where the source array is missing files that exist only on the destination array – something I have run into in the past. No one likes untangling file system spaghetti!
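Continuing the sketch above, the write-routing decision might look something like the following – again a conceptual illustration of the silvered/unsilvered check, not Avere’s actual code.

```python
# Continuing the sketch: route an incoming write based on whether the target
# directory has been silvered. An illustration, not Avere's implementation.
import os


def route_write(rel_dir: str, filename: str, data: bytes,
                source_root: str, dest_root: str,
                silvered_dirs: set) -> None:
    """Write to the source array, and also to the destination if silvered."""
    targets = [source_root]
    if rel_dir in silvered_dirs:
        # Already copied over: keep both arrays in sync with a dual write.
        targets.append(dest_root)
    for root in targets:
        path = os.path.join(root, rel_dir, filename)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as fh:
            fh.write(data)
```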
FlashMirror
The other software solution is built on the same principles as FlashMove, but is used to protect data on another core array (such as a cloud provider or a remote location). Once a volume is targeted for mirroring, the FXT edge filer creates dual write queues, one for the primary core storage and one for the secondary.

If the secondary core storage array sits across a WAN link and that link fails, the edge filer will continue to queue writes until the queue’s capacity is exhausted. At that point, it waits for the WAN link to become available again, walks the directory structure on the secondary core storage array, and ensures that it once again holds a copy of the primary data.
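A rough way to picture the dual queues and the WAN-failure behavior is sketched below. The client interface (write/walk/matches), the queue size, and the resync strategy are my own assumptions for illustration, not Avere’s implementation.

```python
# A rough sketch of the dual-queue mirroring behavior described above. The
# client interface (write/walk/matches), the queue size, and the resync
# strategy are assumptions, not Avere's implementation.
import queue


class MirrorWriter:
    """Mirrors writes to a primary and a secondary core filer."""

    def __init__(self, primary, secondary, max_queued_writes=10_000):
        self.primary = primary            # local/primary core filer client
        self.secondary = secondary        # remote/secondary core filer client
        self.secondary_queue = queue.Queue(maxsize=max_queued_writes)
        self.needs_resync = False

    def write(self, path: str, data: bytes) -> None:
        """Write to the primary immediately; queue the copy for the secondary."""
        self.primary.write(path, data)
        try:
            self.secondary_queue.put_nowait((path, data))
        except queue.Full:
            # Queue capacity exhausted (e.g. the WAN link has been down too
            # long): give up on queueing and plan a full resync later.
            self.needs_resync = True

    def on_wan_restored(self) -> None:
        """Once the WAN link returns, drain the queue or walk and resync."""
        if self.needs_resync:
            # Walk the primary's namespace and re-copy anything the secondary
            # is missing or holds a stale version of.
            for path, data in self.primary.walk():
                if not self.secondary.matches(path, data):
                    self.secondary.write(path, data)
            self.needs_resync = False
        while not self.secondary_queue.empty():
            path, data = self.secondary_queue.get()
            self.secondary.write(path, data)
```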
Thoughts
Avere really shines when it comes to delivering value across a variety of use cases. One that really stood out for me was a client who migrated his co-located compute to Switch (SuperNAP), drawing on a number of on-demand compute clouds, and also placed a cluster of FXT edge filers at the site. When he needed a vast quantity of compute, he rented the required amount and used the FXT filers as if they were local storage to crunch the data. When the job completed, he could spin the compute back down. This seems like a no-brainer wherever storage must be accessed over a high-latency pipe (such as by remote users or from cloud storage) but still needs to perform like a beast.
As for using FXT in a virtualization-focused design? I’m a bit on the fence. On the one hand, I do like NFS as a protocol and think that FlashMirror would make VM replication for DR a snap – I wouldn’t call this backup, though. On the other hand, players like SimpliVity already have a fleshed-out federation in their DNA, and Zerto does a great job at per-VM replication regardless of which local arrays you end up purchasing. I suppose it could make sense if you were trying to squeeze additional life out of a legacy array, or wanted to offer data locality to a remote location (much like Riverbed’s Granite, but with real horsepower). Your thoughts?