It’s interesting to watch where flash technology sits in various vendor stacks, along with the messaging and goals around it. One of the final presentations at Storage Field Day 3 came from a well-known storage company: NetApp. This was a nice change of pace after a partial week spent heavily with start-up companies looking to change the world with radical new designs, and I found myself really looking forward to seeing exactly how NetApp wanted to improve their ONTAP design with flash.
There are many places you can put flash to accelerate performance: in the server for server-side caching, in the array as a write-back or write-through cache, or in an entirely flash-based array. NetApp seems to believe they can offer solutions in all of these places – but is that reality?
Note: All travel and incidentals were paid for by Gestalt IT to attend Storage Field Day 3. No other compensation was given.
Creating a Hybrid Flash Array
NetApp has enjoyed a lot of success with their Flash Cache technology. This is the practice of putting a PCIe card inside a traditional FAS storage array and using it to cache hot read data, reducing the amount of spindle activity needed to serve read IOs. This has the added bonus of giving those same spindles more time to serve write IOs. As a former NetApp customer, I definitely used Flash Cache in my arrays to offload 20%+ of my spindle activity to the cache.
What I didn’t realize until the Storage Field Day 3 presentation was that the technology uses a first-in-first-out (FIFO) eviction method. I’m still not entirely sure why this was chosen over something like least frequently used (LFU), as FIFO means that a frequently read “hot” block can be ejected to make room for a less important block. The hot ejected block then ends up coming back into cache on the next read and starts the whole cycle over again.
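To see why FIFO eviction can thrash on a hot block, here’s a minimal sketch of my own (not NetApp’s code) comparing a FIFO read cache against an LFU one on a workload that repeatedly reads one hot block amid a stream of one-off blocks:

```python
from collections import Counter, deque

class FIFOCache:
    """Evicts the oldest-inserted block, no matter how hot it is."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.order = deque()
        self.blocks = set()

    def read(self, block):
        hit = block in self.blocks
        if not hit:
            if len(self.blocks) >= self.capacity:
                self.blocks.remove(self.order.popleft())  # first in, first out
            self.blocks.add(block)
            self.order.append(block)
        return hit

class LFUCache:
    """Evicts the block with the fewest reads."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.reads = Counter()
        self.blocks = set()

    def read(self, block):
        hit = block in self.blocks
        if not hit:
            if len(self.blocks) >= self.capacity:
                victim = min(self.blocks, key=lambda b: self.reads[b])
                self.blocks.remove(victim)
            self.blocks.add(block)
        self.reads[block] += 1
        return hit

# "H" is a hot block read over and over; A..E are each read once.
workload = ["H", "A", "B", "H", "C", "H", "D", "H", "E", "H"]
fifo, lfu = FIFOCache(3), LFUCache(3)
fifo_hits = sum(fifo.read(b) for b in workload)  # H keeps getting evicted
lfu_hits = sum(lfu.read(b) for b in workload)    # H stays resident
print(fifo_hits, lfu_hits)  # 3 4
```

On this workload, FIFO repeatedly pushes the hot block out as the one-off reads arrive, while LFU keeps it pinned because its read count stays highest.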
One could argue whether this is truly a hybrid flash array. I’d tend to think NetApp would lean towards “no”, as it leads into their next topic – accelerating storage pools with SSD.
True Hybrid with Flash Pools
Flash Pool is the new slick term for NetApp’s ability to front a pool of spinning disk with SSDs. In a nutshell, the storage engineer creates a storage pool that contains a RAID group of SSDs, which then performs caching duties for all of the spinning-disk volumes within that pool. The goal of Flash Pool is to soak up random reads and random writes, which are brutal on spinning disk because the drive has a mechanical actuator arm that must seek out data on the platter. Sequential writes are sent straight to spinning disk, which handles streaming workloads well. NetApp supports a variety of SSD capacities (100, 200, and 800 GB) to help you size your Flash Pool to the type of workload you are caching.
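The routing rule described above (random I/O to SSD, sequential writes to disk) can be sketched with a toy dispatcher. This is purely my own illustration of the idea, not NetApp’s actual placement heuristic; the simple rule here calls a write “sequential” if it starts exactly where the previous write ended:

```python
class HybridPool:
    """Toy model of a Flash Pool-style hybrid aggregate.
    Random writes land on the SSD tier; sequential writes go
    straight to spinning disk, which streams them efficiently.
    (Illustrative only -- not NetApp's real detection logic.)"""

    def __init__(self):
        self.next_lba = None  # where a sequential stream would continue
        self.ssd_writes = 0
        self.hdd_writes = 0

    def write(self, lba, blocks=1):
        sequential = lba == self.next_lba
        self.next_lba = lba + blocks
        if sequential:
            self.hdd_writes += 1  # spindles stream sequential IO just fine
            return "hdd"
        self.ssd_writes += 1      # soak up the random IO on flash
        return "ssd"

pool = HybridPool()
targets = [pool.write(lba) for lba in (0, 1, 2, 900, 17)]
print(targets)  # ['ssd', 'hdd', 'hdd', 'ssd', 'ssd']
```

A contiguous run of writes is recognized after its first block and streamed to disk, while scattered writes are absorbed by the SSD tier.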
Because the cache data lives on actual SSDs, the information survives a controller (head) failure or restart. This is an important point because many folks who use NetApp storage know that NVRAM is mirrored between controllers, effectively limiting you to half of the available NVRAM in order to protect the data. Thankfully, this is not the case with the SSDs inside a Flash Pool.
I will call shenanigans on the other marketing term for Flash Pool, which is “Virtual Storage Tiering”. I get the reason behind it (gotta check the Tiering box somewhere), but honestly, the idea of traditional data tiering seems so old school. I’d definitely advise listening to storage Jedi Howard Marks chat about this in more detail.
Server-Side Caching with Flash Accel
Another bit of NetApp’s flash story revolves around server-side caching. I’m very happy to see so much energy around this type of technology, as I had already heard from both PernixData and SanDisk on this very same topic. In the case of NetApp, Flash Accel is being offered for free to existing NetApp customers. Just pop in an SSD or PCIe flash card and off you go – this sounds great, right?
Whoa, slow down there, my friend. Flash Accel is only supported with VMware vSphere 5.0 and VM guests running Windows Server 2008 R2. Now, that’s not too bad for a first release, as both the hypervisor and the guest OS are very, very popular – if you don’t believe me, go look at vOpenData, which shows 26.3% of all uploaded environments running Server 2008 R2. The really bad news, however, is that you have to install a host agent in every hypervisor, a guest agent in every VM, and then a Flash Accel management console.
As an industry, we’ve spent a lot of our time trying to remove agents from the guest (think antivirus for starters). I think this will be a hard limit, or at least a hard sell, for many administrators in the field. Time will tell.
All Flash with the EF540 and FlashRay
An interesting bit of tech called the EF540 was shown towards the end of the session, which is NetApp’s all-flash array offering. Rather than running ONTAP, this system runs the SANtricity OS. That makes sense to me – ONTAP really wasn’t designed to drive an array composed entirely of SSDs. The system promises around 300,000 IOPS at 1 ms of latency when benchmarked with a 4 KB, 100% random, 100% read test. No other benchmark results were given for a real-world scenario (100% reads is a very silly bench), which was disappointing.
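A quick back-of-the-envelope check on those quoted numbers (my arithmetic, using the stated 4 KB I/O size, 300,000 IOPS, and 1 ms latency) shows what the benchmark implies about throughput and queue depth:

```python
iops = 300_000        # quoted 4 KB random-read result
io_size_kib = 4       # quoted I/O size
latency_s = 0.001     # quoted 1 ms average latency

# Implied bandwidth: ~1.1 GiB/s of small-block reads
throughput_mib_s = iops * io_size_kib / 1024

# Little's law (L = lambda * W): sustaining 300k IOPS at 1 ms
# requires roughly 300 I/Os in flight across all hosts and ports
outstanding_ios = iops * latency_s

print(round(throughput_mib_s), round(outstanding_ios))  # 1172 300
```

In other words, the headline number only holds if the attached hosts can keep hundreds of I/Os outstanding – another reason a 100% read bench says little about real-world workloads.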
The enclosure is a 2U, 24-drive shelf system populated with 800 GB SSDs. It also allows the consumer to pick from a variety of connection types – FC (standard), SAS, iSCSI, or InfiniBand (IB). I found the canned video demo of SANtricity to reveal a rather complicated and clunky interface. You can make the judgment for yourself by watching the video below.
Unfortunately, the section devoted to FlashRay lasted all of about five minutes and was limited purely to marketing ideals of what an all-flash array should be.
Thoughts
The technologies behind Flash Cache and Flash Pools sound like an easy win for the enterprise data center consumer. Flash Cache has proven over the years to be a big win for a lot of environments (including mine), and Flash Pools is a nifty idea that helps extend the life of ONTAP into a platform that handles mixed storage device types. However, Flash Accel needs to shed the agent requirement and support a wider variety of guest OS types (including Linux) before it will gain relevance in the market.
The EF540 and FlashRay should see some relatively early success in the market due to having an enterprise storage company’s name on the box, but I wonder how they will stand up to the more modern all-flash architectures that have been baking over the past one to three years (Pure Storage and XtremIO come to mind).