Although I was not able to attend, I did tune into the snazzy sessions at Storage Field Day 5. A few presentations sparked a lot of debate and snark over FUD (fear, uncertainty, and doubt) based on a few rough spots, along with the manner in which one should present oneself on social media.
While that makes great click bait for a blog post, I'm going to set aside that drivel and focus on the technology. Specifically, the mind-expanding sessions that discussed using flash devices attached to a server's DIMM slots as a sort of "big RAM stick" approach.
Let's mull over two vendors that stuck out from SFD5: Diablo Technologies presented on their Memory Channel Storage (MCS) design, and SanDisk presented on their weirdly named ULLtraDIMM hardware offering (which uses MCS). I'll tip my hat to Justin Warren for his excellent background check posts written on Diablo and SanDisk. Quite frankly, his mix of technical and business prep posts is legendary and chock-full of value. Read them.
On With The Show
But I digress. Using flash in the memory tier of a hypervisor or bare metal operating system seems interesting on the outside. I can already see (in my head) how applications can be created as virtual machines on hosts with terabytes – or more – of flash-powered RAM. Assuming that latency and performance come even close to memory speeds, it should be a rather beefy way to chuck a bunch of additional high-performance capacity into a server. And, once the price steps into line, something that will have some teeth in the market. For context, a 32 GB DIMM is cost prohibitive for all but the most critical use cases in the vast majority of designs I'm involved with.

But it almost feels like a step backwards in many use cases. Loading up servers with RAM and letting workloads chew up large quantities of it as a RAM cache feels very legacy. Unless you're working with an application that absolutely must have near-zero latency for every transaction, there's usually little need to load the entire working set into memory, right? We're talking niche stuff, indeed.
Thus, I would imagine that databases and highly transactional workloads would be the initial target, with more mainstream folks who just want to "cram a bunch of VMs on a server" following once the price point is a little less insane. After all, the idea of having ESXi hosts that are CPU bound as a general rule sounds cool to me; we're almost always RAM bound in today's world.
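To put the working-set point above in more concrete terms, here's a toy sketch of the idea (plain Python, and in no way any vendor's actual implementation): keep only the hottest blocks in a modest amount of RAM and let everything else fall through to the datastore. The `backing_store` object is a made-up stand-in for whatever actually serves the miss.

```python
from collections import OrderedDict

class HotBlockCache:
    """Toy LRU read cache: hold only the hottest blocks in RAM and
    fall through to the datastore for everything else."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block address -> data, most recent last

    def read(self, addr, backing_store):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)      # hit: refresh recency
            return self.blocks[addr]
        data = backing_store.read(addr)        # miss: pay disk/SAN latency
        self.blocks[addr] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict the coldest block
        return data
```

The point being: if the hot blocks fit in a cache a fraction the size of the dataset, most reads come back at memory speed without hauling the whole working set into RAM.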
FVP’s Distributed Fault Tolerant Memory and Infinio’s Accelerator
I will admit that I knew it was coming, but I was still impressed to listen in on the Distributed Fault Tolerant Memory (DFTM) discussion by the folks at PernixData. You can peer over the new FVP goodies in this post written up by "the" Frank Denneman if you choose (recommended). The geek inside me has always been a bit blown away by the idea of clustering RAM across hosts to create a new, Speedy Gonzales type of storage tier, especially after first seeing it offered by the friendly faces at Infinio Systems. It appears there are now two ways to do "server-side flash memory acceleration" (I need to come up with a better name for this) that use two completely different architectures.
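Purely to illustrate the clustered-RAM concept (and decidedly not how PernixData or Infinio actually build it), here's a rough Python sketch of the fault tolerance angle: a write isn't acknowledged to the VM until it lands in local RAM and has been replicated to at least one peer host, so losing a single host doesn't lose acknowledged data. The `peer.replicate()` call and `backing_store` object are hypothetical stand-ins.

```python
class PeerUnreachable(Exception):
    """Raised by the hypothetical peer client when a host can't be reached."""

class ClusteredRamTier:
    """Toy sketch of clustered, fault-tolerant RAM as an acceleration tier."""

    def __init__(self, peers, replicas=1):
        self.local = {}           # block address -> data, held in local RAM
        self.peers = peers        # hypothetical clients for the other hosts
        self.replicas = replicas  # peer copies required before acknowledging

    def write(self, addr, data):
        self.local[addr] = data
        acked = 0
        for peer in self.peers:
            try:
                peer.replicate(addr, data)   # assumed RPC into a peer's RAM
                acked += 1
            except PeerUnreachable:
                continue
            if acked >= self.replicas:
                return True                  # safe to acknowledge the write
        raise RuntimeError("not enough peers for a fault-tolerant write")

    def read(self, addr, backing_store):
        if addr in self.local:
            return self.local[addr]          # hot block at memory speed
        return backing_store.read(addr)      # cold block from the datastore
```

That synchronous copy to a peer before the acknowledgment is what turns "a RAM cache" into something you could reasonably trust with write-back duty.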

The Infinio Accelerator uses a clever appliance that intercepts NFS traffic to add some spice to your vSphere-based NFS datastores, while FVP bakes in a VIB that operates a bit more mysteriously under the covers. I don't mean that in a bad way; it's just not very obvious what FVP is doing if you were to simply glance at the server. FVP will also accelerate block-based storage (VMFS) and now NFS.
This isn’t meant as a winners vs losers discussion. I think both approaches get my architect juices flowing – I now have two different widgets in my tool belt that can be used to solve problems. Great!
Final Thoughts
Elevating the conversation a bit, the interesting takeaways that I have derived are as follows:
- Flash is being used in some really wild and awesome ways. This is good.
- Attaching flash over SAS or SATA seems to be too much of a bottleneck. DIMMs may be the way to solve that, assuming PCIe or NVMe doesn't eat their lunch first.
- You can put flash on a DIMM and – with a little bit of BIOS mojo – make it appear like RAM. And that’s good, too.
- Is it possible that in 3 to 5 years we'll just use RAM for performance and SAN for capacity (or go without a SAN and use server-side storage or VSAs)?
I don't think this conversation ends with roll-your-own servers, either. The "hyperconverged" guys can also take advantage of these technologies to supersize their offerings. Would you like a large order of flash with that? Either way, flash on a DIMM and clustered RAM are some darn cool technologies that appear to complement one another very nicely. The data center is looking more exotic every week. Or, at least … it will.