Nimble Storage’s New CS700 Adaptive Flash Array Gets Moar Power

Although I’ve been looking to use the phrase “moar power” in a blog post for quite some time (Home Improvement fans, anyone?), this particular update on Nimble Storage was a treat to write. I’m sure everyone is tired of labels, but this “hybrid flash storage” vendor has done well in the market over the past several years. They seem to have a strong focus on the customer experience (InfoSight), the user interface, and making day-to-day operational efforts as minimal and invisible as possible. With that said, let’s check out some of the new digs bundled in with the adaptive flash array announcement released today.

The New CS700

There’s a new kid on the block sporting a CASL (Cache Accelerated Sequential Layout) architecture: the CS700. This is the third series model to be made available by Nimble Storage, with the previous two being the CS200 and CS400 series arrays.

I’m not going to dig into rated IOPS values since, well … everyone’s read/write profile is different, and the industry has generally settled on either 100% reads with 4K random blocks or something else I find equally silly. Let’s just say this thing does six-figure IOPS and that your mileage may vary. 🙂

Hey, look, it's a storage array controller!

This particular unit sports the new Intel Ivy Bridge processors, which unlock a plethora of improvements. This includes Intel’s Non-Transparent Bridge (NTB) technology, which creates a blazing fast interconnect between two Intel systems. It’s far more robust than the earlier SAS-connected approach and provides a streamlined method for sharing cache data between an active and passive controller node.

Below is an example diagram (source):

NTB is a good thing

The CS700 controller also sports a number of other essential improvements over the previous models, such as: more DRAM; high-performance, triple-parity RAID; and SAS-connected SSDs. In a nutshell, CASL is now faster because the bottlenecks that existed in the compute hardware have been reduced.

An All-Flash Shelf Option

Clustering nodes creates a rather large layer of flash

As a quick overview, a Nimble Storage array comes with 12 HDDs and 4 SSDs in the controller, along with 15 HDDs and 1 SSD in each shelf. The shelf SSD is mainly there for metadata. There was no option to increase the quantity of SSDs until today’s news of an All-Flash Shelf. I think this is really the meat and potatoes of today’s announcement.

Now, each controller unit can also have an All-Flash Shelf (AFS) to expand upon the default 4 SSDs in the flash pool. The AFS holds 16 SSDs and can be populated in groups of 4 SSDs at a time. This lets you start at 4 SSDs, add in the AFS, and then bump up to 8, 12, 16, or 20 total SSDs for your flash pool as demand requires. The process to add SSDs is non-disruptive. When fully populated with 800 GB flash drives, the unit provides 12.8 TB of usable flash.

Because you can also cluster the Nimble Storage arrays, there’s the possibility of creating a 4-node cluster (which is the maximum), each node with an AFS. This brings the raw flash capacity up to 64 TB (80 SSDs @ 800 GB each), or about 51 TB usable.
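As a back-of-the-envelope check on those numbers, here’s a quick sketch. The drive counts and 800 GB size come from the announcement; the 0.8 raw-to-usable ratio is my own assumption, inferred from the quoted 12.8 TB and ~51 TB usable figures:

```python
# Flash pool math for the CS700 + AFS, per today's announcement.
SSD_GB = 800        # drive size quoted in the announcement
BASE_SSDS = 4       # SSDs in the base controller unit
AFS_SSDS = 16       # SSDs in a fully populated All-Flash Shelf
USABLE_RATIO = 0.8  # my assumption, inferred from the quoted usable figures

def flash_pool_tb(nodes):
    """Return (raw_tb, usable_tb) for `nodes` clustered arrays, each with a full AFS."""
    ssds = nodes * (BASE_SSDS + AFS_SSDS)
    raw_tb = ssds * SSD_GB / 1000
    return raw_tb, raw_tb * USABLE_RATIO

print(flash_pool_tb(1))  # single array: 16 TB raw
print(flash_pool_tb(4))  # max 4-node cluster: 64 TB raw, ~51 TB usable
```

The single-array usable number only lands on 12.8 TB under that assumed ratio, so treat it as illustrative rather than a vendor-published formula.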

[symple_box color="yellow" fade_in="false" float="center" text_align="left" width=""]
Note: The All-Flash Shelf is different from an ES1 expansion shelf. The AFS is only used as part of the flash cache, while expansion shelves provide additional capacity.
[/symple_box]

IOPS Bling from Controller Upgrades

The interesting thing about CASL is that the flash devices really don’t provide any of the write performance benefits that are offered by the array. While many folks are probably used to the idea of writing to flash as a sort of “IO sponge” – which is ultimately emptied onto disk – the Nimble Storage architecture works a little differently. While it’s true that writes do hit NVRAM and are flushed down to disk, the flash devices are really just there to accelerate reads.

[symple_box color="blue" fade_in="false" float="center" text_align="left" width=""]
Interested in the write path? You can read more about this in the Hybrid Flash Array From Nimble Storage Makes iSCSI Sexy Again article.
[/symple_box]

With the CS700, the effective IOPS that each HDD can handle sits at about 10K. These are the same HDDs used in the other boxes, or near enough that it doesn’t matter. So, how is that even possible? Because the CASL software reduces the time it takes to write data to disk by various means (faster CPU, Intel NTB, etc.), the quantity of IOPS climbs to a value that even most SSDs would be envious of.
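To see why a five-figure effective IOPS number per HDD is plausible, here’s a rough illustration of what happens when small random writes are coalesced into long sequential stripes. The disk specs below are my own assumed ballpark figures for a 7.2K RPM drive, not vendor numbers:

```python
# Illustrative (assumed, not vendor-published) specs for one 7.2K RPM HDD:
SEQ_MBPS = 100    # sustained sequential write throughput
BLOCK_KB = 4      # typical small random write size
RANDOM_IOPS = 150 # raw random-write rating of the same disk

# If the array coalesces small random writes into sequential stripes, each
# disk can ingest roughly (sequential bandwidth / block size) logical writes
# per second instead of being limited by seek time:
effective_iops = SEQ_MBPS * 1024 // BLOCK_KB
speedup = effective_iops // RANDOM_IOPS
print(effective_iops, speedup)
```

Even with conservative inputs, sequentializing the write path lifts a single spindle from hundreds of random IOPS into the tens of thousands, which is the general idea behind CASL’s layout.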

Moving the bottleneck away from the compute layer results in greater IOPS

I think it’s a rather clever way to get some impressive performance numbers out of spinning rust disks.


Long after my encounter at Tech Field Day, the architects at Nimble Storage continue to impress. Their Proactive Wellness results, fueled by the InfoSight monitoring that pulls in a bajillion data points, also get a nod from me: 99.999% uptime across their customer base, also known as “five nines.” Additionally, the team shared that over 90% of support cases are automatically detected and over 80% of those are resolved automatically. Skynet, anyone?

Since this particular briefing left me hungry for a bit more, I’ll be taking a deeper dive with a hands-on review of the new CS700 in a few weeks. I’ll make sure to share the link for that review via my various social media channels when it is published.