Nearly a year ago, I wrote a post detailing Three Example Home Lab Storage Designs using SSDs and Spinning Disk. The idea was to offer a myriad of “make your own cake” style recipes for creating and consuming storage resources. However, there still seems to be a bit of confusion surrounding the use of SSDs for lab storage, especially after I wrote about the DS414slim with 4 SSDs. Essentially, people are disappointed that I was not able to achieve the combined rated performance of all 4 SSDs (or even 3 of them, since it was a RAID 5 set) when running through some benchmarks. And you know what?
That’s not the point.
It’s very difficult to extract the full rated performance of flash devices through a storage array. The big guys do it by arming their boxes with massive amounts of CPU, RAM, PCIe cards, and software wizardry that takes advantage of all those hardware resources. This is rarely an option in a home lab unless you’re Jason Boche or Michael Webster. 😉
Requirements or Bust
The reason I use, and encourage using, flash drives for a storage array that serves a virtualization lab is the nature of the workloads. Virtualization labs, regardless of hypervisor, require storage that can be consumed by virtual machines, not family photos or cat videos. Typically, those virtual machines generate load from a number of sources: the operating system itself, the applications that run on the OS, and the various workloads placed on those applications. This translates into needing (in this order):
- High IOPS
- Low latency
- Reasonable throughput
- Reasonable capacity
I’ll leave heat, power, and cooling off the list. Those factors are not driven by the workload, but instead by the environment itself (and often show up in the form of constraints, not requirements).
Performance Trumps Capacity
In this use case, at least. The ability to provide a high amount of performance (in terms of IOPS and latency) is what we are looking for to satisfy our requirements. In other words, solid state drives (SSDs).
Our devil’s advocate decides to use four 2 TB SATA hard drives, or spinning disk, to power storage in a RAID 5 set, giving roughly 5.7 TB of usable space.
However, most SATA disks can only put out somewhere in the neighborhood of 100-200 IOPS. They also puke on any sort of random IO stream, because a physical component (the drive head) has to seek out the location on a platter. So even if we cheated and said 200 IOPS per disk times 3 non-parity disks = 600 IOPS with spinning disk, which isn’t realistic, the number is still very low and depends primarily on sequential workloads. Even if you have some really killer secret sauce and a huge cache baked into the NAS that could absorb the IO blender, the performance would still be questionable under load, because eventually the IO stream would overflow those tricks.
But let’s be generous. Let’s just say the NAS can indeed do 600 back-end IOPS even if the data stream were 100% 4K random writes. (Again, not realistic; I’m trying to be nice to the spinning disk.) That’s roughly 2.34 MBps of writes, and that assumes zero latency. In reality, every write has to be acknowledged before it is considered complete, which drags the number down even further.
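If you’d like to sanity check that math, here’s a quick back-of-the-envelope sketch in Python. It uses the same generous assumptions as above (200 IOPS per spindle, 3 non-parity disks, 4K writes); none of these figures are measurements.

```python
# Generous spinning disk math from the paragraph above -- not a benchmark.
PER_DISK_IOPS = 200    # optimistic figure for a single SATA spindle
DATA_DISKS = 3         # non-parity members of the 4-disk RAID 5 set
BLOCK_SIZE_KB = 4      # 100% 4K random write stream

total_iops = PER_DISK_IOPS * DATA_DISKS                 # 600 IOPS
throughput_mbps = total_iops * BLOCK_SIZE_KB / 1024.0   # ~2.34 MBps

print(f"Back-end IOPS: {total_iops}")
print(f"4K write throughput: {throughput_mbps:.2f} MBps")
```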
Pretty much any NAS controller, in comparison, can easily handle 600 IOPS – the underlying operating system and threads will not be the bottleneck. The slow, mechanical, spinning disk will be the bottleneck. Your virtual machines, which are treating the NAS like a SCSI device, will be frustrated as their reads and writes sit bottled up in a small set of SATA drives, and sad pandas will abound. Lots of space, but no performance!
Where Did My SSD’s Performance Go?
If we replace those tired spinning disks with flash devices, what happens? The bottleneck moves! Instead of trying to tap dance with cement shoes, we’ve tied ocelots to our feet and are able to cut a rug like nobody’s business*. Except that those ocelots are stuffed inside of a NAS, which has suddenly become the bottleneck because it can only flush about 4000-5000 IOPS (depending on the read/write profile) to the SSDs.

Let’s assume you used 256 GB SSDs, which gives about 700 GB of usable capacity. You’re now able to provide roughly 11% of the original capacity, but likely well over 1000% of the original performance. And that’s the rub – performance is what we needed, not capacity. This exercise used to be performed by adding more spinning disk – in fact, you’d define your needed IOPS first, then figure out how many drives it would take to hit that number, and the resulting capacity would almost always be greater than what you actually required. Those days are, thankfully, coming to a close.
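Here’s that comparison as a rough sketch, again in Python. The usable capacities and IOPS figures are the rounded, hypothetical numbers from this post, not benchmark results:

```python
# Comparing the two hypothetical 4-bay RAID 5 builds -- rounded figures only.
hdd_usable_tb, hdd_iops = 5.7, 600      # four 2 TB SATA spindles (generous IOPS)
ssd_usable_tb, ssd_iops = 0.7, 5000     # four 256 GB SSDs behind a small NAS

capacity_ratio = ssd_usable_tb / hdd_usable_tb   # ~0.12 of the original capacity
iops_ratio = ssd_iops / hdd_iops                 # ~8.3x the *generous* HDD figure

print(f"Capacity: {capacity_ratio:.0%} of the spinning disk build")
print(f"IOPS: {iops_ratio:.1f}x the spinning disk build")
```

Against a more realistic spinning disk number (the 600 was already a gift), the performance multiple only grows.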
In essence, I don’t care where the SSD’s “rated” performance went. The retail box may say something like “this drive can do 50K IOPS,” and you get really excited thinking you’d get 150K IOPS out of a set of them. However, that performance is typically rated for a single device attached via the PCIe or SATA 3 bus in your computer for direct consumption. That means a 1:1 relationship with a local device. The trouble with physics is that you can’t get around those pesky laws, and the further you place something away from a workload, the longer it’s going to take to send and receive data.
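To put a rough number on that, here’s a hedged sketch using Little’s Law (achievable IOPS ≈ outstanding IOs ÷ average latency). The queue depth and latency values below are hypothetical illustrations, not measurements of any particular SSD or NAS:

```python
# Why distance (latency) eats rated IOPS: a Little's Law illustration.
# Queue depth and latency values are hypothetical, for illustration only.
def achievable_iops(outstanding_ios: int, latency_ms: float) -> float:
    """Approximate IOPS a workload can drive at a given average latency."""
    return outstanding_ios / (latency_ms / 1000.0)

QUEUE_DEPTH = 8                 # outstanding IOs from a single busy VM
LOCAL_SSD_LATENCY_MS = 0.2      # device on the local SATA/PCIe bus
NAS_SSD_LATENCY_MS = 2.0        # same device behind a NAS and a network hop

print(f"Local device: {achievable_iops(QUEUE_DEPTH, LOCAL_SSD_LATENCY_MS):,.0f} IOPS")
print(f"Behind a NAS: {achievable_iops(QUEUE_DEPTH, NAS_SSD_LATENCY_MS):,.0f} IOPS")
```

With those made-up numbers, the very same flash delivers 40,000 IOPS locally but only 4,000 IOPS once the NAS and a network hop are in the path – the device didn’t get slower, the bottleneck just moved.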
So remember – it’s about shifting the bottleneck.
Note
As I wrote in my original post on home lab storage design, there are many other hybrid approaches that are perfectly valid: server-side caching, using server memory itself, using a flash cache in the NAS, and so on. Hopefully this post clarifies the advantages of using SSDs for a virtualization home lab.
* The Wahl household does not condone or encourage tying cats to your body.