As the price of Solid State Drives (SSDs) continues to plummet, I see more and more chatter across the Internet tubes focused on what type of drive to buy in a home lab scenario. The main contenders are consumer-grade SSDs, such as the Kingston SSDNow or other cheaper MLC flash disks. These products offer a somewhat attractive price per GB for hot data consumed by virtual machines and virtual appliances requiring a modest footprint. These drives are typically too small to handle any sort of cold storage, such as files, movies, and other large-format media that are very well served by capacity-focused spinning disk.
So, which do you go with?
The Container Approach
This is a bit more of a legacy design that uses containers of storage, sometimes called silos of storage, to meet your goals. You will create individual tiers of storage based on their speed and other characteristics. This could mean an entire array filled with flash, and another filled with spinning disk. It could also mean a single array that has a mixture of some flash and some spinning disk with disparate storage groups for each. The flash storage will typically be high speed but low capacity, and spinning disk the opposite.
It wouldn’t make sense to mix the two types as you wouldn’t have control over which disk served the IO – RAID is not built for mixed drive types, even if we were talking about SATA and SAS. As workloads are created, you will ultimately be on the hook to place the workload on one storage container or another, but not both. This can be a challenge when you find yourself running out of flash capacity but still need performance.
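That placement decision boils down to checking a workload's capacity and performance needs against what each silo can offer, with no option to split the workload across both. A minimal sketch of the logic (the tier names, free space, and IOPS budgets below are hypothetical illustrations, not figures from any real array):

```python
# Hypothetical silos for the container approach: each tier is sized
# independently, and a workload must land entirely on one of them.
TIERS = {
    "flash":    {"free_gb": 240,  "iops_budget": 40000},
    "spinning": {"free_gb": 8000, "iops_budget": 600},
}

def place_workload(size_gb, iops_needed):
    """Pick the first tier that satisfies BOTH capacity and performance.

    Returns the tier name, or None when no single silo fits -- the exact
    bind described above, where flash runs out of room before you run
    out of hot workloads.
    """
    for name, tier in TIERS.items():
        if size_gb <= tier["free_gb"] and iops_needed <= tier["iops_budget"]:
            return name
    return None

print(place_workload(100, 5000))  # small and hot: lands on flash
print(place_workload(500, 5000))  # too big for flash, too hot for spinning: None
```

The all-or-nothing return value is the weakness of the container design: a workload that is both large and busy simply has no home.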
[symple_column size="one-half" position="first"]
[symple_highlight color="green"]Advantages[/symple_highlight]
- If you have an older storage array and wish to purchase a new one (for new features or size), repurpose the old one for spinning disk capacity
- Grow out just capacity or just performance as needed
- Some folks may need (or just like) having isolated storage groups for dev/test and production
[/symple_column]
[symple_highlight color="red"]Disadvantages[/symple_highlight]
- Expensive initial investment of hardware
- N+1 redundancy (minimum) required in two places
- Multiple points of management and scale
- Not a good fit for workloads that need both high capacity and high performance
[symple_clear_floats]

The Hybrid and Server-Side Cache Approach
Another method for designing your home lab storage is to go with a hybrid approach. This uses both flash and spinning disk in some sort of meaningful combination. Typically, the flash drives will be used as a cache layer, as is the case with Synology’s SSD Read Cache technology. There’s also the choice of putting flash into your vSphere hosts and taking advantage of server-side caching with vSphere Flash Read Cache, PernixData FVP, or SanDisk FlashSoft for VMware.


The Synology approach is rather costly: dedicating two slots to an SSD Read Cache effectively forces you into one of their larger, beefier arrays. However, if you have a large number of hosts in your home lab, it may still be cheaper than equipping them all with local flash drives – although if you are in this boat, you can probably afford whatever you want. 😉
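The payoff of any of these caching approaches follows the standard effective-latency estimate: with a hit ratio h, average read latency is roughly h·t_ssd + (1−h)·t_hdd. A quick sketch with ballpark latencies (the 0.2 ms SSD and 8 ms HDD figures are illustrative assumptions, not measurements of any particular drive):

```python
def effective_latency_ms(hit_ratio, ssd_ms=0.2, hdd_ms=8.0):
    """Weighted-average read latency for a flash cache fronting spinning disk."""
    return hit_ratio * ssd_ms + (1 - hit_ratio) * hdd_ms

# Even a modest hit ratio cuts average latency dramatically.
for h in (0.0, 0.5, 0.9):
    print(f"hit ratio {h:.0%}: {effective_latency_ms(h):.2f} ms")
```

This is why a relatively small amount of flash can carry a capacity-focused array: the hot working set drives the hit ratio, not the total dataset size.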
[symple_column size="one-half" position="first"]
[symple_highlight color="green"]Advantages[/symple_highlight]
- Relatively low investment in the storage array for capacity
- Performance scales out as new hosts are brought online or as additional SSDs are added to the hosts
- Evaluation or NFR licensing eats away at much of the cost of implementing this solution
- Provides a great balance of performance and capacity for the workloads
[/symple_column]
[symple_highlight color="red"]Disadvantages[/symple_highlight]
- The Synology solution is relatively expensive for many smaller home labs
- Evaluation software may become tiresome to refresh or renew
- Cheap SSDs can quickly become a brick in some caching scenarios
[symple_clear_floats]
The Virtual Storage Array Approach
The final method is to use software to build your storage array. I’m not going to be brazen and call this software-defined storage, although some vendors do and others do not. 😉
Your choices in this arena include VMware Virtual SAN (VSAN) and HP’s StoreVirtual VSA for the “distributed local storage” route, or Nexenta’s NexentaStor Community Edition for a dedicated storage array / virtual machine. All of these solutions leverage a mixture of drives in some configuration, although NexentaStor probably has the most flexible arrangement of disk layouts to meet your needs.
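One thing to budget for with the distributed local storage route is replication overhead: VSAN-style mirroring keeps one copy of each object per failure you want to tolerate, plus the original, so usable space is roughly raw space divided by the copy count. A rough back-of-the-envelope sketch (it deliberately ignores metadata and slack-space overhead, which real deployments must also reserve):

```python
def usable_capacity_gb(raw_per_host_gb, hosts, failures_to_tolerate=1):
    """Rough usable capacity for mirrored distributed local storage.

    Mirroring keeps (failures_to_tolerate + 1) copies of each object,
    so usable space is approximately raw space / copy count. This is an
    estimate only; it ignores metadata and free-space overhead.
    """
    raw = raw_per_host_gb * hosts
    return raw / (failures_to_tolerate + 1)

# Three hosts with 1 TB of local disk each, tolerating one host failure:
print(usable_capacity_gb(1000, 3))  # 1500.0
```

In other words, half (or more) of the spinning disk you spread across your hosts goes to redundancy rather than capacity.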
[symple_column size="one-half" position="first"]
[symple_highlight color="green"]Advantages[/symple_highlight]
- Can eliminate the need to have a storage array entirely (except in the case of a dedicated NexentaStor box).
- Easy to scale out more capacity or performance
[/symple_column]
[symple_highlight color="red"]Disadvantages[/symple_highlight]
- Evaluation software may become tiresome to refresh or renew
- Requires a fair number of spinning disks spread across all of your hosts to handle write IO

Thoughts
Have you reviewed what you are actually going to run in your lab? Most folks have not, and end up starting with the hardware bits before a design has formed. Don’t fall into this trap – have a clear vision of what you want to test, tinker with, and ultimately put into production first. These items fall into the conceptual design. Take a step back and evaluate your requirements, constraints, and assumptions. This is a great habit to form now, when things are small and relatively easy, and it can be applied to nearly any size design.
My lab runs a combination of architectures. I have a Synology DS411 with all spinning disk and a DS2411+ with all flash disk (the container approach). I also use PernixData FVP on all my hosts, which points back to the DS411 (the server-side caching approach). When time permits, I plan to test out the HP StoreVirtual VSA and VMware Virtual SAN and have all three architectures in play. 😉
You can find a number of other lab resources here.