I’ve long since settled on Synology as the cornerstone piece of storage hardware in my vSphere home lab, mainly on word-of-mouth recommendations from folks like Jason Nash, but also after a lengthy review of various products in the market. Although I’ve enjoyed my DS411 NAS, which can house four drives, I’ve been hitting a capacity vs. performance wall. SSDs are still rather pricey, with 120 GB drives offering the best price-per-GB ratio during the occasional web sale. I’ve been using four of these in a RAID 5 configuration, which works out to right around 300 GB of usable space; that isn’t much wiggle room. Alternatively, SATA drives come in two flavors: sluggish but big, and still-kinda-slow but expensive (10K RPM Raptors).
Now, don’t get me wrong – the DS411 has served me well and does a great job of hosting VMs for the lab. But the addition of a fourth ESXi host and some more VDI lab work have pushed me into finding another solution.
NAS Design
Being already familiar and happy with my current Synology NAS, I set my sights on the larger units that can hold 12 drives and have at least two NICs. Both the DS2411+ and the DS3612xs meet those requirements; however, the DS3612xs was too expensive for my taste and most likely overkill. This left the DS2411+ as my ultimate choice.
My next decision revolved around how to populate the array – split between SATA and SSD, or go all flash? I’m somewhat “done” with spinning disk. I don’t need massive capacity, I need performance. SATA drives are a poor fit for my specific purposes unless I wanted to run a large number of them in some complex RAID configuration. As such, I stuck with the 120 GB SSD size, which offers the most effective price per gigabyte on the market at this time, and went about populating all twelve slots with flash whenever I could find a good deal.
This particular lab expansion was aided by the good folks at Xangati, and I greatly appreciate their sponsorship to the Wahl Network lab. I should try to find a bacon and/or Star Wars sticker and affix it to the NAS. 🙂
LUN Creation
After putting all twelve SSDs into the array and going through the standard DSM loading process, I had some choices to make. Synology offers a few different ways to present iSCSI LUNs – as regular files, as a single block, or as multiple blocks. I experimented with the various methods, and found that one of them supports all of the VAAI primitives: iSCSI LUN regular files.
iSCSI LUN (Regular Files)
This seemed like the best way to go forward. The use of a regular files LUN requires a volume to be created first, so I created a single, large volume of about 1.07 TB. I left a bit of room for some test volumes and any emergency expansions of my primary volume. The NAS has right around 1.28 TB of storage in total.
iSCSI LUN Properties
After selecting the first option to create an “iSCSI LUN (Regular Files)”, I was presented with some properties to configure. I made sure to enable both Thin Provisioning and VMware VAAI Support to leverage the advantages of the primitives and to comply with the requirements for upgrading to vSphere 5.1, as per Kendrick’s blog. Although, to be honest, I don’t know why I would ever set this to “No” 🙂
I carved up an iSCSI LUN of 800 GB and mapped it to a “Lab” iSCSI target. This is pretty much identical to how I did things on the DS411; when using thin provisioning, I typically like to size the LUN at roughly 80% of the underlying volume.
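For reference, hooking the ESXi hosts up to that target is only a couple of esxcli commands from the shell. This is a minimal sketch; the vmhba name and NAS address below are placeholders for whatever your environment actually uses:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50
esxcli storage core adapter rescan --adapter=vmhba33

After the rescan, the new device shows up on the host and can be formatted as a VMFS datastore.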
VAAI Support
When I first created the LUN, I assumed it would only support the ATS (Atomic Test & Set) VAAI primitive. However, the host reported that it supports all of them. Below is the output from a host; the bottommost NAA is the one for my LUN on the DS2411+.
esxcli storage core device vaai status get
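When the array is doing its job, each device in that list reports all four primitives as supported, along the lines of the sample below (the NAA shown here is just an illustrative placeholder, not my actual LUN identifier):

naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported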
I did try cloning a VM, and while it didn’t really finish all that quickly, no network traffic was sent down the wire. This leads me to believe that it is using the VAAI primitive, but the lack of the Synology “XS” grade (the top-end business line) in my DS2411+ means the task completion time is not exactly super speedy. In this particular instance, an 18 GB VM clone took about 4 minutes to complete.
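If you want to verify that the host side of the equation is even allowing the offload, the relevant advanced settings can be checked from the shell; a value of 1 means the primitive is enabled. These option names are stock vSphere 5.x settings, not anything Synology specific:

esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list --option /VMFS3/HardwareAcceleratedLocking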
Performance
As expected, the NAS will saturate pretty much everything upstream of it before the disks become the bottleneck, unless you run a very specific load test. In a raw throughput test I was able to easily saturate the link aggregation group of two uplinks, achieving about 216 MBps of throughput.
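Some quick back-of-the-napkin math on why I consider that saturated, assuming each 1GbE uplink tops out at roughly 125 MBps of raw bandwidth:

2 uplinks x ~125 MBps per link = ~250 MBps theoretical ceiling
216 MBps observed = roughly 86% of that ceiling

The remainder gets eaten by protocol overhead and imperfect load balancing across the aggregated links.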
Synology Graphs
One thing I really like about DSM 4.1 is the enhanced resource monitor. Below is a capture taken while copying from an old datastore on my DS411 to the new DS2411+ as I migrated VMs over – I enjoyed seeing the major difference in network rate vs disk rate. In this particular case, the bottleneck was the old array – it couldn’t chug any more throughput out.
Additionally, I ran a mixed set of workloads: 4K OLTP, 64K SQL, and a random 4K 50% read/write simulation. The idea was to just hammer the heck out of the NAS to see what would happen. The array took care of business with reasonable latency. Nothing spectacular here, but it did fulfill the workload requests without any CPU or latency spikes.
IOMeter Graph
I ran one final test just to see how many IOPS I could push. With three IO Analyzers running on my hosts I saw roughly 20,000 IOPS; with just two running, the IOPS were around 22,000. This number is overkill for anything I could really need on a normal day, but it is lower than what I assumed would be the result. I attribute the ceiling to the simple latency of pushing data over consumer-grade 1GbE to all three hosts, as the chart below shows about 4 ms. I’ll continue to tune and tweak to get this number higher.
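As a rough sanity check on the latency theory, Little’s Law ties the observed IOPS and latency to the amount of I/O in flight:

outstanding I/O ≈ IOPS x latency = 20,000 x 0.004 s = 80 I/Os in flight

At that sort of queue depth, pushing more IOPS would require shaving latency, and the consumer 1GbE path is what keeps that number hovering around 4 ms.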
Thoughts
I’m really digging this new NAS and have so far piled 22 running VMs on it with various workloads (VDI, SQL, monitoring solutions, domain services, etc.) with plenty of room left to spare. I’m also running vSphere 5.1 against it without any issues – including combing through the vmkernel.log file to look for warnings or errors. If you wanted, you could also take the drive bays in the DS2411+ and divide them into two volumes: one for SATA and another for flash. Perhaps one for VMs and another for file storage. I just decided to go for one big pool of performance-tier disk for VMs.
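If you want to do the same vmkernel.log sweep on your own hosts, a quick grep from the ESXi shell is enough to surface anything noisy; the pattern below is just the filter I would start with:

grep -iE "vaai|warning|error" /var/log/vmkernel.log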