Synology DS411 vSphere Home Lab Storage – Protocol Bakeoff

If you own a Synology NAS, you might find some value in this brief post. I recently decided to run a bakeoff on my own DS411 (a 4-bay enclosure) to find out which protocol reigns supreme when it comes to presenting storage to my vSphere home lab: iSCSI or NFS?

So, I completely wiped the array and formatted it two ways: as a single iSCSI LUN (block-level) and as a single volume with an NFS export. The DS411 was populated with 4x 120 GB SSDs in a RAID-5 configuration.
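As a quick sanity check on the layout: RAID-5 gives up one drive's worth of capacity to parity, so usable space is (N − 1) × drive size. A tiny sketch of the math:

```python
def raid5_usable_gb(drive_count: int, drive_gb: int) -> int:
    """Usable capacity of a RAID-5 set: one drive's worth goes to parity."""
    if drive_count < 3:
        raise ValueError("RAID-5 needs at least 3 drives")
    return (drive_count - 1) * drive_gb

# 4x 120 GB SSDs -> 360 GB usable, before filesystem overhead
print(raid5_usable_gb(4, 120))
```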

Benchmark Testing

To get some valid statistics out of the array, I fired up the VMware I/O Analyzer appliance and ran a series of random workloads. I'm not really interested in sequential workloads, as they're rare in my lab (and in most VMware environments, for that matter: many VMs issuing IO at once means the blended stream hitting the array is effectively random).

  1. 4KB 100% Random 100% Read
  2. 4KB 100% Random 0% Read
  3. 512KB 0% Random 100% Read (Throughput Test)
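If you don't have the I/O Analyzer appliance handy, fio can drive roughly comparable workloads from a Linux guest. These are my own approximations of the three tests above, not the appliance's exact Iometer specs, and the target path is a placeholder:

```python
# Approximate fio equivalents of the three workloads above.
# TARGET is a hypothetical test file, not a real device path.
TARGET = "/tmp/fio-testfile"

WORKLOADS = [
    # (job name, block size, fio access pattern)
    ("4k-rand-read",  "4k",   "randread"),   # 4KB 100% random, 100% read
    ("4k-rand-write", "4k",   "randwrite"),  # 4KB 100% random, 0% read
    ("512k-seq-read", "512k", "read"),       # 512KB sequential read (throughput)
]

def fio_cmd(name: str, bs: str, rw: str, runtime: int = 300, iodepth: int = 16) -> str:
    """Build an fio command line for one 5-minute, direct-IO workload."""
    return (f"fio --name={name} --filename={TARGET} --size=1g "
            f"--rw={rw} --bs={bs} --direct=1 --ioengine=libaio "
            f"--iodepth={iodepth} --runtime={runtime} --time_based")

for name, bs, rw in WORKLOADS:
    print(fio_cmd(name, bs, rw))
```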

Bring on the IOPS!

Performance

Here is a breakdown of results by transport protocol. Each test was run for 5 minutes and repeated for accuracy.

iSCSI Statistics

I was honestly surprised at how badly the Synology handled the write workload over iSCSI. The LIO iblock process appears to be the bottleneck, consuming about 70% CPU (with the remainder taken up by the RAID-5 process and other activity).

  1. 4KB 100% Random 100% Read – 5122 IOPS, 4 ms average latency
  2. 4KB 100% Random 0% Read – 681 IOPS, 28 ms average latency (Note: Appears to be a CPU bottleneck on array, high latency)
  3. 512KB 0% Random 100% Read (Throughput Test) – 98.14 MBps
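One way to sanity-check these numbers is Little's Law: outstanding IO ≈ IOPS × latency. Running the two random tests above through it (my own check, assuming the reported latencies are run-wide averages) shows both tests were driven at roughly the same queue depth, so the latency gap really is the array, not the load generator:

```python
def outstanding_io(iops: float, latency_ms: float) -> float:
    """Little's Law: concurrency = arrival rate x time in system."""
    return iops * (latency_ms / 1000.0)

# iSCSI results from above
print(round(outstanding_io(5122, 4)))   # read test: ~20 IOs in flight
print(round(outstanding_io(681, 28)))   # write test: ~19 IOs in flight
```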

Even the read IOPS weren't that spectacular, considering the RAID-5 set has three SSDs' worth of data to feed reads down the pipe.

NFS Statistics

NFS ran like a champ. I saw 3 to 4 nfsd process threads spawn during the tests. Here's a chart of the latency during the 100% random write test!

All of the tests did really well and made the SSD shine:

  1. 4KB 100% Random 100% Read – 8750 IOPS, 3 ms average latency
  2. 4KB 100% Random 0% Read – 4604 IOPS, 1 ms average latency
  3. 512KB 0% Random 100% Read (Throughput Test) – 109.23 MBps
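Putting the two result sets side by side, NFS wins every test, most dramatically on writes. A quick comparison using the numbers above:

```python
# (iSCSI result, NFS result) for each test from the runs above
results = {
    "4KB random read (IOPS)":  (5122, 8750),
    "4KB random write (IOPS)": (681, 4604),
    "512KB seq read (MBps)":   (98.14, 109.23),
}

for test, (iscsi, nfs) in results.items():
    print(f"{test}: NFS is {nfs / iscsi:.2f}x iSCSI")
# -> reads ~1.71x, writes ~6.76x, throughput ~1.11x
```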

Here’s a look at the IOmeter and Synology CPU Usage during the 100% random write test.

A healthy amount of IOPS for a 100% random write test!

The DS411 divides and conquers the NFS workload

Thoughts

I've always used NFS on the Synology DS411 and decided to give iSCSI a whirl. Because I have a rather low-end model, I'm limited on array resources (CPU and memory). While memory is never an issue (usage typically hovers in the 10-20% range), the CPU is heavily taxed by iSCSI. NFS is the clear choice for presenting storage to VMware workloads on this array.