The DS414slim NAS as vSphere Lab Storage – Protocols, Performance, and Thoughts

In a previous post, I walked through the unboxing and initial setup of a Synology DS414slim NAS. As promised, this post will focus entirely on performance, benchmarks, and thoughts around using this NAS in a virtualization lab.

Load testing the NAS and making the CPU weep

Global Test Configuration

While the following NFS and iSCSI tests required some unique configuration to support the protocols for each, there are a number of global configuration items that did not change. These are as follows:

  • Network: Both of the 1 Gbps NICs on the NAS were bonded using LACP.
  • Network: All tests used a single VLAN across the hosts and NAS, negating any need for routing.
  • Network: Jumbo frames (an MTU of 9000 bytes) were configured on the vSwitch, VMkernel ports, physical switch, and NAS. End-to-end jumbo frame connectivity was verified with vmkping using an 8972-byte payload (to account for protocol overhead) and fragmentation disabled.
  • Network: iSCSI used VMkernel port binding across two ESXi host interfaces.
  • Versions: Synology DSM 5.0-4493 Update 2 and ESXi 5.5 build 1881737.
  • Config: ATTO Disk Benchmark was installed on a Windows 2008 R2 server with 2 vCPUs and 8 GB of RAM. The primary partition was located on a separate array. The test partition was created as a 40 GB disk on the DS414slim using a separate VMware paravirtual SCSI device.
  • Config: I/O Analyzer version 1.6.0 was deployed on a separate array. The bundled secondary disk was destroyed and replaced with a 16 GB disk on the DS414slim using a VMware paravirtual SCSI device.
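For reference, the jumbo frame check and the iSCSI port binding above can both be done from the ESXi shell. The VMkernel interface names, software iSCSI adapter name, and NAS IP below are placeholders for whatever your environment uses:

```shell
# Verify end-to-end jumbo frames: an 8972-byte payload plus ICMP/IP headers
# equals 9000 bytes; -d sets "don't fragment" so an undersized hop in the
# path fails the ping instead of silently fragmenting.
vmkping -d -s 8972 -I vmk1 192.168.1.50

# Bind two VMkernel ports to the software iSCSI adapter (names are examples).
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
```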

NFS Test Configuration

The DS414slim was configured as a single RAID 5 (3+1) volume across the four 256 GB SanDisk SDSSDHP256G SSDs (238.47 GB usable each), providing roughly 690.89 GB of usable space.
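As a quick sanity check on those numbers (my arithmetic, not Synology's): RAID 5 keeps N−1 disks' worth of data, and the gap between that and the reported 690.89 GB is filesystem and DSM system overhead.

```shell
# RAID 5 (3+1): usable = (N - 1) x per-disk capacity
awk 'BEGIN { printf "%.2f GB raw usable\n", 3 * 238.47 }'

# DSM reports ~690.89 GB after filesystem/system overhead
awk 'BEGIN { printf "%.1f%% overhead\n", (1 - 690.89/715.41) * 100 }'
```

That works out to 715.41 GB raw and roughly 3.4% overhead, which is in line with what DSM typically reserves.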

The RAID5 (3+1) volume details

A single shared folder was made available via NFS and mounted to the ESXi hosts.
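Mounting the export on each host is a one-liner from the ESXi shell. The NAS IP, export path, and datastore name here are made up for illustration:

```shell
# Mount the NFS export as a datastore (repeat on each host)
esxcli storage nfs add --host 192.168.1.50 --share /volume1/vmstore --volume-name ds414slim-nfs

# Confirm the mount is up
esxcli storage nfs list
```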

NFS Test Results

Let’s see how the DS414slim tackles a number of benchmarks using NFS.

  • ATTO Disk Benchmark: Total Length of 2 GB, Force Write Access Enabled, Direct I/O Enabled, Overlapped I/O Enabled, Queue Depth of 4
  • I/O Analyzer: 4K, 8K, and 64K 100% Read 100% Random Tests; 4K, 8K, and 64K 100% Write 100% Random Tests.




iSCSI Test Configuration

For the iSCSI tests, I destroyed the NFS folder, deleted the volume, and created a single block-level iSCSI LUN, which is the configuration that offers the best iSCSI performance on this NAS.

A single, block level iSCSI LUN

iSCSI Test Results

Here are the results from the tests using iSCSI.

  • ATTO Disk Benchmark: Total Length of 2 GB, Force Write Access Enabled, Direct I/O Enabled, Overlapped I/O Enabled, Queue Depth of 4
  • I/O Analyzer: 4K, 8K, and 64K 100% Read 100% Random Tests; 4K, 8K, and 64K 100% Write 100% Random Tests.




Protocol Comparison

Here are a few more charts I created to compare the two protocols.


Thoughts

In the end, I think the pint-sized DS414slim has done well for the hardware under the hood. Assuming you're using VMs with a 4K block size, those using the NFS protocol can expect latency under 5 ms with about 17~19 MBps of throughput in either direction. iSCSI users will see similar latency but lower throughput of about 11~12 MBps in either direction.
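Those throughput figures line up with the IOPS numbers later in the post; at a 4K block size, IOPS is just throughput divided by block size (my arithmetic, using 1 MB = 1024 × 1024 bytes):

```shell
# 17-19 MBps at 4 KB per I/O works out to roughly 4300-4900 IOPS
awk 'BEGIN { printf "%d IOPS\n", 17 * 1024 * 1024 / 4096 }'
awk 'BEGIN { printf "%d IOPS\n", 19 * 1024 * 1024 / 4096 }'
```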

Advantages

For those wishing to build a small virtualization lab, the DS414slim is incredibly quiet and compact. It's roughly the length and width of a cell phone, although a bit wider, and has a mounting plate to sit upon. With a small collection of consumer-grade SSDs inside, it can crank out several thousand IOPS and a healthy amount of throughput.

A staggered view of the bays showing the clearance needed to service the rear of the unit

This should be plenty to set up a lab environment with a dozen VMs for a domain controller, vCenter, a database server, and some workloads – just don’t go bananas with your capacity usage (I’d suggest thin provisioning). I also like that it has a pair of NICs – you can use this for playing around with LACP in your lab, or for learning the ropes with a round robin path selection policy (PSP). Lots of bells and whistles are also stuffed into DSM 5 that you can tinker with.
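If you do experiment with round robin, the PSP can be set per device from the ESXi shell. The naa identifier below is a stand-in for your LUN's actual device ID:

```shell
# List devices and their current path selection policy
esxcli storage nmp device list

# Switch a device to round robin (device ID is a placeholder)
esxcli storage nmp device set --device naa.6001405abcdef000000000000000000 --psp VMW_PSP_RR
```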

Disadvantages

The ARM-based processor is absolutely the bottleneck for performance. This is rather obvious, even without benching the unit. You're never going to unlock the true potential of your SSDs, so be cognizant of that fact before purchasing. However, I would absolutely recommend getting SSDs for any virtualization lab – unless you just need capacity to store files. Even though you cap out at about 4260 IOPS with NFS, it's far more performance than you can squeak out of a RAID 5 (3+1) set of 2.5″ spinning disks (HDDs).

Close up of the guts inside of the DS414slim

I’ve found that the processor is very disappointing in highly random write tests at maximum throughput. The flush process would clog up the write stream, driving the CPU to 99% utilization as it tried to write to disk. If you have a workload that needs to push a few thousand random write IOs down to storage, don’t get this unit. Also, this unit does not support VAAI.
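One way to catch this behavior in the act is to watch device latency and queueing from esxtop on the host while the write test runs. Batch mode is shown here; the interval, sample count, and output filename are arbitrary:

```shell
# Interactive: press 'u' for the device view and watch DAVG/cmd and QUED climb
esxtop

# Or capture 10 samples at 5-second intervals for offline analysis
esxtop -b -d 5 -n 10 > ds414slim-writes.csv
```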

Bottom Line

For the price and the guts inside, I think it does marvelously – I’ve really pushed the NAS well beyond its intended specs. 🙂

Many people have tiny, noise constrained home labs (some even in their bedrooms). For those folks, this is a solid choice.