
Upgrading The Home Lab to Synology’s Latest DSM 5.0

A few weeks ago, I took the time to perform some overdue maintenance that had been sitting on my list: upgrading the DSM (DiskStation Manager) software at the heart of both my Synology DS411 and DS2411+ arrays from version 4 to version 5. While you can perform the upgrade without interrupting data services, I still shut down all my VMs before an upgrade to avoid additional risk. Feel free to cowboy the upgrade if you feel the need. 😉

Someone also asked why I don’t just migrate my workloads between the boxes. Storage vMotions are technically possible, but the performance difference between my 12-bay, SSD-fueled beast and my 4-bay HDD capacity array is staggering. Below is the layout of my DS2411+ drives – just one large pool of disk.

The 11+1 RAID 5 set made up of SSDs

In preparation for the upgrade, I went ahead and manually snagged all of the new DSM 5.0 files. Downloading in advance avoids any dependency on the Internet during the upgrade window – a habit I’ve fallen into after being burned by download sites being unavailable right when I was ready to apply updates.

[symple_box color="blue" text_align="left" width="100%" float="none"]Note: I always recommend downloading updates/upgrades prior to a maintenance window, no matter whose software it is. This is an easy risk to avoid.[/symple_box]
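
For anyone who wants to script that step, here is a minimal sketch of fetching the upgrade package ahead of time and recording a checksum so the copy can be re-verified during the maintenance window. The URL and file name are placeholders – use whatever link Synology's Download Center provides for your model – and it assumes Python with the requests library.

```python
# A minimal sketch of grabbing the DSM upgrade package ahead of time.
# PAT_URL is a placeholder - use whatever link Synology's Download Center
# gives you for your model. Requires the "requests" library.
import hashlib
import requests

PAT_URL = "https://example.com/DSM_DS2411+_5.0.pat"   # placeholder URL
LOCAL_FILE = "DSM_DS2411+_5.0.pat"

digest = hashlib.sha256()
with requests.get(PAT_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(LOCAL_FILE, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)
            digest.update(chunk)

# Keep this hash with your notes so the copy you upload during the
# maintenance window can be re-verified.
print(f"{LOCAL_FILE}: sha256 {digest.hexdigest()}")
```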

Upgrade Walkthrough

Realistically, it was a rather boring upgrade – which is perfect! Boring means no surprises.

Here’s a high level overview of the steps performed:

  • Copied the new DSM software to my workstation and ensured that nothing was talking to either of my Synology boxes.
  • Made a backup of both NAS boxes.
  • Restarted both Synology NAS boxes to ensure they could survive a restart without errors and would come back up healthy.
  • Pushed the code to both NAS boxes.
  • Drank a glass of water while waiting for about 10 minutes. The type of beverage is entirely up to your personal tastes.
  • Validated that the new code loaded successfully by logging on and checking the DSM status (a scripted version of this check is sketched after the list).
  • Made a DSM backup of both NAS boxes.
  • Updated the application packages – specifically Perl and Cloud Station.
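
For the validation step, clicking into DSM and eyeballing the version is plenty, but it can also be scripted. Below is a minimal sketch against DSM's web API, assuming the SYNO.API.Auth and SYNO.DSM.Info endpoints from Synology's Web API documentation are exposed on your build; the NAS address and credentials are placeholders.

```python
# A minimal post-upgrade version check against DSM's web API. This assumes
# the SYNO.API.Auth and SYNO.DSM.Info endpoints from Synology's Web API
# documentation are exposed on your build; the address, account, and
# password are placeholders. Requires the "requests" library.
import requests

NAS = "https://ds2411.example.lan:5001"        # placeholder address
ACCOUNT, PASSWORD = "admin", "change-me"       # placeholder credentials

# Ask DSM where the APIs we need actually live on this build.
paths = requests.get(f"{NAS}/webapi/query.cgi", params={
    "api": "SYNO.API.Info", "version": 1, "method": "query",
    "query": "SYNO.API.Auth,SYNO.DSM.Info"}, verify=False).json()["data"]

# Log in and grab a session id.
sid = requests.get(f"{NAS}/webapi/{paths['SYNO.API.Auth']['path']}", params={
    "api": "SYNO.API.Auth", "version": 2, "method": "login",
    "account": ACCOUNT, "passwd": PASSWORD,
    "session": "Core", "format": "sid"}, verify=False).json()["data"]["sid"]

# Pull the system info and confirm the reported version is the new 5.0 build.
info = requests.get(f"{NAS}/webapi/{paths['SYNO.DSM.Info']['path']}", params={
    "api": "SYNO.DSM.Info", "version": 1, "method": "getinfo",
    "_sid": sid}, verify=False).json()["data"]
print(info.get("version_string", info))
```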

I did not encounter any errors, and I was able to power all of my virtual machines back on immediately after the Synology arrays were back online.

Success!

I just wish all products upgraded this easily and smoothly. 🙂

Flashy Dashboard

The DSM 5.0 dashboard looks really crisp and colorful; it almost reminds me of an Ubuntu desktop. Many of the control panel objects now toggle up and down in an accordion style, giving you an overview of components while still letting you drill down into the details. For example, here is a view of the control panel with all of the sub-menus off to the left, where I have expanded the Bond 1 interface on my DS2411+:

Playing peek-a-boo with components

I spend the majority of my time in the Storage Manager. It has been completely revamped, and the Overview screen now provides a quick glimpse into the health of the array. Below you can see an enormous green check mark and a healthy status.

I think the System is healthy

There’s also a data scrubbing feature that can be scheduled. This lets you select a disk group and kick off a job in which DSM looks for any inconsistencies (error checking). In addition, the Test Scheduler lets me set up S.M.A.R.T. tests – either quick or extended – to routinely check the health of the disks within my array. The test can be run against all disks or just a subset; the task below will do a quick test on the first 5 disks.

A quick test to ensure everything is running smoothly
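
DSM's Test Scheduler is the right tool for this on the NAS itself, but for reference, here is a rough command-line equivalent using smartmontools, assuming a Linux host with smartctl installed. The /dev/sda through /dev/sde device names are placeholders standing in for the first five disks and will differ on an actual Synology.

```python
# A rough command-line equivalent of that scheduled task, assuming a Linux
# host with smartmontools installed. The /dev/sda..sde names are placeholders
# for the "first 5 disks" and will differ on an actual Synology, which is
# exactly why DSM's Test Scheduler is the better tool for the NAS itself.
import subprocess

DISKS = [f"/dev/sd{letter}" for letter in "abcde"]

# Kick off a short (quick) S.M.A.R.T. self-test on each disk; the tests run
# in the background on the drives themselves.
for disk in DISKS:
    subprocess.run(["smartctl", "-t", "short", disk], check=True)

# Once the tests have had time to finish, read back each disk's overall
# health assessment. check=False because smartctl signals a failing drive
# through its exit code.
for disk in DISKS:
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True, check=False)
    lines = result.stdout.strip().splitlines()
    print(disk, "->", lines[-1] if lines else result.stderr.strip())
```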

Protocol Performance

I had several folks asking about performance with iSCSI on DSM 5. I created a virtual machine and mounted two virtual disks to it: one running via NFS (drive S) and another via a “normal files” iSCSI LUN (drive I), both backed by my DS2411+ array. Using the ATTO Disk Benchmark tool, I ran the test with Direct I/O and Overlapped I/O enabled and a queue depth of 4.
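
If you don't have ATTO handy, a very rough approximation of the same idea can be scripted. The sketch below assumes the NFS and iSCSI disks are mounted as S: and I: as in my test, and times sequential writes and reads at the 4K and 8K block sizes. Note that it uses buffered I/O, so reads of a freshly written file may be served from the client cache and come in higher than ATTO's Direct I/O numbers.

```python
# A very rough stand-in for ATTO: time sequential writes and reads at the
# 4K and 8K block sizes against the two mapped drives. Paths are placeholders
# for my NFS-backed (S:) and iSCSI-backed (I:) disks. This is buffered I/O,
# so read figures can be inflated by the client cache - treat it as a sanity
# check, not a benchmark.
import os
import time

TARGETS = {"NFS (S:)": "S:/bench.tmp", "iSCSI (I:)": "I:/bench.tmp"}
BLOCK_SIZES = [4 * 1024, 8 * 1024]      # the 4K / 8K sweet spot
TOTAL_BYTES = 256 * 1024 * 1024         # 256 MB per pass

def one_pass(path, block_size):
    """Return (write MB/s, read MB/s) for one target at one block size."""
    block = os.urandom(block_size)
    count = TOTAL_BYTES // block_size

    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(count):
            f.write(block)
        os.fsync(f.fileno())            # make sure writes reach the array
    write_mbps = TOTAL_BYTES / (time.perf_counter() - start) / 1e6

    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(block_size):
            pass
    read_mbps = TOTAL_BYTES / (time.perf_counter() - start) / 1e6

    os.remove(path)
    return write_mbps, read_mbps

for label, path in TARGETS.items():
    for bs in BLOCK_SIZES:
        w, r = one_pass(path, bs)
        print(f"{label} @ {bs // 1024}K: write {w:.0f} MB/s, read {r:.0f} MB/s")
```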

Here’s a screenshot of iSCSI and NFS performance testing under DSM 5 with my DS2411+ all-flash array.

iSCSI and NFS test with ATTO

Looking at the sweet spot – the 4K and 8K block sizes – performance via iSCSI was very reasonable for writes but lacking on reads. On average, iSCSI and NFS were within 10% of one another for writes – close enough to call it a wash in my mind. NFS did much better on reads, showing about 40% greater read performance than iSCSI.

Maybe the read numbers would improve if I were able to wipe the array and create a block-level LUN? I’ll try that on my DS414slim when it arrives.

[symple_box color="red" fade_in="false" float="center" text_align="left" width=""]
Note: Keep in mind that this isn’t a protocol war test; it’s a look at how well the Synology box handles iSCSI after the update to DSM 5, since iSCSI performance on these boxes has typically been underwhelming.
[/symple_box]

Thoughts

Over the next few weeks, I plan to explore the Synology Backup and Syslog features and how they can benefit the home lab environment. I’m specifically interested in using the backup feature for my virtual machines – could it provide an extra layer of protection? Additionally, it would be nice to have a central repository for all my various log sources: ESXi hosts, NSX Manager, Untangle, and others.