I am frequently asked how my Synology NAS arrays are attached to the network in my lab. The answer is simple: LACP (except for one array that has a single link). The reason behind this configuration is that LACP tends to distribute load fairly evenly across member links, so long as a healthy handful of sessions is running over them. In my case, that session list is made up of about 20 different clients spread across ESXi hosts (NFS), workstations (SMB), and virtualized servers that access the NAS directly (SMB and NFS).
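The "healthy handful of sessions" caveat comes from how LACP balances traffic: each flow is hashed to exactly one member link, so a single client never exceeds one link's bandwidth and even distribution only emerges across many flows. Here's a toy sketch of that idea — this is not any vendor's actual hash algorithm, and the addresses are made up for illustration:

```python
# Toy illustration of per-flow LACP hashing (NOT a real vendor algorithm):
# a typical L3-based policy XORs source and destination addresses and takes
# the result modulo the number of member links, pinning each flow to one link.
import ipaddress

def member_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick an egress member link for a flow (illustrative only)."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % num_links

# Twenty hypothetical clients talking to one NAS spread across both links...
links = [member_link(f"10.0.40.{i}", "10.0.40.200", 2) for i in range(10, 30)]
print(sorted(set(links)))  # both links get used

# ...but any single client always lands on the same one link, which is why
# LACP adds nothing for a lone ESXi host mounting one NFS datastore.
print(member_link("10.0.40.10", "10.0.40.200", 2))
```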
To be clear, I don’t use LACP on my ESXi hosts for storage. It would not provide any benefit unless I created VIFs on the storage arrays, which is also not possible using the standard Synology admin interface. It can be done with other workarounds, but I don’t treat my storage like a science project. 🙂
In this post, I’ll share the LACP configurations across my two lab switches – an HP V1910-24G and Cisco SG300-52 – for my three Synology NAS arrays that have a dynamic LAG (Link Aggregation Group) configured. I’ll warn you that each vendor has a variety of terms used here – BAGG, Bond, LAG, Port Channel, Channel Group – and I’ve tried to include them in the correct context for those following along with their own configurations.
HP V1910-24G Configuration
The HP switch is not very fun to configure via SSH, so I lean towards the GUI (which likes IE more than Chrome, it seems). I’m also much more comfortable with a Cisco IOS-like command structure, and the Comware CLI is a sad panda.
There’s only one storage LAG (BAGG or Bridge Aggregation) configured on the switch for an array called NAS3, which is a Synology DS414slim.
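I set this one up in the GUI, but for anyone following along on the Comware command line, a dynamic (LACP) bridge aggregation looks roughly like the sketch below. The interface numbers and group ID are assumptions for illustration, not my actual port assignments:

```
# Illustrative Comware config sketch -- port numbers are hypothetical
system-view
interface Bridge-Aggregation1
 description NAS3
 link-aggregation mode dynamic
 quit
interface GigabitEthernet1/0/23
 port link-aggregation group 1
 quit
interface GigabitEthernet1/0/24
 port link-aggregation group 1
```

The key line is `link-aggregation mode dynamic`, which makes the BAGG negotiate via LACP rather than forming a static group.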
Here’s the Synology network view of things, showing two links connected as part of Bond 1:
Cisco SG300-52 Configuration
The Cisco configuration is pretty basic and, unlike the HP switch, I prefer to use SSH over the GUI. Each switchport and port channel has a description with the NAS name, and I try to make the port channel ID match the NAS number.
Here’s an example of my port channel configuration for NAS4, which is an ioSafe 1513+.
Core2#sh run int gi 21-22, gi 45-46
interface gigabitethernet21
 description NAS4
 spanning-tree portfast
 channel-group 4 mode auto
!
interface gigabitethernet22
 description NAS4
 spanning-tree portfast
 channel-group 4 mode auto
!
interface gigabitethernet45
 description NAS4
 spanning-tree portfast
 channel-group 4 mode auto
!
interface gigabitethernet46
 description NAS4
 spanning-tree portfast
 channel-group 4 mode auto
!
Core2#sh run int po4
interface Port-channel4
 description NAS4
 spanning-tree portfast
 switchport trunk allowed vlan add 40
!
Core2#
The ioSafe network is configured as shown below:
There’s another array attached to the Cisco switch called NAS2, which is a Synology DS2411+. It’s pretty much the same config, but with fewer links.
Core2#sh run int gi 24, gi 48, po2
interface gigabitethernet24
 description NAS2
 spanning-tree portfast
 channel-group 2 mode auto
!
interface gigabitethernet48
 description NAS2
 spanning-tree portfast
 channel-group 2 mode auto
!
interface Port-channel2
 description NAS2
 spanning-tree portfast
 switchport trunk allowed vlan add 40
!
Core2#
Pretty simple, right? Let me know if there are questions or I can provide some more details on the configurations.