One of the handy new features introduced with Synology’s DSM 5.1 release was an ESXi plugin for VAAI NAS. This lets you take advantage of Full File Clone and Reserve Space (thick provisioning) for NFS datastores mounted to vSphere. I’ve had a few questions come in around this feature and how to validate that it’s working properly. For all of my fellow NFS geeks out there, this one’s for you.
Note: For all of the VAAI NAS primitives, refer to page 5 of this VAAI white paper.
Install The VAAI NAS Plugin
VAAI NAS requires the use of a plugin regardless of the vendor. You can snag the one from Synology here.
From there, either push the offline bundle to your hosts with VUM or use whatever other method makes sense for you (esxcli, Image Builder, etc.).
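If you go the esxcli route, the install can be sketched as below. The bundle path and filename are placeholders (assumptions, not the actual name of Synology's download), and the `command -v` guard simply keeps the sketch from erroring if you run it somewhere other than an ESXi host:

```shell
# Sketch: install the Synology VAAI NAS plugin offline bundle via esxcli.
# The bundle path is a placeholder -- substitute the file you downloaded.
install_vaai_plugin() {
  bundle="$1"
  if command -v esxcli >/dev/null 2>&1; then
    # Install the offline bundle, then confirm the VIB landed.
    esxcli software vib install -d "$bundle"
    esxcli software vib list | grep -i nfs
  else
    echo "esxcli not found; run this on an ESXi host"
  fi
}

install_vaai_plugin /tmp/esx-nfsplugin.zip
```

Plan on a reboot (or at least a maintenance mode cycle) before expecting the plugin to show up as active.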
Validating VAAI NAS Status
There are several methods to validate that VAAI NAS is active. First, check whether Hardware Acceleration shows as Supported on your NFS devices via esxcli:
esxcli storage nfs list
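If you want to script that check, here's a hedged sketch that inspects the Hardware Acceleration column. The sample output baked into the script is illustrative (datastore name, host, and share path are made up); on a real host you'd pipe `esxcli storage nfs list` straight into the awk filter instead:

```shell
# Sketch: flag any NFS datastore whose Hardware Acceleration column is not
# "Supported". The sample output below is illustrative only -- on an ESXi
# host, replace the echo with: esxcli storage nfs list
sample_output='Volume Name  Host          Share              Accessible  Mounted  Read-Only  Hardware Acceleration
-----------  ------------  -----------------  ----------  -------  ---------  ---------------------
synology01   10.0.0.50     /volume1/nfs01     true        true     false      Supported'

# Skip the two header lines, then test the last column of each data row.
echo "$sample_output" | awk 'NR > 2 && $NF != "Supported" { bad = 1 }
  END { print (bad ? "check hosts" : "all Supported") }'
```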
You can also view the Web Client to check on all hosts that have the NAS volume mounted. It’s hiding a bit, so here’s the treasure map:
- Select your NAS datastore.
- Click Manage.
- Click Settings.
- Click General.
- Arrr, here be the booty! Hardware Acceleration should say Supported for all hosts. Otherwise, expand it to see which hosts do and do not have support.
- View the list of hosts. If only some hosts show Not Supported, check to make sure they have the VAAI NAS plugin installed. If all of the hosts show Not Supported, verify that your NAS is running DSM 5.1 and is able to provide VAAI NAS primitives.
Is VAAI NAS Really Working?
Good question. The most obvious way to tell is to try cloning a VM within the same NAS datastore, which should trigger the Full File Clone primitive. This is similar to XCOPY for block storage, which I discuss in this block VAAI primitives post. You’re essentially telling the NAS to make the file copy on your host’s behalf, leaving only a small trickle of network traffic for control activities and operational status messages.
Another method to see under the covers is to grep the vpxa.log file for the VAAI NAS plugin. In the case of Synology, it’s called SynologyNasPlugin. I usually use this specific command structure:
tail -f /var/log/vpxa.log | grep -i SynologyNasPlugin
The tail command is handy for looking at the last part of a file. The -f argument tells it to follow the file and print new lines as they’re appended. Piping it into grep filters out anything that doesn’t contain the string SynologyNasPlugin, and the -i flag makes the match case insensitive.
Additionally, there’s almost zero NFS traffic traversing the wire during the clone job. I’ve highlighted my storage-facing NICs in yellow while the clone was about 50% completed.
There’s under 1 Mbps of data traffic on vmk4. Keep in mind this host is also running 6 other VMs.
And here is what the Synology reports for that time period. The first two charts show disk reads and writes, while the third chart shows network traffic. I’ve highlighted the full file clone in orange.
The other new option is the Reserve Space primitive, which lets you thick provision your disks on NFS. This one is pretty easy to spot – when you attempt to clone or build a new VM on the NFS datastore, the Thick Provision Lazy Zeroed and Thick Provision Eager Zeroed options will now be available.
It’s a bit annoying for me because I don’t thick provision anything in the lab and Thin Provision used to be the default selection. 🙂
Pretty simple, right? Clone operations on my DS2411+ went from 10-15 minutes without the plugin to under 2 minutes with the plugin. I haven’t tested it much on my DS414slim yet, since that really just runs my management VMs, but I would assume the results would be similar.