
Advantages of Mounting NFS Storage with a FQDN

I tend to lurk on the VMTN networking forums and swoop in on occasion. One question that crops up a fair bit has to do with mounting NFS storage: should you use the NFS server’s FQDN or its IP address? In the past, I usually leaned towards the IP address to avoid any DNS silliness; fewer dependencies are always a plus. More recently, I’m shifting in favor of using the FQDN and abstracting the mount (and datastore UUID) away from IP addresses.

Take my latest lab project, migrating into a new subnet, as an example scenario. This migration included my NAS arrays, which are mounted to the vSphere lab via FQDNs. I really enjoyed the flexibility to sneak in a few host file entries during the migration to avoid downtime. Even if I were rather paranoid, I could just leave the host file entries in play, although I don’t.

It’s ultimately up to you how you mount NFS storage, but if you are a fan of using the FQDN, here’s a walkthrough of how I performed the migration on my Synology boxes.

NAS Interfaces

My 2411+, 414slim, and ioSafe 1513+ all have multiple NICs that are bonded together using LACP. The bond itself can only have a single IP address configured. However, I can split the bond apart into separate links for the sake of migration and apply the new IP address onto one of the additional links.

Here’s an example of what that looks like:

Two Interfaces on my DS414slim

If you have a storage array that supports VIFs (virtual interfaces) or LIFs (logical interfaces), you don’t need to fool around with physical links. Just apply the new IP and VLAN configuration to the VIF on your existing bond.

Networking Changes

It’s also a good idea to check the physical switch config to ensure that the VLAN IDs are properly trunked. I did end up leaving the LACP configuration in place on the physical switch so that the bond would reform once I was done.

And don’t forget to update your virtual network, too. If the VLAN ID is changing, you’ll need a new port group with the new VLAN ID.
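If the new port group lives on a standard vSwitch, it can also be created from the ESXi shell. Here’s a rough sketch; the port group name, vSwitch, and VLAN ID are placeholders for your own values:

# Create a port group for the new NFS subnet and tag it with the new VLAN ID
esxcli network vswitch standard portgroup add --portgroup-name NFS-New --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name NFS-New --vlan-id 40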

Host Migrations

Assuming that the NFS storage array is now accessible via multiple IP addresses on (optionally) multiple VLAN IDs, it’s time to start flipping hosts over to the new address. I use a rolling maintenance mode method to avoid any VM impacts. Here are the steps I suggest using (a command-line sketch follows the list):

  • Put a host in maintenance mode.
  • Change the IP address of your NFS vmkernel interface to something that lives in the new subnet. This will disconnect your NFS sessions and make the datastores unavailable for that host.
  • Migrate the NFS vmkernel interface into the new NFS port group.
  • Edit the /etc/hosts file and insert an entry for the new IP address. This keeps the host resolving the NAS to its new address even though the DNS A record still points at the old one.
  • Either reboot the host or re-mount the storage sessions (I prefer esxcfg-nas -r).
  • Validate connectivity to storage and that the NFS datastores once again show up as connected.
  • Exit maintenance mode.
  • Repeat from the beginning until all the hosts are updated.
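For reference, here’s roughly what those steps look like from the ESXi shell on a standard vSwitch. This sketch removes and re-adds the vmkernel interface, which handles both the IP change and the port group move in one shot; the interface, port group names, addresses, and FQDN are all placeholders for your own values:

# Enter maintenance mode (evacuate or power off VMs first)
vim-cmd hostsvc/maintenance_mode_enter

# Remove the NFS vmkernel interface from the old port group and re-add it
# on the new port group with an address in the new subnet
esxcfg-vmknic -d -p NFS-Old
esxcfg-vmknic -a -i 192.168.40.11 -n 255.255.255.0 -p NFS-New

# Pin the NAS FQDN to its new IP while the DNS A record still points at the old one
echo "192.168.40.25  nas01.lab.local  nas01" >> /etc/hosts

# Re-mount the NFS sessions and confirm the datastores come back
esxcfg-nas -r
esxcfg-nas -l

# Exit maintenance mode and move on to the next host
vim-cmd hostsvc/maintenance_mode_exit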

If you’ve never edited the /etc/hosts file, it’s simple to do. Just SSH into a host and fire off a “vi /etc/hosts” command. Press i to switch to insert mode, and then add a line like the one shown below (with your IP address and FQDN):

Editing ETC Hosts
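If the screenshot isn’t handy, the format is simply the IP address followed by the names it should resolve to; the address and hostname below are just examples:

192.168.40.25   nas01.lab.local   nas01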

When you’re done, press ESC followed by :wq (write and quit). Don’t mess around with the other lines; if you do by accident, press ESC followed by :q! (quit without saving).

Updating the DNS A Record

Now that every host is using the new IP address, update the NFS server’s A record on your DNS server. I usually set a long TTL (Time to Live) value on my A record for NFS storage because I want my hosts to check in very infrequently.

In Windows DNS Manager, this requires turning on the Advanced view.

DNS Manager

In the example below, I have the TTL on my NAS A record set to 30 days and 1 hour. I’d be surprised if I couldn’t get DNS services working in that amount of time. 🙂

Time To Live

I also suggest using reverse lookup zones and PTR records. It’s a good habit to get into. 🙂

PTR Records for NAS Boxes
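Once the records are in place, a quick spot check confirms that both the forward and reverse lookups answer with the new values. The hostname, address, and vmkernel interface below are placeholders:

# Forward lookup should return the new IP; reverse lookup should return the FQDN
nslookup nas01.lab.local
nslookup 192.168.40.25

# Confirm the storage path from the NFS vmkernel interface on an ESXi host
vmkping -I vmk2 192.168.40.25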

Thoughts and Cleanup

That’s pretty much it. Don’t forget to delete the /etc/hosts entries at some point in the near future. If you don’t want to muck about with splitting the bonded interfaces on a Synology, you could also just power down all the VMs and adjust all the IP configurations during a maintenance window. I look forward to a day when Synology natively supports VIFs.