Configuring Jumbo Frames on a VMware Distributed Switch

Jumbo frames can be a bit frustrating for many because the vmknic has to be built at the command line; there is no GUI method to set the MTU on a vmknic. To make matters worse, for quite some time (since 4.0) I’ve been unable to create a jumbo frame vmknic on a vDS directly from the console of an ESXi host. The error returned was:

Can not specify the dvsName, dvportId parameters for --add operation.

Many others have encountered this frustrating error.

I wanted to share an alternate method for working around this problem for those who might still be having issues and want to use jumbo frames on a vDS. Essentially, you use a standard vSwitch as a workshop to create a vmknic with a 9000 MTU and then migrate it over to your vDS.

Creating the standard virtual switch

The first step is to create a vSwitch for your vmknic to live on temporarily. Since the vmknic step requires the command line anyway, I’ll go through how to make a vSwitch from the command line too.

If you don’t already have a vSwitch created, run the command:

esxcfg-vswitch -a <vswitch name>
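
For example, using the WahlNetwork name that shows up in the listing further down (any name will do):

~ # esxcfg-vswitch -a WahlNetwork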

To create a port group (required before you can add a vmkernel port):

 esxcfg-vswitch -A <portgroup name> <vswitch name>
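
Continuing the example, this adds a port group named Demo (the same name you’ll see in the vmknic output later) to the WahlNetwork switch:

~ # esxcfg-vswitch -A Demo WahlNetwork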

Optional: If you want the standard vSwitch itself to pass jumbo frames, set its MTU to 9000 with this command:

esxcfg-vswitch -m 9000 <vswitch name>
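
Continuing with the example switch:

~ # esxcfg-vswitch -m 9000 WahlNetwork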

To see the MTU for each vSwitch, you can run a list command:

~ # esxcfg-vswitch -l
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
WahlNetwork      128         1           128               9000

Easy, right? 🙂

Creating the vmknic

The next step is to create a vmknic on the standard vSwitch with an MTU of 9000.

The command structure is:

esxcfg-vmknic -a -i <ip> -n <netmask> -m 9000 -p <portgroup name>
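
For instance, using the Demo port group from earlier and an address of 192.168.50.50/24 (the same values that appear in the output below):

~ # esxcfg-vmknic -a -i 192.168.50.50 -n 255.255.255.0 -m 9000 -p Demo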

With a result of:

Interface  Port Group/DVPort   IP Family IP Address                              Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type
vmk4       Demo                IPv4      192.168.50.50                           255.255.255.0   192.168.50.255  00:50:56:77:ac:36 9000    65535     true    STATIC

Converting to a distributed switch

The final step is to migrate the vmknic from the standard vSwitch to the vDS. This is done in the GUI. Make sure to create a vDS with an MTU of 9000 in the vCenter GUI (there’s actually no way to create a vDS from the command line at this time). The MTU value can be found in the properties of the vDS under the “Advanced” settings.

Once that’s set, begin the migration. Click “Manage Virtual Adapters” on the vDS you wish to add the newly minted vmknic to, and then choose “Add”.

Next select the “Migrate existing virtual adapters” option.

Then choose the new vmknic (virtual adapter) in the list and the port group to migrate it to. In this example, I move my “Demo” port to the dvPortgroup named “dvpg-Storage”.

Finally, review the changes and click Finish to migrate.

Creating the vmkernel directly in the vDS

If you want to give it a shot, you can use the following command to attempt to create a vmknic directly on the vDS. It has worked for me since I upgraded my lab to ESXi 4.1 U1, but it did not work for me on 4.1.

esxcfg-vmknic -a -i <ip> -n <netmask> -v <dvport-id> -s <dvs-name> -m 9000
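
For example, assuming a dvSwitch named dvs-Lab (a placeholder; substitute your own dvSwitch name) and a free DVPort ID of 129, which you should be able to spot in the DVPort section of the esxcfg-vswitch -l output:

~ # esxcfg-vmknic -a -i 192.168.111.222 -n 255.255.255.0 -v 129 -s dvs-Lab -m 9000   # dvs-Lab is a placeholder name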

The result should look something like this:

Interface  Port Group/DVPort   IP Family IP Address                              Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type
vmk2       129                 IPv4      192.168.111.222                         255.255.255.0   192.168.111.255 00:50:56:79:97:75 9000    65535     true    STATIC

Storage & Network Requirements

This only covers the configuration required to get jumbo frames working from a vSphere perspective. You will still need to set up your storage device and network to handle jumbo frames.
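
Once the array and physical switches are configured, one quick sanity check from the host is a vmkping with a large payload (8972 bytes of data plus headers fills a 9000 byte frame). The target address below is just a stand-in for your storage interface, and the -d (do not fragment) switch assumes your build’s vmkping supports it:

~ # vmkping -d -s 8972 192.168.50.10   # placeholder storage target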

Thoughts

I was discussing the use of jumbo frames with my local VCDX genius, Mike Laskowski, when we both recalled what a pain it was to set up jumbo frames on a vDS. I decided to try this again using the latest ESXi code and noticed that it worked (nice!). I really push for using a vDS for storage due to the relatively new teaming policy called “Route based on physical NIC load”, which I feel is especially handy in consolidated environments using NFS IP storage (blades, 10GigE) or when you have a really heavy-hitter VM chatting away.