Over the years, the phrase “jumbo frames” has evoked some gut-wrenching negativity from various IT professionals who have dealt with frame fragmentation and configuration drift. For those new to the concept, a jumbo frame is any Ethernet frame with a payload larger than 1500 bytes. The maximum payload size a link will carry is its MTU, or Maximum Transmission Unit.
In theory, the basic idea is to spend less time crafting frames and more time sending data. In reality, finding all of the network components along an end-to-end delivery of frames and tweaking their MTU value is anywhere from trivial to impossible. However, it appears that the winds of change are shifting in favor of jumbo frames through the vehicle of network virtualization.
Network virtualization, sometimes referred to as an overlay network, allows network engineers to create logical networks on top of an existing infrastructure, the underlay network. It’s a bit like purchasing a plane ticket to take you from one city to the next. The airline has no real idea where you’re headed once you arrive at your destination airport; they just take you from point A to point B.
There’s a cost to an overlay network: encapsulation overhead. Essentially, this is the extra header data the logical overlay wraps around each frame to carry its own source and destination addresses. The overhead quantity and descriptors are unique to the protocol used. VXLAN, for example, is structured a bit differently from NVGRE. Both, however, will require tweaking your MTU value at various points in the network.
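As a rough illustration of that overhead, here’s a back-of-envelope sketch using header sizes from RFC 7348 (VXLAN) and RFC 7637 (NVGRE), assuming an untagged inner Ethernet frame and an IPv4 underlay:

```shell
# Per-frame encapsulation overhead, in bytes (assumed header sizes;
# untagged inner Ethernet frame, IPv4 outer header)
INNER_ETH=14                                # inner Ethernet header carried inside the tunnel
VXLAN_OVERHEAD=$((INNER_ETH + 8 + 8 + 20))  # + VXLAN header + outer UDP + outer IPv4
NVGRE_OVERHEAD=$((INNER_ETH + 8 + 20))      # + GRE header (with key) + outer IPv4
echo "VXLAN: $VXLAN_OVERHEAD bytes, NVGRE: $NVGRE_OVERHEAD bytes"
# VXLAN: 50 bytes, NVGRE: 42 bytes
```

An IPv6 underlay or a VLAN-tagged inner frame adds a few more bytes on top of these figures, which is why vendors pad their MTU recommendations rather than cutting it close.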
This means that you’re eventually going to have to face the task of adjusting MTU settings on your physical and virtual network configurations. The alternative is to allow fragmentation, which occurs when the DF (don’t fragment) flag is not set, and which eats up CPU cycles on your routers as they crunch oversized packets down into smaller ones.
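A quick way to check whether a path can carry a given payload without fragmentation is to send pings with the DF flag set. Here’s a sketch using Linux ping; the target address is a placeholder from the documentation range:

```shell
# Probe the path MTU by setting DF and sweeping payload sizes.
# -M do  : set the DF bit (fail rather than fragment)
# -s 1472: 1472 bytes of ICMP payload + 8 ICMP + 20 IP = a 1500-byte packet
ping -c 1 -M do -s 1472 192.0.2.10   # fits a standard 1500-byte MTU
ping -c 1 -M do -s 1572 192.0.2.10   # needs a 1600-byte MTU; fails if the path is smaller
```

If the second ping comes back with a “message too long” error, some hop between you and the target hasn’t had its MTU raised yet.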
[symple_box color="yellow" fade_in="false" float="center" text_align="left" width=""]Note: Although frame fragmentation is the responsibility of the router in IPv4, this changes to the traffic source in IPv6. I’d suggest reading Ed Horley’s book Practical IPv6 for Windows Administrators.[/symple_box]
It Gets Easier
Historically, the MTU value for a switch was configured at a system level. However, many network devices now give much more granular control over MTU sizes, oftentimes bundling them along with various Quality of Service / Class of Service (QoS / CoS) configuration details. This may reduce the headache and ease the transition into an encapsulation-friendly underlay network.
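On a Linux host or hypervisor, for instance, MTU is already a per-interface setting; the interface name below is an assumption:

```shell
# Raise the MTU on a single uplink rather than system-wide
ip link set dev eth0 mtu 1600            # headroom for encapsulation overhead
ip link show dev eth0 | grep -o 'mtu [0-9]*'
# Many switch operating systems expose similar granularity, e.g. a
# per-interface "mtu 9216" line under the interface configuration.
```

The point is the scope: you can raise MTU only on the links that carry encapsulated traffic instead of flipping a global knob and hoping nothing else notices.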
Keep in mind, also, that the target for most encapsulation protocols is an MTU of 1600. This is still considered a jumbo frame, since the value is above 1500 bytes, but it is well below the typical maximum of 9000 or 9216 bytes that can wreak havoc on receive queues in software or hardware. If you want to take advantage of really large jumbo frames for vMotion, IP storage traffic, or some other pet project, this would be a good time to plan for that, too.
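A quick sanity check on that 1600-byte target, assuming VXLAN and NVGRE header sizes of 8+8+20 and 8+20 bytes respectively over an untagged inner frame and an IPv4 underlay:

```shell
# Largest outer IP packet produced when encapsulating a full-size inner frame
INNER_FRAME=$((1500 + 14))               # standard 1500-byte payload + inner Ethernet header
VXLAN_PKT=$((INNER_FRAME + 8 + 8 + 20))  # + VXLAN + outer UDP + outer IPv4
NVGRE_PKT=$((INNER_FRAME + 8 + 20))      # + GRE (with key) + outer IPv4
echo "VXLAN: $VXLAN_PKT bytes, NVGRE: $NVGRE_PKT bytes"
# VXLAN: 1550 bytes, NVGRE: 1542 bytes -- both fit comfortably under 1600
```

So 1600 gives both protocols some slack for VLAN tags or an IPv6 outer header without pushing the underlay anywhere near the 9000-byte range.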