To begin with, as I alluded to in the title, the 2 TB limit on a single VMDK or virtual mode RDM in vSphere 5.1 has been increased to 62 TB in vSphere 5.5. Wow! To be perfectly frank, the 2 TB limit was a serious pain in my hindquarters, as it severely limited options for larger databases, file servers, or just about any other VM that needed a single volume larger than 2 TB.
While it’s true that you could present storage using tricks like the Windows in-guest iSCSI initiator, a physical mode RDM, or simply using in-guest volume concatenation to cobble together multiple volumes, none of these options was particularly elegant or desirable to me or to most folks I’ve worked with.
Here are a few caveats:
- If you want to take advantage of the new 62 TB limit (for both VMDKs and virtual mode RDMs), you’ll need to make sure you’re running ESXi 5.5.
- The size increase affects both VMFS and NFS. Yup, if you didn’t know that already, vSphere would not let you create a vDisk beyond 2 TB even on an NFS datastore in vSphere 5.1 and older.
- The VMDK must be offline to expand beyond 2 TB – hot-grow isn’t supported yet.
You also won’t be able to grow or otherwise manipulate a VMDK larger than 2 TB from within the legacy vSphere Client (the C# client); those operations require the vSphere Web Client. In fact, here’s the error you’ll see:

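If you’d rather handle the resize from the command line instead of the Web Client, here’s a minimal sketch of extending a powered-off VM’s disk with vmkfstools from the ESXi shell – the datastore path, VM name, and target size below are made-up examples:

```
# Extend a powered-off VM's virtual disk to 4 TB (hypothetical path and size)
# The VM must stay powered off, since hot-grow past 2 TB isn't supported
vmkfstools -X 4096G /vmfs/volumes/datastore1/bigfileserver/bigfileserver.vmdk
```

Remember that once the VMDK itself has grown, you still need to extend the partition and file system inside the guest.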
16 Gb End-to-End Fibre Channel Support
For those on the bleeding edge of SAN switching speeds, you can now plug a 16 Gb SFP+ into your server’s HBA and be fully supported end-to-end. This is a good move to help keep vSphere 5.5 future-proof (for a little while longer) but, in reality, I still see a fair bit of 4 Gb FC connections, and those with 8 Gb are usually nowhere near saturating it. I’m sure a small segment of folks will celebrate over this, but it’s mostly something that will become valuable in 2014 and beyond.
Round Robin Pathing with Microsoft Cluster Services
For those who have built MSCS (or the more modern MSFC) on vSphere, you’ll probably recall that your storage path options were typically limited to Fixed with NMP (the vSphere Native Multipathing Plugin) or a third-party plugin such as EMC’s PowerPath Virtual Edition (PP/VE). With vSphere 5.5, you can now also use Round Robin as a supported policy, and that support extends to FCoE and iSCSI.
With the right I/O profile, this can mean higher throughput because I/O is issued across multiple paths. For example, if you set the Round Robin I/O count to 1 – the number of I/Os sent down a path before switching to the next one – you force the hypervisor to rotate paths after every single I/O. This is a good way to saturate the paths in a more even-handed manner.
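As a rough sketch, here’s how you might switch a single device over to Round Robin and set that I/O count to 1 from the ESXi shell – the naa identifier is a placeholder for your own device:

```
# Set the path selection policy for one device to Round Robin (placeholder device ID)
esxcli storage nmp device set --device naa.60000000000000000000000000000001 --psp VMW_PSP_RR

# Rotate to the next path after every single I/O instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --device naa.60000000000000000000000000000001 --type iops --iops 1
```

Whether an I/O count of 1 is right for your array is a design decision – check your storage vendor’s guidance before rolling it out everywhere.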
Permanent Device Loss (PDL) AutoRemove
No one likes a PDL situation. It typically causes the host to freak out, at least for a little while, and in rare cases is the underlying cause of a PSOD (Purple Screen of Death, the vSphere version of a Windows Blue Screen of Death). In vSphere 5.5, VMware continues to battle the harmful effects of PDL by adding PDL AutoRemove, which automatically removes a device in a PDL state from the host. This frees up one of the 256 device slots per host, which is of value to a few folks I’ve worked with who just seem to have a ton of devices connected to their hosts.
Personally, I consider PDL and APD (All Paths Down) to be among the most critical dangers faced by vSphere administrators, and anything VMware can do to help remediate these issues is a plus.
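If you want to check (or temporarily disable) the new behavior on a host, my understanding is that it’s controlled by the Disk.AutoremoveOnPDL advanced setting – treat the sketch below as an assumption to verify against VMware’s documentation:

```
# Check whether PDL AutoRemove is enabled (1 = enabled)
esxcli system settings advanced list -o /Disk/AutoremoveOnPDL

# Disable it, e.g. while troubleshooting a misbehaving array (assumed setting name)
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0
```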
VMFS Heap Improvements
The final improvement I’ll discuss relates to the VMFS heap size. If you’ll recall, this was a serious issue when your environment had a large amount of open VMDK files (over 30 TB per host). Check out this article from Cormac Hogan, one of the top minds in the VMFS space, for more details. With vSphere 5.5, there is an improved heap eviction process, meaning less memory (256 MB) is needed to address an entire 64 TB VMFS volume.
This one caught a few folks with their pants down. Although there were some patches available for ESXi 5.0 and 5.1, it’s good to see that this one is being put to bed for good in 5.5.
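For anyone still riding out a patched ESXi 5.0 or 5.1 host in the meantime, the stopgap was to raise the heap ceiling yourself via the VMFS3.MaxHeapSizeMB advanced setting – a quick sketch, with the exact values best taken from Cormac’s article and the relevant KB for your build:

```
# Check the current VMFS heap ceiling on an ESXi 5.0/5.1 host
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

# Raise it to the 256 MB maximum (a host reboot is needed for this to take effect)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256
```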