The question of reclaiming space on NFS based VMDKs popped up while at the Toronto VMUG User Conference. I thought it'd be worth revisiting this topic – not just from a technical "how to" perspective, but also to discuss exactly what the question is all about, why it happens, and then how to fix it.
I’m most familiar with the vendor plug-in model, such as NetApp’s VSC, and have used that to shrink VMDK sprawl, but not everyone has access to tools like that. Thus, I’ll go over the tried and true method for reclaiming space from virtual machines that use NFS for storage.
Writing to Disk
Let's first talk about how NFS disks grow. vSphere's default behavior is to use thin provisioning for all VMDKs that sit on NFS storage. Without the aid of VAAI (vSphere APIs for Array Integration) and either an API call or a storage array vendor plug-in, there's no way to change that. Specifically, you'd have to invoke the Reserve Space primitive on the storage array – assuming it supports this.
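To see what "thin" means in practice, here's a minimal Linux analogue using a sparse file – not a real VMDK, just an illustration of how a file can report a large provisioned size while consuming almost nothing on disk. The file name and temp directory are made up for the example:

```shell
# A sparse file reports a large apparent (provisioned) size while
# consuming almost no actual blocks -- the same idea as a thin VMDK.
tmpdir=$(mktemp -d)
truncate -s 5G "$tmpdir/thin-demo.vmdk"   # "provision" 5 GB without writing data
ls -lh "$tmpdir/thin-demo.vmdk"           # apparent size: 5.0G
du -h "$tmpdir/thin-demo.vmdk"            # actual usage: ~0 (no blocks allocated)
rm -r "$tmpdir"
```

The `ls` size is what vSphere calls the provisioned size; the `du` size is the analogue of what the thin disk actually occupies on the datastore.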
But that's OK – we're here to reclaim space, which means you're running a thin disk on NFS anyway. As the guest operating system writes data to the VMDK file, the actual size of the disk grows while the provisioned size stays static. For example, if you gave a 5 GB VMDK on NFS to a virtual machine, the initial size would be a few dozen MB and the provisioned size would be 5 GB. I've done exactly this below on the W: drive of my View Composer server:
Then some annoying user comes along and writes several GB of files to the W: drive. 😉
Now, the operating system is reporting that space is used on the disk. This makes sense, right? We just saw that annoying user copy several GB of data to the drive. In fact, 3.65 GB of data to be specific.
But that’s just the guest operating system’s perspective on the disk usage. Let’s see what vSphere sees:
The guest and vSphere numbers are pretty close to one another. Both the guest operating system and vSphere agree that data is on the disk. But what about when I delete the files from the guest operating system? Let’s compare the numbers after I wipe all the files off the W: drive.
As you can see, the guest operating system shows a nearly empty disk, while the vSphere size is still about 3.65 GB. The thin disk has grown to accommodate the files that were previously placed on it. It does not, however, shrink back down when files are deleted within the guest – the guest's file system simply marks those blocks as free without telling the underlying storage anything.
Reclaim Free Space with SDelete
Have no fear, this problem can be solved. Using the SDelete tool can help clean out the cobwebs on your virtual disk and free up the formerly used space. SDelete is a free command line tool that has the ability to clean and zero out free space. Kudos to Matt Liebowitz’s post on the topic that I have leveraged in the past. 🙂
A few things about SDelete:
- I commonly push out this tool to the C:\Windows\System32 folder on my Windows servers using a Group Policy Object (GPO)
- When running the tool from the Command Prompt (CMD), make sure you have it running as Administrator. Otherwise you will get permission errors.
The specific command and arguments to use are:
sdelete -c -z [drive]:
Where "-c" will clean free space and "-z" will zero it. Older versions of SDelete have these two switched, so I just run both arguments to avoid the hassle. Here's a screenshot of me running SDelete inside of the guest operating system and the resulting file size after it has finished "zapping" my W: drive:
Inflation of the VMDK file is a temporary thing and is caused by the way SDelete does its magic. The next step will reclaim all of that unused space.
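The inflation makes sense when you remember that a write of zeros allocates storage just like any other write. Here's a minimal sketch of the same effect using a sparse file on Linux (an illustrative analogue, not SDelete itself – the file names are made up):

```shell
# SDelete's zero-fill in miniature: writing zero blocks into a sparse
# file allocates real storage, inflating actual usage toward the
# provisioned size.
tmpdir=$(mktemp -d)
truncate -s 100M "$tmpdir/disk.img"       # sparse: ~0 actual usage
du -h "$tmpdir/disk.img"
dd if=/dev/zero of="$tmpdir/disk.img" bs=1M count=100 conv=notrunc
sync
du -h "$tmpdir/disk.img"                  # now ~100M actually allocated
rm -r "$tmpdir"
```

The guest can't punch those zeroed blocks back out of the VMDK on its own – that's what the next step is for.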
vSphere has no native ability to offload a hot Storage vMotion to the NFS storage array without some sort of storage array vendor trickery. The NFS file must be read in and written out over the network, and the zeroed blocks are not written to the destination. As such, you can use a Storage vMotion to reclaim the unused space. In my case, I triggered a datastore migration for just the 5 GB VMDK on my server to a new NFS datastore, since I have no need to move around my primary disk, which is 30 GB.
Let’s look under the covers at the network IO during the Storage vMotion. My ESX2 host is busy reading in the used data on the VMDK and simultaneously writing it out to its new home using vmk3 – this is my NFS interface.
Once the Storage vMotion is complete – and it should go rather quickly for an empty virtual disk – the results are superb. The VMDK file now uses a mere 30 MB on disk.
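The same principle can be demonstrated on Linux with a sparse-aware copy: a copier that detects runs of zeros skips allocating blocks for them, much like the Storage vMotion rewrite of a zeroed-out thin disk. This is only an analogue with made-up file names, assuming GNU coreutils `cp`:

```shell
# Reclaim in miniature: cp --sparse=always detects zero blocks and
# writes the copy sparsely, so the zeroed space is not re-allocated.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/zeroed.img" bs=1M count=50   # 50M of allocated zeros
du -h "$tmpdir/zeroed.img"                               # ~50M on disk
cp --sparse=always "$tmpdir/zeroed.img" "$tmpdir/moved.img"
du -h "$tmpdir/moved.img"                                # ~0 on disk, same apparent size
rm -r "$tmpdir"
```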
Don’t forget to Storage vMotion the virtual disk back to where it came from, if you desire.