How To Reclaim Unused Disk Space on NFS Storage

The question of reclaiming space on NFS-based VMDKs popped up at the Toronto VMUG User Conference. I thought it’d be worth revisiting this topic – not just from a technical “how to” perspective, but also to discuss exactly what the question is all about, why it happens, and then how to fix it.

I’m most familiar with the vendor plug-in model, such as NetApp’s VSC, and have used that to shrink VMDK sprawl, but not everyone has access to tools like that. Thus, I’ll go over the tried-and-true method for reclaiming space from virtual machines that use NFS for storage.

[symple_box color="yellow" text_align="left" width="100%" float="none"]
Keep in mind that a storage array that’s able to compress and/or deduplicate the underlying NFS volumes will make this effort largely irrelevant for most situations.
[/symple_box]

Writing to Disk

Let’s first talk about how NFS disks grow. vSphere’s default behavior is to use thin provisioning for all VMDKs that sit on NFS storage. Without the aid of VAAI (vSphere APIs for Array Integration) and either an API call or storage array vendor plug-in, there’s no way you can change that. Specifically, you’d have to invoke the Reserve Space primitive on the storage array – if it supports this.

[symple_box color="yellow" text_align="left" width="100%" float="none"]
New to VAAI? Check out a list of block storage primitives and their meanings here.
[/symple_box]

But that’s OK – we’re here to reclaim space, which means you’re running a thin disk on NFS anyway. And as the guest operating system writes data to the VMDK files, the size of the disk grows while the provisioned size stays static. For example, if you gave a 5 GB VMDK disk on NFS to a virtual machine, the initial size would be a few dozen MB and the provisioned size would be 5 GB. I’ve done exactly this below on the W: drive of my View Composer server:
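If you want to see this provisioned-versus-used gap across your whole environment rather than one VM at a time, PowerCLI can report it. This is just a sketch – it assumes PowerCLI is installed and you already have an active Connect-VIServer session:

```powershell
# Sketch: compare provisioned vs. actual space per VM.
# Assumes an active Connect-VIServer session.
Get-VM |
    Select-Object Name,
        @{N='ProvisionedGB'; E={[math]::Round($_.ProvisionedSpaceGB, 2)}},
        @{N='UsedGB';        E={[math]::Round($_.UsedSpaceGB, 2)}} |
    Sort-Object UsedGB -Descending |
    Format-Table -AutoSize
```

VMs where UsedGB sits well below ProvisionedGB are behaving as expected for thin disks; the interesting ones are those where UsedGB is high but the guest reports a mostly empty drive.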

Just a lonely, empty 5 GB disk

Then some annoying user comes along and writes several GB of files to the W: drive. 😉

Copying files is fun

Now, the operating system is reporting that space is used on the disk. This makes sense, right? We just saw that annoying user copy several GB of data to the drive. In fact, 3.65 GB of data to be specific.

2 files on the W: drive take up 3.65 GB of space

But that’s just the guest operating system’s perspective on the disk usage. Let’s see what vSphere sees:

The size has grown, but not as big as the provisioned size

The guest and vSphere numbers are pretty close to one another. Both the guest operating system and vSphere agree that data is on the disk. But what about when I delete the files from the guest operating system? Let’s compare the numbers after I wipe all the files off the W: drive.

The OS and vSphere size on disk no longer match

As you can see, the guest operating system shows a nearly empty disk, while the vSphere size is still about 3.65 GB in size. The thin disk has grown to accommodate the files that were previously placed on it. It does not, however, shrink back down to size when files are deleted within the guest.

Reclaim Free Space with SDelete

Have no fear – this problem can be solved. SDelete is a free command line tool that can clean out the cobwebs on your virtual disk by zeroing out the formerly used space. Kudos to Matt Liebowitz, whose post on the topic I have leveraged in the past. 🙂

Hello, my name is SDelete

A few things about SDelete:

  • I commonly push out this tool to the C:\Windows\System32 folder on my Windows servers using a Group Policy Object (GPO)
  • When running the tool from the Command Prompt (CMD), make sure you have it running as Administrator. Otherwise you will get permission errors.
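
The GPO push mentioned above can be as simple as a startup script that copies the binary into place. Here’s a rough sketch – the file share path is purely an example, not a real location:

```powershell
# Hypothetical GPO startup script: copy SDelete from a central share
# into System32 so it's on the PATH for every server.
# The \\fileserver\tools path is an example only.
$source = '\\fileserver\tools\sdelete.exe'
$dest   = "$env:SystemRoot\System32\sdelete.exe"

if (-not (Test-Path $dest)) {
    Copy-Item -Path $source -Destination $dest
}
```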

[symple_box color="red" text_align="left" width="100%" float="none"]
Using SDelete will cause the virtual disk size to inflate all the way up to the provisioned size. Be sure that you have space for that or stagger your cleanups among small batches of VMs.
[/symple_box]

The specific command and arguments to use are:

sdelete -c -z [drive]:

Where “-c” will clean and “-z” will zero. Older versions have this switched, so I just run both arguments to avoid hassle. Here’s a screenshot of me running SDelete inside of the guest operating system and the resulting file size after it has finished “zapping” my W: drive:

The size on disk has nearly reached the provisioned size

Inflation of the VMDK file is a temporary thing and is caused by the way SDelete does its magic. The next step will reclaim all of that unused space.

Storage vMotion

vSphere has no native ability to offload a hot Storage vMotion to the NFS storage array without some sort of storage array vendor trickery. The NFS file must be read in and written out over the network. As such, you can use a Storage vMotion to reclaim unused space. In my case, I triggered a datastore migration for just the 5 GB VMDK file on my server to a new NFS datastore. This is because I have no need to move around my primary disk, which is 30 GB.
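If you’d rather script the per-disk migration than click through the wizard, PowerCLI’s Move-HardDisk can target just the cleaned-up VMDK. A sketch – the VM, disk, and datastore names below are examples:

```powershell
# Sketch: migrate only the cleaned-up disk to another NFS datastore.
# VM, disk, and datastore names are examples, not real objects.
$vm   = Get-VM -Name 'view-composer'
$disk = Get-HardDisk -VM $vm | Where-Object { $_.Name -eq 'Hard disk 2' }
$ds   = Get-Datastore -Name 'NFS-Datastore-02'

Move-HardDisk -HardDisk $disk -Datastore $ds -Confirm:$false
```

This mirrors what the migration wizard does when you move a single disk: the host reads in only the allocated blocks and writes them out to the destination, leaving the zeroed space behind.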

You can migrate the entire VM or just the cleaned up disk

Let’s look under the covers at the network IO during the Storage vMotion. My ESX2 host is busy reading in the used data on the VMDK and simultaneously writing it out to its new home using vmk3 – this is my NFS interface.

The hypervisor looks for data to read/write on the VMDK

Results

Once the Storage vMotion is complete – and it should go rather quickly for an empty virtual disk – the results are superb. The VMDK file now uses a mere 30 MB on disk.

30 MB? That looks much better!

Don’t forget to Storage vMotion the virtual disk back to where it came from, if you desire.