Reclaim Error when Configuring Remote SSD for Host Cache

I recently purchased a pair of Intel SSD drives for use with a Synology DS411 enclosure, and thought it would be a fun exercise to present part of the volume to my hosts as an “officially recognized” SSD drive type. This would allow me to further configure (and toy around with) Host Cache. For those not familiar, this is where a drive can be used by the host for swapping out memory pages when physical memory runs short (overcommitment, for example). While not as fast as memory, it’s much peppier than swapping to spinning disk.

What it looks like to use a remotely mounted SSD datastore for Host Cache

This post will briefly review the process to present remote storage as SSD (I have no clue if this is supported, but it’s fun in a lab!) and the reclaim error, along with troubleshooting and ultimately resolving the issue. The content here is a bit advanced for those who have not worked with VMware’s Pluggable Storage Architecture, specifically NMP (Native Multipathing) and its SATP (Storage Array Type Plugin) claim rules, but it should be easy enough to follow along with if you just want to tinker around in your lab.

Modifying the SATP Rule

Upon presenting my SSD storage to the hosts, I saw that the NMP SATP rule did not recognize the drive as being SSD, which means a bit of work is required using the esxcli command to modify the SATP rule that was in use. My SSD volume was mapped over iSCSI, so I used the NAA identifier to find out what SATP rule was in play and then modify it. I would highly suggest reading these great posts by both Duncan Epping and William Lam that go over this process fully, and kudos to them for doing such detailed and helpful write-ups!

To see the details of a device, use the command (substitute your naa or device number):

esxcli storage core device list -d naa.6001405d6ef5e9ad743ad3d87d9e10d3
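If you only care about the one field, filtering the output works well. A quick sketch (on my ESXi 5.x build the field is labeled “Is SSD”; substitute your own naa id):

```shell
# Show just the SSD flag from the device details
esxcli storage core device list -d naa.6001405d6ef5e9ad743ad3d87d9e10d3 | grep "Is SSD"
```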

As I’ve highlighted, the iSCSI disk has the SSD option set to false. To change it to true, a new claim rule must be added, and creating that rule requires knowing which SATP is in use for the device. To find this information, issue the command:

esxcli storage nmp device list -d naa.6001405d6ef5e9ad743ad3d87d9e10d3

In my case, NMP is using the “VMW_SATP_ALUA” SATP. We now have all that is required to add a new rule.

The command to use is:

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -d naa.6001405d6ef5e9ad743ad3d87d9e10d3 -o enable_ssd

There is no response to the input if you entered the command correctly. I added the “Tada” for dramatic effect. 🙂
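If you’d like confirmation beyond the silence, you can list the SATP rules and filter for the option. A sketch (you should see a line referencing your device with enable_ssd set):

```shell
# Verify the new claim rule was registered before reclaiming
esxcli storage nmp satp rule list | grep enable_ssd
```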

The final step is to reclaim the device so that the new SATP rule with the enable_ssd option is applied.

esxcli storage core claiming reclaim -d naa.6001405d6ef5e9ad743ad3d87d9e10d3
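If you have several devices to flag, the rule-add and reclaim pair can be scripted. Here’s a minimal POSIX sh sketch of my own devising (the helper name is mine, and the device id is the one from this post); it only prints the command pairs so you can review them before piping the output to sh on the host:

```shell
#!/bin/sh
# Print the enable_ssd rule-add and reclaim commands for one device.
# Dry-run by design: review the output, then pipe it to sh on the host.
emit_enable_ssd() {
  dev="$1"
  echo "esxcli storage nmp satp rule add -s VMW_SATP_ALUA -d $dev -o enable_ssd"
  echo "esxcli storage core claiming reclaim -d $dev"
}

# One call per device you want flagged as SSD
emit_enable_ssd naa.6001405d6ef5e9ad743ad3d87d9e10d3
```

Note this assumes every device uses VMW_SATP_ALUA, as mine did; check each device’s SATP first if your array mix varies.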

Reclaim Error & Troubleshooting

Unfortunately, the following error appeared:

Unable to unclaim path vmhba38:C0:T2:L0 on device naa.6001405d6ef5e9ad743ad3d87d9e10d3. Some paths may be left in an unclaimed state. You will need to claim them manually using the appropriate commands or wait for periodic path claiming to reclaim them automatically.

I noticed quite a few others got this error as well. So, what did I do wrong?

Upon further inspection, it turned out that HA had decided to use the datastore for heartbeats. The host did not want to go forward with a reclaim task as the datastore was critical to the HA process.

With only two datastores attached, HA had no choice but to use this device for heartbeats.

The fix was rather easy. I could either disable HA temporarily while I made the change or detach the device from each host and run the reclaim command again. I chose to simply turn off HA for the moment, as it was my lab. I suppose a reboot would have also worked. 🙂
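For the detach route, the per-host sequence would look roughly like this; this is a sketch based on the ESXi 5.x `storage core device set` syntax, and it assumes the datastore has already been unmounted from the host:

```shell
# Detach the device so nothing (including HA) holds it, reclaim, reattach
esxcli storage core device set -d naa.6001405d6ef5e9ad743ad3d87d9e10d3 --state=off
esxcli storage core claiming reclaim -d naa.6001405d6ef5e9ad743ad3d87d9e10d3
esxcli storage core device set -d naa.6001405d6ef5e9ad743ad3d87d9e10d3 --state=on
```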

The end result is a device that shows up as SSD over iSCSI and can be used for Host Caching.

I was then able to configure Host Cache for each host.


This was a fun exercise to go through, as it reinforced some learning of the expanded esxcli command namespaces in ESXi 5, along with a refresher on the various SATPs. Perhaps good practice for the new VCAP5-DCA that should be out this year? 🙂

I’m not sure I’d really suggest swapping to SSD over iSCSI – it seems best to just stick a small SSD drive in each host. But, it’s a fun way to experiment in the lab. Your thoughts?