Datastore not visible after upgrading to ESXi 5

After upgrading my dev datacenter and rebooting the first ESXi 5 host, I realized that one of my Fibre Channel datastores was missing.  The path to the LUN was still visible to the host under the HBA, but it was not showing as an available datastore in the storage view.  Upon investigation, I found that the LUN had been detected as a snapshot, so the datastore was not being mounted to the host.  You can confirm this by running the following:

esxcli storage vmfs snapshot list

You will see an output similar to:

<UUID>
   Volume Name: <VOLUME_NAME>
   VMFS UUID: <UUID>
   Can mount: true
   Reason for un-mountability:
   Can resignature: false
   Reason for non-resignaturability: the volume is being actively used
   Unresolved Extent Count: 2
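Before force mounting, it can be worth confirming that the volume really isn't already mounted under another name.  This is just a sanity check, not part of the documented workflow; esxcli will list everything the host currently has mounted:

esxcli storage filesystem list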

Next, I had to force mount the datastore from the CLI by first changing to “/var/log” and running the following (the command identifies the volume by either its UUID or its label):

esxcli storage vmfs snapshot mount -u <UUID>

The mount will be persistent across reboots.  If you would like to make it non-persistent, add “-n” (--nopersist) to the command.  Once it has run, check your host and the datastore should be showing as available again.  No reboot is needed and the change takes effect immediately.
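To make the syntax concrete, here is what the two variants look like.  The UUID and label below are made-up placeholders; substitute your own values from the snapshot list output:

# Persistent force mount (the default; survives reboots)
esxcli storage vmfs snapshot mount -u 4e26f26a-9fe2664c-c9c7-000c2988e4dd

# Non-persistent force mount; gone after the next reboot
esxcli storage vmfs snapshot mount -n -l DEV_FC_01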

You can also mount the datastore using the vSphere Client by following these steps:

  1. Go to the host in question
  2. On the Configuration tab, select Storage and click Add Storage
  3. Choose Disk/LUN
  4. Find the LUN that is missing. If it is not shown, you will need to use the above steps to mount using CLI
  5. Under mount options, choose “Keep Existing Signature” to mount persistently across reboots (the alternative, assigning a new signature, is covered just after this list)
  6. Click through to finish
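As an aside, the other wizard option, “Assign a New Signature”, maps to resignaturing from the CLI.  A minimal sketch, in case that is what you actually want; note that my volume reported “Can resignature: false” above, and that resignaturing gives the datastore a new UUID, so any VMs registered on it must be re-registered:

esxcli storage vmfs snapshot resignature -l <VOLUME_NAME>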

There are a few caveats to force mounting a datastore, though.  The volume can only be force mounted if another datastore with the same UUID isn't already mounted on the host.  And if you use the vSphere Client to force mount the datastore, it cannot be mounted to other hosts in the same datacenter; you will need to use the CLI steps posted above to mount it on the other hosts.
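If you have several hosts to update, a quick loop from a management box saves some typing.  A sketch, assuming SSH is enabled on each ESXi host; the hostnames are hypothetical:

# Hypothetical hostnames; requires the SSH service on each ESXi host
for host in esx01 esx02 esx03; do
  ssh root@${host} "esxcli storage vmfs snapshot mount -l <VOLUME_NAME>"
done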

For more information about this issue, and the steps to fix it on ESX/ESXi 4.x and 3.5, see VMware's KB article on LUNs detected as snapshot LUNs.
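As a quick pointer for those older releases: on ESX/ESXi 4.x the legacy esxcfg-volume utility covers the same ground, and it still ships in later builds.  A sketch, assuming the utility is present on your host:

esxcfg-volume -l           # list volumes detected as snapshots
esxcfg-volume -M <UUID>    # force mount persistently
esxcfg-volume -m <UUID>    # force mount non-persistently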