Unable to remove a datastore from vCenter Server Inventory

I recently had an issue where I was unable to remove a datastore from the vCenter Server Inventory.  The datastore was grayed out, and right-clicking it offered no options.  After some digging and some research in SQL, I found a way to remove it manually in the vCenter database.  Every datastore is given a unique ID and can be found and removed inside the database.

Warning: Always make a SQL backup before attempting any manual database changes.  You never know when things might break and you need to restore.
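
If you want to script that backup step, it can be done from PowerShell with the SqlServer module.  This is only a sketch: the instance name, the database name ("VCDB"), and the backup path are placeholders for your own environment.

# Sketch: take a full backup of the vCenter database before editing it by hand.
# Assumes the SqlServer PowerShell module is installed; "VCDB" and the paths are placeholders.
Import-Module SqlServer
Backup-SqlDatabase -ServerInstance "localhost" -Database "VCDB" -BackupFile "C:\Backups\VCDB_before_cleanup.bak"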

So here we go:

  1. Stop the vCenter Server Service
  2. Open SQL Management Studio
  3. Run the following against your vCenter Server database (This will give you the datastore ID):

select ID from VPX_ENTITY where name = 'datastore_name'

  4. Now we have the ID and can remove it from the database
  5. Run the following 3 queries individually (using the ID we got from the previous query):

delete from VPX_DS_ASSIGNMENT where DS_ID=ID;
delete from VPX_VM_DS_SPACE where DS_ID=ID;
delete from VPX_DATASTORE where ID=ID;

  6. Finally, run the following:

delete from VPX_ENTITY where ID=ID;

If you want to verify that everything went correctly, you can run the following:

select * from VPX_DS_ASSIGNMENT where DS_ID=ID;
select * from VPX_VM_DS_SPACE where DS_ID=ID;
select * from VPX_DATASTORE where ID=ID;
select * from VPX_ENTITY where ID=ID;
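
If you prefer to do the whole cleanup from PowerShell instead of SQL Management Studio, the same queries can be run with Invoke-Sqlcmd.  This is only a sketch: it assumes the SqlServer module is installed and that the vCenter database is named "VCDB", so adjust the instance and database names for your environment.

# Sketch: run the same lookup and delete queries via Invoke-Sqlcmd (SqlServer module).
# "VCDB" and "localhost" are placeholders; stop the vCenter Server service first.
$dsName = "datastore_name"
$dsId = (Invoke-Sqlcmd -ServerInstance "localhost" -Database "VCDB" -Query "select ID from VPX_ENTITY where name = '$dsName'").ID

$queries = @(
    "delete from VPX_DS_ASSIGNMENT where DS_ID=$dsId",
    "delete from VPX_VM_DS_SPACE where DS_ID=$dsId",
    "delete from VPX_DATASTORE where ID=$dsId",
    "delete from VPX_ENTITY where ID=$dsId"
)
foreach ($q in $queries) {
    Invoke-Sqlcmd -ServerInstance "localhost" -Database "VCDB" -Query $q
}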

Now you’ve removed the datastore from the database and can start the vCenter Server Service again. If you don’t see that it has been removed, a reboot may help. I rebooted my server just to be on the safe side.

You can check out this VMware KB for more info.

MTU Mismatch and Slow I/O

After a month or two of troubleshooting some storage issues we had been having with our NetApp system, we dug up an interesting piece of information.  When reviewing the MTU size on the host and on the NetApp, we noticed that the host was set to 1500 and the NetApp interface was set to 9000.  Whoops!

Before troubleshooting, we were seeing I/O at a rate of about 2,500 IOPS to the NetApp system.  However, after changing the MTU to match on both the ESXi host and the NetApp, we saw IOPS jump to close to 10,000.  Here is a quick breakdown of what was happening:

  1. The host would send data with an MTU of 1500 to the NetApp.
  2. The NetApp would retrieve the data and try to send it back at 9000.
  3. The switch would reject those frames, since its interface was set to accept only 1500.
  4. The NetApp would then have to fragment the data back down to 1500.

Basically, we were doubling the time it took to return the data to the host and, in turn, to the guest VM.  The slow I/O was due to the time the NetApp spent fragmenting the data back down so it could reach the host.  The switch interface was also set at 1500 and was rejecting the jumbo frames.

Word to the wise: always double-check MTU settings and ensure they match along the entire path back to the host.  Just another one of those things to have in your back pocket when troubleshooting.
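
On the vSphere side, a quick way to spot this kind of mismatch is to pull the MTU of the vmkernel ports and standard vSwitches with PowerCLI.  A rough sketch (the host name is a placeholder):

# Sketch: list vmkernel adapter and standard vSwitch MTU values for one host.
# "esx01.lab.local" is a placeholder host name.
$vmhost = Get-VMHost -Name "esx01.lab.local"
Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel | Select-Object Name, IP, Mtu
Get-VirtualSwitch -VMHost $vmhost | Select-Object Name, Mtu

The storage and switch sides still need to be checked from their own management tools, but this at least confirms what the host thinks it is using.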

Datastore not visible after upgrading to ESXi 5

After upgrading my dev datacenter and rebooting the first ESXi 5 host, I realized that one of my Fibre Channel datastores was missing.  The path to the datastore was still visible to the host under the HBA, but it was not showing as an available datastore in the storage view.  Upon investigation, the datastore had been tagged as a snapshot datastore and was not mounting properly to the host.  You can check for this by running the following:

esxcli storage vmfs snapshot list

You will see an output similar to:

<UUID>
   Volume Name: <VOLUME_NAME>
   VMFS UUID: <UUID>
   Can mount: true
   Reason for un-mountability:
   Can resignature: false
   Reason for non-resignaturability: the volume is being actively used
   Unresolved Extent Count: 2

Next, I had to force mount the datastore from the CLI by first changing to “/var/log” and running:

esxcli storage vmfs snapshot mount -u <UUID> -l <VOLUME_NAME>

The mount will be persistent across reboots.  If you would like to make it non-persistent, you will need to add “-n” to your command.  Once it is run, check your host and the datastore should be showing as available again.  No reboot is needed and the change takes effect immediately.
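
The same check and force mount can also be done from PowerCLI through Get-EsxCli if you would rather not open an SSH session.  This is only a sketch using the V2 interface from newer PowerCLI releases; the host name and volume label are placeholders, and the exact argument names can vary between versions.

# Sketch: list and force mount a snapshot volume via Get-EsxCli (V2 interface).
# "esx01.lab.local" and "MyDatastore" are placeholders; argument names may differ by PowerCLI version.
$esxcli = Get-EsxCli -VMHost "esx01.lab.local" -V2
$esxcli.storage.vmfs.snapshot.list.Invoke()

$mountArgs = $esxcli.storage.vmfs.snapshot.mount.CreateArgs()
$mountArgs.volumelabel = "MyDatastore"
$esxcli.storage.vmfs.snapshot.mount.Invoke($mountArgs)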

You can also mount the datastore using the vSphere Client by following the steps below:

  1. Go to your host in question
  2. On the Storage tab, click Add Storage
  3. Choose Disk/LUN
  4. Find the LUN that is missing. If it is not shown, you will need to use the CLI steps above to mount it
  5. Under mount options, choose “Keep Existing Signature” to mount it persistently across reboots
  6. Click through to finish

There are a few caveats to force mounting a datastore, though.  The datastore can only be force mounted if another datastore with the same UUID is not already mounted.  And if you use the client to force mount the datastore, it cannot be mounted to other hosts in the same datacenter.  You will need to use the CLI steps posted above to mount it to other hosts.

For more information about this issue and steps to fix in ESX/ESXi 4 and 3.5, you can find the VMware KB here.

Script to Add Multiple NFS Datastores to an ESXi Host

I am sure I’m not the first admin that has needed to add an NFS datastore to multiple hosts, and usually it’s multiple datastores that are needed as well. Normally, I would go to each host, and add the storage manually via the vSphere Client.  But after doing this for quite some time, I decided I needed a better way to get this task done.  Scripting!

I have recently started to use PowerCLI to automate many of my daily tasks (Hint: more scripts to come) and decided to share the one I’ve used the most first.  When run, the script prompts you for the name of the host to configure.  Inside the script, you set the datastores you want to add to the host; those are the only changes needed before running it.  This script has saved me a great deal of man hours and some headaches.  A great advantage of scripting is that you avoid those sneaky spelling mistakes or clicking the wrong button.

The script is below (the datastore names, paths, and NFS host IP are the parts you need to change for your environment):

# Prompt for the host to configure
$VMHost = (Read-Host "Please enter the name of your ESX Host you wish to configure")

# Confirmation popup before making any changes
$shellObject = New-Object -ComObject wscript.shell
$intAnswer = $shellObject.popup("Do you want to add the datastores?", 0, "Add datastores - remember you must have added the hosts on the storage", 4)
If ($intAnswer -eq 6) {
    Write "Creating Datastores on $VMHost..."
    # Change the datastore names, paths, and NFS host IP for your environment
    New-Datastore -Nfs -VMHost $VMHost -Name DatastoreName1 -Path /vol/DatastoreName1 -NfsHost 192.168.255.251
    New-Datastore -Nfs -VMHost $VMHost -Name DatastoreName2 -Path /vol/DatastoreName2 -NfsHost 192.168.255.251
    New-Datastore -Nfs -VMHost $VMHost -Name DatastoreName3 -Path /vol/DatastoreName3 -NfsHost 192.168.255.251
} else {
    Write "Skipping Datastores on $VMHost..."
}

When running the script, you will first see a prompt asking you to enter your host name.

You will then see a success message for each datastore added to the host.  Remember, you must add the host to the NFS export on the storage itself before running the script.
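
If you need to add the same datastores to every host in a cluster rather than one host at a time, the same cmdlet can be wrapped in a couple of loops.  This is just a rough sketch; the cluster name, datastore names, and NFS server IP are placeholders.

# Sketch: add the same NFS datastores to every host in a cluster.
# The cluster name, datastore names, and NFS server IP are placeholders.
$nfsServer = "192.168.255.251"
$datastores = "DatastoreName1", "DatastoreName2", "DatastoreName3"

foreach ($vmhost in (Get-Cluster -Name "Prod-Cluster" | Get-VMHost)) {
    foreach ($ds in $datastores) {
        New-Datastore -Nfs -VMHost $vmhost -Name $ds -Path "/vol/$ds" -NfsHost $nfsServer
    }
}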

Script courtesy of VMware PowerCLI Blog