VMware vCenter SSO Update failure in 5.5

While upgrading SSO in vCenter, I ran into a failure: the process would hang halfway through at configuring the SSO components. Investigating the MSI log after the failure revealed a DNS-type error.


This only affects you if you are upgrading from 5.1.  Anyone upgrading from 5.5, from 4.x, or from a fresh install of 5.1 Update 1a or later will not experience this issue.  And obviously, it does not affect you if you are doing a fresh install of 5.5.

The fix is actually rather simple, but the wording makes it confusing.  The change is made in the registry at the following key:

HKEY_LOCAL_MACHINE\Software\VMware, Inc.\VMware Infrastructure\SSOServer\FQDNIp

If the value of this registry key is an IP address, you must change it to the actual FQDN of your SSO server.  In my case, SSO was on my vCenter server, so I set the value to that server's FQDN.
The value is named FQDNIp, but it needs to contain the name, not the IP.  Hence the confusion!
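As a sketch of the change (the FQDN below is a placeholder for your own server name, and this assumes the value is a standard REG_SZ string), you can inspect and update it with reg.exe from an elevated command prompt on the SSO server:

```shell
:: Check the current value of FQDNIp
reg query "HKLM\Software\VMware, Inc.\VMware Infrastructure\SSOServer" /v FQDNIp

:: Replace the IP with the SSO server's FQDN (vcenter.example.local is a placeholder)
reg add "HKLM\Software\VMware, Inc.\VMware Infrastructure\SSOServer" /v FQDNIp /t REG_SZ /d vcenter.example.local /f
```

You can of course make the same change in Regedit; the commands are just quicker to document and repeat.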

More information can be found in the following VMware KB.

VMworld 2013 Keynote Announcements

At today’s keynote during the 10th annual VMworld in San Francisco, CEO Pat Gelsinger made some announcements about where VMware is heading and updates to its current product line.  As VMware’s road map has continually pointed to the Software-Defined Data Center, they solidified that premise with some incredible announcements today. Let’s get started:

vSphere 5.5

vSphere 5.5 will be the next update to the vSphere platform, which currently sits at 5.1.  This includes upgrading the virtual hardware version to 10, which brings enhancements to AHCI and new graphics support for Intel and AMD GPUs.  Hardware version 10 also includes support for CPU C-states, which take advantage of new CPU enhancements.  Finally, they have added support for hot-swappable PCIe SSD devices, no longer requiring the host to be taken offline.

Changes in vSphere 5.5 also include new limits for the physical host’s hardware:

  • Maximum RAM is now 4TB (Previously 2TB)
  • Virtual CPUs per host is now 4096 (Previously 2048)
  • NUMA nodes per host is now 16 (Previously 8)
  • Logical CPUs per host is now 320 (Previously 160)
  • Support for VMDK files up to 64TB on VMFS5 as well as NFS

The vCenter Server and Appliance have also seen some significant changes. The appliance will now support up to 500 vSphere hosts and up to 5,000 virtual machines.  The vSphere Web Client has been updated to support OS X for console access to your VMs, deploying templates, and attaching client devices (about time, considering most VMware employees are rocking MacBooks).  The Web Client updates also include new drag-and-drop functionality as well as filtering of searches inside the client.

Finally, there were many improvements to the SSO functionality within vCenter Server.  Most admins will admit this is the most painful part of any vSphere install or update.  VMware has listened, improving the integration between SSO and Microsoft Active Directory and simplifying the installation process.

Virtual SAN

One other piece that has caught my attention is the new Virtual SAN that is currently in its beta phase.  The Virtual SAN will allow you to use the local storage on your hosts to create a shared datastore for them to use.  Many of you may recognize this as VMware’s Virtual Storage Appliance or VSA.  However, the difference is that the Virtual SAN will be built into the ESXi 5.5 hypervisor.  At this time, it will still require an additional license and is expected to be released within the first half of 2014.

VMware NSX

NSX is VMware’s new network virtualization layer.  NSX will be a network hypervisor of sorts, providing a virtual layer over your physical network and enabling you to virtualize your networking.  NSX will be implemented as an extension of the virtual switch and will work with all existing network hardware.  VMware expects to release NSX by the end of 2013.

Overall, there are some very exciting things coming from VMware in the very near future, and they are absolutely changing the face of the data center.  I have already signed up for the Virtual SAN beta and am looking forward to getting it set up and tested in my environment.

MTU Mismatch and Slow I/O

After a month or two of troubleshooting some storage issues we had been having with our NetApp system, we dug up an interesting piece of information.  When reviewing the MTU size on the host and on the NetApp, we noticed that the host was set to 1500 while the NetApp interface was set to 9000.  Whoops!

Before troubleshooting, we were seeing I/O at a rate of about 2500 IOPS to the NetApp system. However, when making the MTU change to match on both the ESXi host and the NetApp, we saw IOPS jump to close to 10,000.  Just a quick breakdown of what was happening here:

  1. The host would send data with an MTU of 1500 to the NetApp.
  2. The NetApp would receive the data and try to reply with frames sized for an MTU of 9000.
  3. The switch would reject those frames, since its interface could only accept 1500.
  4. The NetApp would then have to repackage the data down to 1500.

Basically, we were doubling the time it took to return the data back to the host and in turn to the guest VM.  The slow I/O was due to the translation time on the NetApp to get the proper data back to the host.  The switch interface was also set at 1500 and was rejecting the traffic.

Word to the wise: always double-check MTU settings and ensure they match along the entire path back to the host.  Just another one of those things to have in your back pocket when troubleshooting.
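One quick way to validate the path from the ESXi side (a sketch; the IP below is a placeholder for your storage interface) is an oversized, non-fragmentable ping.  With jumbo frames configured end to end, a payload of 8972 bytes (9000 minus 28 bytes of IP and ICMP headers) should succeed; if any hop is still at 1500, it will fail:

```shell
# From the ESXi shell: -d sets the don't-fragment bit, -s sets the ICMP payload size
vmkping -d -s 8972 192.168.255.251

# A standard-MTU check for comparison (1500 - 28 = 1472)
vmkping -d -s 1472 192.168.255.251
```

If the 1472-byte ping works but the 8972-byte one does not, something in the path is still at 1500.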

Datastore not visible after upgrading to ESXi 5

After upgrading my dev datacenter and rebooting the first ESXi 5 host, I realized that one of my Fibre Channel datastores was missing.  The path to the datastore was still visible to the host under the HBA, but it was not showing as an available datastore in the storage view.  Upon investigation, the datastore had been tagged as a snapshot datastore and was not mounting properly to the host.  This can be confirmed by running the following:

esxcli storage vmfs snapshot list

You will see an output similar to:

<UUID>
   Volume Name: <VOLUME_NAME>
   VMFS UUID: <UUID>
   Can mount: true
   Reason for un-mountability:
   Can resignature: false
   Reason for non-resignaturability: the volume is being actively used
   Unresolved Extent Count: 2

Next, I had to force mount the datastore in CLI by first changing to “/var/log” and running:

esxcli storage vmfs snapshot mount -u <UUID> -l <VOLUME_NAME>

The mount will be persistent across reboots.  If you would like to make it non-persistent, you will need to add “-n” to your command.  Once it is run, check your host and the datastore should be showing as available again.  No reboot is needed and the change takes effect immediately.

You can also mount the datastore using the vSphere Client by following the steps below:

  1. Go to your host in question
  2. On the storage tab, click add storage
  3. Choose disk/LUN
  4. Find the LUN that is missing. If it is not shown, you will need to use the above steps to mount using CLI
  5. Under mount options, choose “Keep Existing Signature” to mount persistent across reboots
  6. Click through to finish

There are a few caveats to force mounting a datastore, though.  A snapshot volume can only be force mounted if a datastore with the same UUID is not already mounted.  Also, if you use the client to force mount the datastore, it cannot be mounted on other hosts in the same datacenter; you will need to use the CLI steps posted above to mount it on the other hosts.
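If you actually need the copy to live alongside the original datastore, the alternative to force mounting is resignaturing, which writes a new UUID to the volume.  Note that in my output above resignaturing was blocked because the volume was actively in use, so this only applies when the volume is idle.  A sketch, with the volume name as a placeholder:

```shell
# Write a new UUID to the snapshot volume; the mounted datastore
# comes back with a "snap-" prefix on its original name
esxcli storage vmfs snapshot resignature -l <VOLUME_NAME>
```

Keep in mind that resignaturing changes the UUID, so any VMs registered against the old datastore paths will need to be re-registered.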

For more information about this issue and steps to fix in ESX/ESXi 4 and 3.5, you can find the VMware KB here.

Free eBook: vSphere 5.0 Clustering Deepdive June 5th and 6th

I know that vSphere 5.0 has been out for quite some time and most of you out there are probably running 5.1 at this point, but hey it’s a free book!  The vSphere 5.0 Clustering Deepdive and vSphere 4.1 HA and DRS Deepdive will both be free on the Amazon Kindle store for two days only: June 5th and June 6th.


Both books are written by Duncan Epping and Frank Denneman, the two masterminds behind HA and DRS.  If you haven’t been reading their blogs yet, get on it. Although the books may be a version behind, there is still some great information here that anyone can use.  And if you are still running 5.0 (like me in my lab), then this is a great pickup for you.

The book goes into depth on the components of HA, DRS and Storage DRS and breaks them down to better understand the architecture behind them.  It will go over Resource Pools, Datastore Clustering, Resource Allocation and more.  And to quote the Amazon Kindle store: “This book is also the ultimate guide to be prepared for any HA, DRS or Storage DRS related question or case study that might be presented during VMware VCDX, VCP and or VCAP exams.”

So get downloading and enjoy some light reading!

vNUMA: A VMware Admin’s Guide

Since I’ve been discussing this recently at work with developers and managers due to some SQL Server issues we have been having, I decided it was time to write about it.  vNUMA was introduced with vSphere 5 back in the day and builds on NUMA (Non-Uniform Memory Access). In a nutshell, NUMA divides a host’s CPU and memory resources into nodes, and vNUMA exposes that layout to the virtual machine, allowing for faster memory access.

The image above shows how NUMA breaks the resources up into nodes.  This is also a setting that can be controlled within VMware.  vNUMA is typically used for your “Monster VMs”, where the virtual CPUs span multiple physical CPUs. For example, if your virtual machine has 12 vCPUs and your physical host has 8 cores per CPU, you are spanning multiple CPUs.  vNUMA is also recommended when using more than 8 vCPUs.  To change the NUMA settings (keep in mind that the VM must be powered off to make this change):

  1. Select your VM
  2. Choose Edit Settings…
  3. Go to the Options tab
  4. Go to Advanced -> General
  5. Click the button “Configuration Parameters”
  6. Add “numa.vcpu.min” and set it to the minimum vCPU count at which you want vNUMA enabled (to control how many vCPUs are placed in each NUMA node, use “numa.vcpu.maxPerVirtualNode”).
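The steps above amount to adding entries like these to the VM’s advanced configuration (a sketch only; the values shown assume six-core sockets and a 12-vCPU VM, not settings from any particular environment):

```text
numa.vcpu.min = "9"
numa.vcpu.maxPerVirtualNode = "6"
```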

One thing to keep in mind with NUMA is that you want to size the NUMA nodes according to the number of cores in your physical sockets.  For example, my physical hosts have six-core processors, so the NUMA node size on the SQL Server is set to 6.  This gives my 12-processor SQL Server 2 NUMA nodes with 6 vCPUs each.  When you create a VM with more than 8 vCPUs, vNUMA is enabled by default.  Using the instructions above, you can change the vNUMA values or enable it on VMs with fewer than 8 vCPUs.

vNUMA is very specific to certain use cases and should be used with caution.  Incorrectly configuring NUMA can cause more problems than leaving it at the default.  Be sure to test your settings on a non-production server and see if the results are as expected.  One final thing to keep in mind: ensure that the hosts in your cluster have similar NUMA configurations when vNUMA is in use, to avoid issues when the VM decides to vMotion to a different host.

Image courtesy of brentozar.com


Script to Add Multiple NFS Datastores to an ESXi Host

I am sure I’m not the first admin that has needed to add an NFS datastore to multiple hosts, and usually it’s multiple datastores that are needed as well. Normally, I would go to each host, and add the storage manually via the vSphere Client.  But after doing this for quite some time, I decided I needed a better way to get this task done.  Scripting!

I have recently started using PowerCLI to automate many of my daily tasks (hint: more scripts to come) and decided to share the one I’ve used the most first.  When run, the script prompts you for the host name; inside the script, you set the datastores you want to add to that host.  Those are the only changes needed.  This script has saved me a great deal of man-hours and some headaches.  A great advantage of scripting is that you avoid those sneaky spelling mistakes or clicking the wrong button.

The script is below (the datastore names, paths, and NFS host IP are what need to be changed by you):

$VMHost = (Read-Host "Please enter the name of the ESX host you wish to configure")

# Pop up a Yes/No confirmation dialog (4 = Yes/No buttons)
$shellObject = New-Object -ComObject wscript.shell
$intAnswer = $shellObject.Popup("Do you want to add the datastores?", 0, "Add datastores - remember you must have added the hosts on the storage", 4)
If ($intAnswer -eq 6) {
    # 6 means the user clicked Yes
    Write-Host "Creating Datastores on $VMHost..."
    New-Datastore -Nfs -VMHost $VMHost -Name DatastoreName1 -Path /vol/DatastoreName1 -NfsHost 192.168.255.251
    New-Datastore -Nfs -VMHost $VMHost -Name DatastoreName2 -Path /vol/DatastoreName2 -NfsHost 192.168.255.251
    New-Datastore -Nfs -VMHost $VMHost -Name DatastoreName3 -Path /vol/DatastoreName3 -NfsHost 192.168.255.251
} else {
    Write-Host "Skipping Datastores on $VMHost..."
}

When running the script, you will first be prompted to enter your host name.


You will then see a success message for each datastore added to the host. Remember that you must add the host to the NFS export on the storage itself before running the script.

Script courtesy of VMware PowerCLI Blog

VMware releases vCenter Server 5.1 Update 1a


VMware released vCenter Server 5.1 Update 1a on May 22nd in response to a known issue users were seeing when logging into vCenter.  Users with a large number of AD groups in their environment would see an error when logging in using SSO.  This has been fixed in Update 1a, and some other improvements were snuck in as well:

  • vCenter server can now run on Windows Server 2012
  • vCenter now supports SQL Server 2008 R2 and SQL Server 2012
  • You can now customize the following Guest OS’s: Windows 8, Windows Server 2012, RHEL 5.9 and Ubuntu 12.04
  • The vRAM limit of 192GB has been removed
  • And other bug fixes

The upgrade and installation process is the same as in previous releases.  You can do an in-place upgrade from 4.x and up, or you can always do a fresh install.  And remember that all the new features within vCenter can only be accessed through the Web Client, so ditch the desktop client and fully utilize your vCenter.

You can read the full version of VMware’s release notes for vCenter server 5.1u1a here.

New Blogger, Virtual Champion

After over 7 years in the IT business, I felt it appropriate to start my own blog to store the ideas and problems that I run into during my work.  This will be my place to share knowledge with the world and others in the IT industry.  I hope this blog helps you, and I will continue to update content regularly with tips, how-tos, and news in the virtual world.