How to manage ESXi hosts remotely with PowerCLI


When choosing to administer VMware-based virtual machines (VMs), administrators have a few decisions to make when prepping their bare-metal hosts and configuring the guest OSes, the storage spaces, and switches used to communicate with each other and across networks. The choices center on how to…Read More
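The full article covers the details, but the basic remote-management loop in PowerCLI looks like this (a minimal sketch; the server names are placeholders, not from the article):

```powershell
# Connect to a vCenter (or a standalone ESXi host) and take inventory
Connect-VIServer -Server vcenter.lab.local -Credential (Get-Credential)

# List hosts with their state and build
Get-VMHost | Select-Object Name, ConnectionState, Version, Build

# List the VMs running on one host
Get-VMHost -Name esxi01.lab.local | Get-VM | Select-Object Name, PowerState

Disconnect-VIServer -Confirm:$false
```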


VMware Social Media Advocacy

What’s new for vSAN 6.6?

What’s new for vSAN 6.6? [www.yellow-bricks.com]


Yes this may confuse you a bit, a new vSAN release namely vSAN 6.6 but it doesn’t coincide with a vSphere release. That is right, this is a “patch” release for vSphere but a major version for vSAN! It seems like yesterday that we announced 6.2 with Stretched Clustering and 6.5 with iSCSI and 2-Node Direct Connect. vSAN 6.6 brings some exciting new functionality and a whole bunch of improvements. Note that there were already various performance enhancements introduced in vSphere 6.0 Update 3 for vSAN 6.2. Anyway, what’s new for vSAN 6.6?


VMware Social Media Advocacy

SuperMicro vs Intel NUC

A recent debate among homelabbers has been SuperMicro vs. Intel NUC.  Both have pros and cons.  I personally went with the Intel NUC for my homelab, creating a single-node vSAN.  The article below gives a great rundown of both systems for the homelab.

SuperMicro vs Intel NUC

A couple of weeks ago I was talking to William Lam (http://www.virtuallyghetto.com/) and Alan Renouf (http://www.virtu-al.net/) about their exciting USB to SDDC demonstration, they were using an Intel NUC to deploy a VMware SDDC environment to a single node using VSAN. I offered them the opportunity to test out the same capability with one of my SuperMicro E200-8D servers and they took me up on the opportunity. Since then I have been approached by a number of people with requests for information about why I chose to go with the SuperMicro E200 for my home lab over the Intel NUC. I’ve never written a blog before but I thought this might be a good way to “cut out the middle man” so to speak. So here it goes, my reasons for why I chose the…Read More


VMware Social Media Advocacy

Released: vCenter and ESXi 6.0 Update 3 – What’s in It for Service Providers — via VIRTUALIZATION IS LIFE!

Last month I wrote a blog post on upgrading vCenter 5.5 to 6.0 Update 2, and during the course of writing that post I conducted a survey on which version of vSphere most people were seeing out in the wild. Overwhelmingly, vSphere 6.0 was the most popular version, with 5.5 second and 6.5 lagging in adoption for the moment.


VMware Social Media Advocacy

VMware Security Advisory for ESXi 6.0 and 5.5

VMware released a new security advisory today advising that ESXi versions 6.0 and 5.5 are vulnerable to cross-site scripting (XSS).  The details of the advisory, as well as the current solution, can be found below.

Advisory ID: VMSA-2016-0023

Severity:    Important

Synopsis:    VMware ESXi updates address a cross-site scripting issue

Issue date:  2016-12-20

Updated on:  2016-12-20 (Initial Advisory)

CVE number:  CVE-2016-7463

  1. Summary

VMware ESXi updates address a cross-site scripting issue.

 

  2. Relevant Releases

VMware vSphere Hypervisor (ESXi)

 

  3. Problem Description
  a. Host Client stored cross-site scripting issue

 

The ESXi Host Client contains a vulnerability that may allow for stored cross-site scripting (XSS). The issue can be introduced by an attacker that has permission to manage virtual machines through the ESXi Host Client, or by tricking the vSphere administrator into importing a specially crafted VM. The issue may be triggered on the system from which the ESXi Host Client is used to manage the specially crafted VM.

VMware advises not to import VMs from untrusted sources.

VMware would like to thank Caleb Watt (@calebwatt15) for reporting this issue to us.

The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2016-7463 to this issue.

 

Column 4 of the following table lists the action required to remediate the vulnerability in each release, if a solution is available.

VMware   Product  Running             Replace with/
Product  Version  on       Severity   Apply Patch*          Workaround
=======  =======  =======  =========  ====================  ==========
ESXi     6.5      ESXi     N/A        not affected          N/A
ESXi     6.0      ESXi     Important  ESXi600-201611102-SG  None
ESXi     5.5      ESXi     Important  ESXi550-201612102-SG  None

*The fling version which resolves this issue is 1.13.0.

 

  4. Solution

Please review the patch/release notes for your product and version and verify the checksum of your downloaded file.

 

ESXi 6.0

————-

Downloads:

https://www.vmware.com/patchmgr/findPatch.portal

Documentation:

http://kb.vmware.com/kb/2145815

 

ESXi 5.5

————

Downloads:

https://www.vmware.com/patchmgr/findPatch.portal

Documentation:

http://kb.vmware.com/kb/2148194

 

  5. References

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-7463

Source: http://www.vmware.com/security/advisories/VMSA-2016-0023.html
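To check whether your hosts still need the patch, you can compare each host's build number against the build shipped with the -SG bulletins above. A minimal PowerCLI sketch (it assumes you are already connected to vCenter with Connect-VIServer):

```powershell
# List the version and build of every connected host so you can
# compare against the builds delivered by the security patches.
Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name
```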

Disable the “This host currently has no management network redundancy” message

Let’s go over how to disable the “This host currently has no management network redundancy” message.  It’s annoying, and hiding it gets rid of the yellow triangles that show on the hosts because of it.  And I know, you “should” have redundancy on your management network, but we’re just not worried about it.  Our hosts are in our building, not at a co-lo, so we have constant access to them in the event something happens.

Since we don’t care about this warning, I wanted to hide it.  This way we can see actual errors on the host instead of a warning about network redundancy.  The fix is an advanced option in the cluster properties.  In the cluster properties, under vSphere HA, select Advanced Options.  Then add an option named das.ignoreRedundantNetWarning and set the value to true.

And that’s it! Once the option is in, go to each host and reconfigure for vSphere HA.  The warning will then disappear and your vCenter will look clean again.
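The same steps can be scripted with PowerCLI. A sketch, assuming an existing vCenter connection (the cluster name "Production" is a placeholder):

```powershell
# Add the advanced option to the cluster's HA configuration
$cluster = Get-Cluster -Name "Production"
New-AdvancedSetting -Entity $cluster -Type ClusterHA `
    -Name "das.ignoreRedundantNetWarning" -Value "true" -Confirm:$false

# Reconfigure HA on each host in the cluster so the warning clears
Get-VMHost -Location $cluster | ForEach-Object {
    $_.ExtensionData.ReconfigureHostForDAS()
}
```

ReconfigureHostForDAS is the same per-host "Reconfigure for vSphere HA" action you would otherwise click in the client.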

Update Manager fails to scan ESXi host

I had an issue where Update Manager would fail to scan a host with a “Check log for errors blah blah blah” message.  The errors are never useful.  After investigating and checking the log, I found the following entry:

2015-06-02T20:57:30.312-07:00 [00504 info ‘VcIntegrity’] Error on logout (ignored): vim.fault.NotAuthenticated

When researching the issue, I came across a best practices guide that basically pointed out the error.  So, as a general best practice, verify on your hosts, on the Configuration tab under DNS and Routing, that you have filled in all the fields.  I had missed the Search Domain field on that particular host, which was causing the scan to fail.

The fields with the blue boxes will cause the Update Manager scan failure as well as other issues with DNS on the host.  Just another one of those things to check!
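Rather than clicking through each host, a quick PowerCLI audit (assuming an existing vCenter connection) makes an empty Search Domain stand out across the whole environment:

```powershell
# List the DNS configuration of every host; blank DomainName or
# SearchDomain values are the ones that break Update Manager scans.
Get-VMHost | Get-VMHostNetwork |
    Select-Object HostName, DomainName, SearchDomain, DnsAddress
```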

As always, leave questions or suggestions in the comments.

 

ESXi 5.1 Host out of sync with VDS

A recent issue I was having was that our ESXi 5.1 hosts would go “out of sync” with the VDS.  The only fix that would work was rebooting the host.  After digging into the log file, I discovered that the host was failing to get state information from the VDS. The entries are below:

value = “Failed to get DVS state from vmkernel Status (bad0014)= Out of memory”,
             }
          ],
          message = “Operation failed, diagnostics report: Failed to get DVS state from vmkernel Status (bad0014)= Out of memory”

The issue is a bug in the version of 5.1 we were running (Update 2 at the time): a memory leak on the host when using E1000 NICs in your VMs.  Because these VMs were created a long time ago, they defaulted to the E1000.  The fix is updating to the latest build of ESXi, which resolves the leak.  Also, don’t use E1000 NICs; always go with VMXNET3.  Problem solved!
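Finding the offending VMs by hand is tedious, so here is a hedged PowerCLI sketch (the VM name "legacy-vm01" is a placeholder; note that swapping the adapter type recreates the NIC, so the guest sees a new device and should be powered off or in a maintenance window):

```powershell
# Report every VM that still has an E1000 adapter
Get-VM | Get-NetworkAdapter |
    Where-Object { $_.Type -eq "e1000" } |
    Select-Object Parent, Name, Type

# Replace the adapter on a specific (ideally powered-off) VM
Get-VM -Name "legacy-vm01" | Get-NetworkAdapter |
    Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
```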

Lost Path Redundancy to Storage Device

After installing 3 new hosts, I kept getting Storage Connectivity errors stating “Lost path redundancy to storage device naa…….”.  We had two fibre cards, and one of the paths was being marked as down.  I spent a couple of weeks troubleshooting and trying different path selection techniques.  Still, we would randomly get alerts that the redundant path had gone down.  The only fix was to reboot the host, as not even a rescan would bring the path back up.

So after some trial and error, I found a solution.  The RCA isn’t necessarily complete yet, but I believe it was a problem with the fibre switch running outdated firmware combined with the new fibre cards in our hosts.  When using the Fixed path selection policy, it would randomly pick an HBA to use for each datastore: some datastores would use path 2 and some would use path 4.

The solution I came up with was to manually set the preferred path on each datastore (we have about 40, so it was no easy task).  Go into your host configuration, choose Storage, pick a datastore, and open its Properties.  Inside this window, select Manage Paths from the bottom right and you should see your HBAs listed.  There is a column marked Preferred with an asterisk showing which HBA is preferred for the datastore (see the image below).  I went through and manually set the preferred path to be hba2 instead of letting VMware pick the path.  The path selection also persists across reboots when set manually.

Since manually setting the preferred path, the hosts have been stable and we have not gotten any more errors about path redundancy.  This is pretty much a band-aid fix, but at least we are not rebooting hosts 2-3 times per week.
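Setting 40 preferred paths by hand can also be scripted. A sketch under stated assumptions: the host name is a placeholder, vmhba2 is the adapter we happened to prefer, and the adapter-key match is one way to pick the path through a specific HBA:

```powershell
# For every disk LUN on a host, set Fixed multipathing with a
# preferred path that runs through vmhba2.
$vmhost = Get-VMHost -Name "esxi01.lab.local"
foreach ($lun in Get-ScsiLun -VmHost $vmhost -LunType disk) {
    # Find a path whose adapter key references vmhba2
    $path = Get-ScsiLunPath -ScsiLun $lun |
        Where-Object { $_.ExtensionData.Adapter -like "*vmhba2" } |
        Select-Object -First 1
    if ($path) {
        Set-ScsiLun -ScsiLun $lun -MultipathPolicy Fixed -PreferredPath $path
    }
}
```

Like the manual method, the preference set this way persists across host reboots.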