Dell FX2 Blade Software iSCSI Adapters not persistent on reboot


I recently had an issue where the software iSCSI adapters on some Dell FX2 blades were not persistent across a reboot.  This meant that every time a host was rebooted, we had to go in and manually recreate the iSCSI adapter so the host could see its storage.  Obviously, that is not a solution.  The blades are running the latest version of ESXi 6.0 and all drivers on the blades are up to date as well, neither of which had any effect on the behavior.  After a bit of research I discovered the module parameters for the bnx2fc adapters.  One of them, bnx2fc_autodiscovery, controls automatic FCoE discovery during system boot, which is exactly what I was after.  By default it is disabled, which is why the adapters do not show up after the host is rebooted.

FX2 Blade Chassis


The command to see these parameters is:

esxcli system module parameters list -m bnx2fc

This will show you what the parameters are currently set to, and next to bnx2fc_autodiscovery you will see the value is blank, meaning it’s disabled.  In order to enable it, run the following:

esxcli system module parameters set -m bnx2fc -p "bnx2fc_autodiscovery=1"

Software iSCSI Adapter Autodiscovery
Listing the parameters again will show it set to 1, meaning autodiscovery is enabled and your software iSCSI adapters will be found automatically after a reboot.  Reboot the host and verify your adapter comes back up as expected.  To my knowledge, this issue has not been documented in any Dell or VMware KBs.
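If you manage a lot of blades, the post-reboot check can be scripted.  Here is a small sketch; the column layout (Name, Type, Value, Description) is an assumption based on typical esxcli output on ESXi 6.0, so adjust the awk fields if yours differs:

```shell
# check_autodiscovery: reads the output of
#   esxcli system module parameters list -m bnx2fc
# on stdin and reports whether bnx2fc_autodiscovery is enabled.
# A blank Value column means disabled, which is the default.
check_autodiscovery() {
  awk '$1 == "bnx2fc_autodiscovery" {
         found = 1
         if ($3 == "1") print "enabled"; else print "disabled"
       }
       END { if (!found) print "parameter not found" }'
}

# On the host:
#   esxcli system module parameters list -m bnx2fc | check_autodiscovery
```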
I hope this helps anyone else having an issue.

Released: vCenter and ESXi 6.0 Update 3 – What’s in It for Service Providers — via VIRTUALIZATION IS LIFE!

Last month I wrote a blog post on upgrading vCenter 5.5 to 6.0 Update 2, and during the course of writing it I conducted a survey on which version of vSphere most people were seeing out in the wild.  Overwhelmingly, vSphere 6.0 was the most popular version, with 5.5 second and 6.5 lagging in adoption for the moment.

VMware Social Media Advocacy

VMware Security Advisory for ESXi 6.0 and 5.5

VMware released a new security advisory today advising that ESXi versions 6.0 and 5.5 are vulnerable to cross-site scripting (XSS).  The details of the advisory can be found below, as well as the current solution.

Advisory ID: VMSA-2016-0023

Severity:    Important

Synopsis:    VMware ESXi updates address a cross-site scripting issue

Issue date:  2016-12-20

Updated on:  2016-12-20 (Initial Advisory)

CVE number:  CVE-2016-7463

  1. Summary

VMware ESXi updates address a cross-site scripting issue.


  2. Relevant Releases

VMware vSphere Hypervisor (ESXi)


  3. Problem Description

  a. Host Client stored cross-site scripting issue


The ESXi Host Client contains a vulnerability that may allow for stored cross-site scripting (XSS). The issue can be introduced by an attacker that has permission to manage virtual machines through ESXi Host Client or by tricking the vSphere administrator to import a specially crafted VM. The issue may be triggered on the system from where ESXi Host Client is used to manage the specially crafted VM.

VMware advises not to import VMs from untrusted sources.

VMware would like to thank Caleb Watt (@calebwatt15) for reporting this issue to us.

The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2016-7463 to this issue.


Column 4 of the following table lists the action required to remediate the vulnerability in each release, if a solution is available.
VMware   Product  Running  Severity   Replace with/         Workaround
Product  Version  on                  Apply Patch*
=======  =======  =======  ========   ====================  ==========
ESXi     6.5      ESXi     N/A        not affected          N/A
ESXi     6.0      ESXi     Important  ESXi600-201611102-SG  None
ESXi     5.5      ESXi     Important  ESXi550-201612102-SG  None

*The fling version which resolves this issue is 1.13.0.


  4. Solution

Please review the patch/release notes for your product and version and verify the checksum of your downloaded file.
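The checksum step is easy to script.  Below is a minimal sketch; the patch filename in the usage comment is hypothetical, and the expected digest is the value published alongside the download on the VMware patch portal:

```shell
# verify_checksum FILE EXPECTED: prints OK when FILE's SHA-256 digest
# matches EXPECTED (the value published with the patch download).
verify_checksum() {
  actual=$(sha256sum "$1" | awk '{ print $1 }')
  if [ "$actual" = "$2" ]; then
    echo "OK"
  else
    echo "MISMATCH: got $actual"
    return 1
  fi
}

# Example (hypothetical filename and digest):
#   verify_checksum ESXi600-201611102.zip 9f86d081884c7d65...
```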


ESXi 6.0





ESXi 5.5





  5. References


Disable the “This host currently has no management network redundancy” message

Let’s go over how to disable the “This host currently has no management network redundancy” message.  It’s annoying and we can get rid of the yellow triangles that show on the hosts due to this message.  And I know, you “should” have redundancy on your management network but we’re just not worried about it.  Our hosts are in our building and not at a co-lo so we have constant access to them in the event something happens and we need access.

Management Network Redundancy Warning

Since we don’t care about this warning, I wanted to hide it.  This way we can see if there are actual errors on the host and not some warning about network redundancy.  The fix is done with an advanced option in the cluster properties.  In the cluster properties, under vSphere HA, select Advanced Options.  Then add an option named das.ignoreRedundantNetWarning and set the Value to true.

ignoreRedundantNetWarning

And that’s it! Once the option is in, go to each host and reconfigure for vSphere HA.  The warning will then disappear and your vCenter will look clean again.

Update Manager fails to scan ESXi host

I had an issue where using Update Manager to scan a host would fail with a “Check log for errors blah blah blah” message.  The errors are never useful.  After investigating and checking the log, I found the following entry:

2015-06-02T20:57:30.312-07:00 [00504 info ‘VcIntegrity’] Error on logout (ignored): vim.fault.NotAuthenticated

When researching the issue, I came across a best practices guide that basically pointed out the error.  So as a general best practice, verify on your hosts, on the configuration tab, under DNS and Routing, that you have filled in all the fields. I had missed the Search Domain field on that particular host which was causing the scan to fail.

DNS and Routing

The fields with the blue boxes will cause the Update Manager scan failure, as well as other issues with DNS on the host.  Just another one of those things to check!
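You can also check this from the ESXi shell instead of clicking through the configuration tab.  A quick sketch; the exact “DNSSearchDomains:” output format is an assumption based on typical esxcli output on 5.x/6.x hosts:

```shell
# check_search_domain: reads the output of
#   esxcli network ip dns search list
# on stdin and flags the empty search-domain condition that broke the
# Update Manager scan.
check_search_domain() {
  domains=$(sed -n 's/^ *DNSSearchDomains: *//p')
  if [ -n "$domains" ]; then
    echo "search domains: $domains"
  else
    echo "no search domain set"
  fi
}

# On the host:
#   esxcli network ip dns search list | check_search_domain
```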

As always, leave questions or suggestions in the comments.


ESXi 5.1 Host out of sync with VDS

A recent issue I was having was that our ESXi 5.1 hosts would go “out of sync” with the VDS, and the only fix that worked was rebooting the host.  After digging into the log file, I discovered that the host was failing to get state information from the VDS.  The entries are below:

value = “Failed to get DVS state from vmkernel Status (bad0014)= Out of memory”,
          message = “Operation failed, diagnostics report: Failed to get DVS state from vmkernel Status (bad0014)= Out of memory”

The issue is a bug in the version of 5.1 we were running (Update 2 at the time): a memory leak on the host when your VMs use E1000 NICs.  Because these VMs were created a long time ago, they defaulted to the E1000 adapter.  The fix is updating to the latest build of ESXi, which resolves the leak.  And also, don’t use E1000 NICs, always go with VMXNET3.  Problem solved!
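To find which VMs are still on the E1000 before the leak bites, you can grep their .vmx files from the ESXi shell.  A sketch; it assumes the usual /vmfs/volumes datastore layout, so adjust the path for your environment:

```shell
# find_e1000_vms DIR: list .vmx files under DIR that configure any NIC
# with the e1000 driver; these are the VMs to migrate to vmxnet3.
# The quoted match deliberately excludes the separate "e1000e" driver.
find_e1000_vms() {
  find "$1" -name '*.vmx' -exec grep -l 'virtualDev = "e1000"' {} +
}

# On the host:
#   find_e1000_vms /vmfs/volumes
```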