vSphere: VUM (Update Manager) had an unknown error.

Summary:
There is a KB article about this; it basically happens when the metadata zip file is missing.  In my case, it happened when I moved vCenter from one OS version to another, by way of an old VM to a new VM.

Essentially, I needed to move all of my metadata files from my old vCenter server, which also happened to house VUM, over to the new one.

For a default install, this location is typically:
C:\VMware\VMware Update Manager\Data

The folder in particular is hostupdate, which contains the metadata_###### file that the logs refer to.  So if you still have the old server, you can simply copy it back over.
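
If both servers are online at the same time, something like the following, run from the old server, should do it.  This is just a rough sketch: it assumes the same default path on both boxes, that the new server is reachable over its C$ admin share, and NEWVCENTER is a placeholder name; stop the Update Manager service on the destination before copying.
robocopy "C:\VMware\VMware Update Manager\Data\hostupdate" "\\NEWVCENTER\C$\VMware\VMware Update Manager\Data\hostupdate" /E

Start the Update Manager service back up afterwards and the missing-metadata errors should clear.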

Otherwise, your only recourse is to reinstall and clear the VUM database.

vSphere: Big Data Extensions (Also how to increase heap size in vSphere 6)


Summary:
Installing BDE from VMware is pretty easy, but there are some requirements that you need to meet prior to deployment.
  1. Forward and reverse DNS lookup records for your BDE appliance (quick check below).
  2. Make sure your ESXi hosts and vCenter servers are NTP synced.
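
On the DNS piece, a quick sanity check before deploying the OVA saves a lot of grief.  Something like this, where bde01.lab.local and 10.0.0.50 are just placeholder values for your BDE appliance's FQDN and IP:
nslookup bde01.lab.local
nslookup 10.0.0.50

Both lookups should come back clean and point at each other; if the reverse record is missing, fix that before troubleshooting anything else.
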
Anyway, on to the error in question: Certificate does not have a valid chain and is invalid.

Assuming both prerequisites (and any others listed in the BDE documentation) are met, the only way I've been able to work around this problem is by increasing the vSphere Web Client's max heap size from 2GB to 4GB.

This took some detective work from my TAM, but he found me a way to increase a specific service's heap size in 6.0.  Here are the commands you'll need to increase the Web Client's heap to a size appropriate for your environment, since the dynamic sizing may not get it right on its own.

These commands are for the vCenter Server Appliance, but the same heap increase applies to vCenter on Windows as well.
cloudvm-ram-size -C 4096 vsphere-client
service vsphere-client restart
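
After the restart, it's worth a quick check that the Web Client service actually came back up before blaming BDE again.  Nothing fancy, just the init script status and a process check should do it:
service vsphere-client status
ps -ef | grep -i vsphere-client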

Here is the doc where this nugget is hidden:
http://www.vmware.com/files/pdf/techpaper/VMware-PerfBest-Practices-vSphere6-0.pdf
Page 61 to be exact.


PSA: DO NOT UPGRADE from 5.0/5.1 straight to 5.5 U3b

Really VMware!?
Here is the KB: https://kb.vmware.com/kb/2143943
[UPDATE: Patch released that should fix this issue: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144357]
Basically, you'll end up w/ some 5.0 hosts overloaded w/ VMs that you can't vMotion over to the upgraded hosts, assuming you used Update Manager to do your updates.  In my case, I had 13 hosts on 5.5 w/ 2 hosts overloaded on 5.0.

So here is my workaround to keep VMs up and running w/o rebooting them:

  1. Fresh-install ESXi 5.5 U2 on some hosts that were already upgraded to 5.5 U3b.
    1. In my case, most of my 5.5 U3b hosts were empty.
  2. Once 5.5 U2 is installed, you should be able to successfully migrate from 5.0 to 5.5 U2 (see the version check below to confirm what each host is running).
  3. Follow that up by migrating from 5.5 U2 to your remaining 5.5 U3b hosts.
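
During the shuffle it's easy to lose track of which host is running which build, so before each hop I'd confirm it from the host's shell (or over SSH) with something like:
vmware -vl
esxcli system version get

Match the build number against the release notes so you know exactly which hosts are safe 5.5 U2 targets for the VMs still stuck on 5.0.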

This worked for me and saved my arse.  Hope you don't run into this, and I'm sorry for everyone before me who actually followed that stupid KB.

On the flip side, this is a PERFECT case for why you might want to implement stateless caching for your ESXi hosts.  If I'm thinking of it correctly, it would have been an easy way for me to swap versions.  Will need to explore that more.

Exact Error:
Error when attempting to vMotion:
Migration to host <> failed with error Already disconnected (195887150). 
vMotion migration [-1062717213:1455646378729101] failed writing stream completion: Already disconnected
vMotion migration [-1062717213:1455646378729101] failed to flush stream buffer: Already disconnected
vMotion migration [-1062717213:1455646378729101] socket connected returned: Already disconnected

Nutanix: Deploying the Dell XC series

Adventures in deploying the new Dell XC (Nutanix) series systems: the initial install of a Nutanix-based system.

Prerequisites (per XC630 1U system):
  1. 2x 10Gb Ports <-- Trunk Ports
  2. 1x iDRAC Port <-- This is for your out-of-band management.
    • We get these DHCP-enabled by default so we can access them the minute they're connected.
  3. IPv6 Link-Local Enabled on switch (Recommended/Preferred)
    • Typically enabled by default on modern switches
    • This enables the Nutanix Controller VMs to discover each other immediately.
    • You'll need to physically attach a device to that switch
      • Or attach a VM to that switch to start configuration.
    • This will allow you to do the setup via a snazzy web interface.
  4. If IPv6 link-local is unavailable on the switches, then the setup involves logging into each CVM to perform manual cluster creation (sketched below).
    • This can be done by logging into each ESXi host's shell and SSHing into each CVM over the local network attached to the vSwitchNutanix interface.
      • This works because Nutanix already has a vmk interface created on the same local network.
      • Nutanix has an advanced setup doc on their support portal to walk you through the manual cluster creation process, although it's a bit difficult to locate, in my opinion.
    • Log into the Controller VMs on each host and assign them IP addresses on ESXi's management network so they can discover each other.
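
For what it's worth, the manual path ends up looking roughly like this.  Treat it as a sketch from memory, not a substitute for the Nutanix doc: 192.168.5.2 is the CVM's internal address on the vSwitchNutanix network (reachable from the ESXi shell), and the 10.10.10.x addresses are placeholders for the management-network IPs you've assigned to each CVM.
# from the ESXi shell, hop onto the local CVM over the internal vSwitchNutanix network (stock nutanix account)
ssh nutanix@192.168.5.2
# once every CVM has its management-network IP, create the cluster from any one of them
cluster -s 10.10.10.31,10.10.10.32,10.10.10.33 create
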
Dell will typically ship these out with ESXi pre-installed, although Nutanix's Acropolis and Hyper-V are options as well.

In our case, ESXi is currently our preferred hypervisor, so that's what we received.
Front of the Dell XC630s

Back of the Dell XC630s.  It's a lab, so yeah, it's a mess.  STOP LAUGHING!
Setup is relatively easy assuming you have all the prereqs in place.  We did not have IPv6 link-local enabled, so setup was a bit more cumbersome than I would have liked.  Once it's all set up, though, this Nutanix system is one sexy beast.

Enabling IPv6 link-local is highly recommended because adding new nodes becomes a cakewalk.  Prism will automatically detect newly added ESXi hosts and introduce them into the cluster.  So if there's one thing to take away, it's that IPv6 link-local is a must.

Next, I'll dive into Prism and give my opinion/reflection on that.