Showing posts from February, 2016

vSphere: VUM (Update Manager) had an unknown error.

Summary: There is a KB article about this; it basically happens when the metadata zip file is missing. In my case, it happened when I moved vCenter from one OS version to another, by way of migrating from the old VM to a new VM. Essentially, I needed to move all my metadata files from my old vCenter, which happened to house VUM as well, over to the new one. With a default install, that location is C:\VMware\VMware Update Manager\Data. The folder in particular is hostupdate, and it contains the metadata_###### files that the logs refer to. So if you still have the old server, you can simply copy them back over. Otherwise, your only recourse is to reinstall and clear the VUM database.
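A rough sketch of the copy-back, assuming the default install path on both machines and that the old server is still reachable over the admin share (the OLDVC hostname and the VUM service name here are assumptions; check your own environment):

```shell
REM Stop VUM before touching its data folder (service name may differ by version)
net stop "VMware vSphere Update Manager Service"

REM Pull the hostupdate metadata back from the old server
robocopy "\\OLDVC\C$\VMware\VMware Update Manager\Data\hostupdate" ^
         "C:\VMware\VMware Update Manager\Data\hostupdate" /E

net start "VMware vSphere Update Manager Service"
```

If the copy succeeds, remediation should stop throwing the unknown error without a reinstall.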

vSphere: Big Data Extensions (Also how to increase heap size in vSphere 6)

Summary: Installing BDE from VMware is pretty easy, but there are some requirements you need to meet prior to deployment: forward and reverse DNS lookup records for your BDE appliance, and making sure your ESXi hosts and vCenters are NTP synced. Anyway, regarding the above error: "Certificate does not have a valid chain and is invalid." Assuming both prereqs and any others listed in the BDE documentation are met, the only way I've been able to work around this problem is by increasing the vSphere Web Client's max heap size from 2GB to 4GB. This took some detective work from my TAM, but he found me a way to increase a specific service's heap size in 6.0. Here are the commands you will need to raise the web client's heap to a size appropriate for your environment, one the dynamic sizing may not land on. This is for the vCenter Appliance, but the same applies for a Windows server.

cloudvm-ram-size -C 4096 vsphere-client
service vsphere-client restart

Here i
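A quick way to sanity-check that the new ceiling actually took effect after the restart, sketched for the appliance shell (the grep pattern assumes the web client's Java process carries an -Xmx flag on its command line, which may vary by build):

```shell
# Look for the max-heap flag on the running vsphere-client Java process
ps aux | grep vsphere-client | grep -o '\-Xmx[0-9]*[mMgG]'
```

You want to see a value reflecting 4096m (or whatever you set) rather than the old 2GB cap.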

PSA: DO NOT UPGRADE from 5.0/5.1 straight to 5.5 U3b

Really, VMware!? Here is the KB: [UPDATE: Patch released that should fix this issue: ] Basically, you'll end up with some 5.0 hosts that are overloaded with VMs, assuming you used Update Manager to do your updates. In my case, I had 13 hosts on 5.5 with 2 hosts overloaded on 5.0. So here is my workaround to keep VMs up and running without rebooting them:

Fresh install ESXi 5.5 U2 on some hosts that were already upgraded to 5.5 U3b. In my case, most of my 5.5 U3b hosts were empty.
Once 5.5 U2 is installed, you should be able to successfully migrate from 5.0 to 5.5 U2.
Follow that up by migrating from 5.5 U2 to your remaining 5.5 U3b hosts.

This worked for me and saved my arse. Hope you don't run into this, and I'm sorry for all those previous to me that actually followed that stupid KB. On the flip side, a PERFECT case as
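Before planning the hop, it helps to confirm exactly which build each host is running. A minimal sketch over SSH, assuming root access and hypothetical hostnames (esx01 through esx03); `vmware -v` prints the ESXi version and build string:

```shell
# Check the running ESXi build on each host in the cluster
for h in esx01 esx02 esx03; do
    echo -n "$h: "
    ssh root@"$h" vmware -v
done
```

That makes it obvious which hosts are the stranded 5.0 ones, which are safe 5.5 U2 landing spots, and which are already on U3b.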

Nutanix: Deploying the Dell XC series

Adventures in deploying the new Dell XC (Nutanix) series systems: the initial install of a Nutanix-based system.

Prerequisites (per XC630 1U system):
2x 10Gb ports <-- trunk ports
1x iDRAC port <-- this is for your out-of-band management; we get these DHCP-enabled by default so we can access them the minute they're connected
IPv6 link-local enabled on the switch (recommended/preferred); typically enabled by default on modern switches. This enables the Nutanix Controller VMs to discover each other immediately.

You'll need to attach a device physically to that switch, or a VM to that switch, to start configuration. This will allow you to set up via a snazzy web interface. If IPv6 link-local is unavailable on the switches, then the setup involves logging into each CVM to perform manual cluster creation. This can be done by logging into each ESXi host's shell and SSHing into each CVM's local network connection attached to the vSwitchNutanix interface. This c
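The manual path can be sketched roughly like this, assuming the standard internal CVM address on vSwitchNutanix and hypothetical CVM management IPs (10.0.0.31-33); check the Nutanix setup guide for your exact AOS version before running anything:

```shell
# From an ESXi host's shell, hop to the local CVM over the internal interface
ssh nutanix@192.168.5.254

# On one CVM, create the cluster from the list of CVM IPs (addresses are placeholders)
cluster -s 10.0.0.31,10.0.0.32,10.0.0.33 create

# Verify that all services come up on every node
cluster status
```

Once the cluster reports up, you can carry on with the normal Prism-side configuration.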