vSphere/PowerCLI: Convert to Virtual Machine is Greyed Out

Summary:
This occurred in my environment even though permissions were correct, and I'm not sure why.  Regardless, here is a script you can use to re-register multiple templates in your vCenter's inventory.

It simply gathers a list of templates along with their folder location, host, etc., removes each one from inventory, and re-registers it exactly where it was.  This is in relation to VMware KB 2037005.
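A minimal PowerCLI sketch of that approach (an illustration, not necessarily how the linked script is written — it assumes an active Connect-VIServer session, and the MoRef-based Get-Folder/Get-VMHost lookups are my own shortcut).  Test against a single throwaway template first:

```powershell
# Sketch only -- snapshot each template's identity before touching anything
$inventory = Get-Template | Select-Object Name,
    @{N='VmtxPath'; E={ $_.ExtensionData.Config.Files.VmPathName }},
    @{N='Folder';   E={ Get-Folder  -Id $_.ExtensionData.Parent }},
    @{N='VMHost';   E={ Get-VMHost  -Id $_.ExtensionData.Runtime.Host }}

foreach ($t in $inventory) {
    # Remove from inventory only -- without -DeletePermanently the files stay on the datastore
    Get-Template -Name $t.Name | Remove-Template -Confirm:$false

    # Register the .vmtx back in the same folder, on the same host
    New-Template -TemplateFilePath $t.VmtxPath -Name $t.Name `
        -Location $t.Folder -VMHost $t.VMHost
}
```

Capturing the inventory first matters: once Remove-Template runs, the folder and host associations are gone from vCenter, so they have to be recorded up front.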

vSphere: Beta Program


VMware is accepting applications for its vSphere Beta Program from anyone who has vSphere 5.5 and/or 6.0 deployed in their environment, even partially.
There are quite a number of expectations, so be prepared to really engage with VMware:

  • Online acceptance of the Master Software Beta Test Agreement will be required prior to visiting the Private Beta Community
  • Install beta software within 3 days of receiving access to the beta product
  • Provide feedback within the first 4 weeks of the beta program
  • Submit Support Requests for bugs, issues and feature requests
  • Complete surveys and beta test assignments
  • Participate in the private beta discussion forum and conference calls
The obvious and not so obvious benefits are as follows:
  • Receive early access to the vSphere Beta products
  • Interact with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers
  • Provide direct input on product functionality, configurability, usability, and performance
  • Provide feedback influencing future products, training, documentation, and services
  • Collaborate with other participants, learn about their use cases, and share advice and learnings
Sign up here:
http://info.vmware.com/content/35853_VMware-vSphere-Beta_Interest?src=vmw_so_vex_cnaka_471

vSAN: Configure an all-flash vSAN using PowerCLI

Here's a script I'm putting together to configure new all-flash vSAN clusters.  It's still a work in progress; I plan on making it into a function once I've worked out the kinks.  It's hosted on gist.github.com, so feel free to make suggestions.
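For context, the rough shape of such a script might look like this — a sketch under several assumptions (PowerCLI 6.3+ for `Get-EsxCli -V2`, manual disk claiming, the smallest local flash device per host used as the cache tier; the cluster name is a placeholder):

```powershell
# Sketch only -- assumes an active Connect-VIServer session
$cluster = Get-Cluster -Name 'AF-vSAN-Cluster'
Set-Cluster -Cluster $cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false

foreach ($vmhost in ($cluster | Get-VMHost)) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Assumption: all local SSDs participate; smallest one becomes the cache tier
    $flash    = Get-ScsiLun -VMHost $vmhost -LunType disk |
                Where-Object { $_.IsSsd -and $_.IsLocal }
    $cache    = ($flash | Sort-Object CapacityGB)[0]
    $capacity = $flash | Where-Object { $_.CanonicalName -ne $cache.CanonicalName }

    # All-flash requires capacity devices to be tagged capacityFlash
    foreach ($d in $capacity) {
        $esxcli.vsan.storage.tag.add.Invoke(@{ disk = $d.CanonicalName; tag = 'capacityFlash' })
    }

    New-VsanDiskGroup -VMHost $vmhost -SsdCanonicalName $cache.CanonicalName `
        -DataDiskCanonicalName ($capacity | ForEach-Object CanonicalName)
}
```

The capacityFlash tagging step is what distinguishes an all-flash build from hybrid — without it, vSAN won't claim flash devices for the capacity tier.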


vSAN: Rebuilding an ESXi host that has vSAN claimed disks...

Summary:
While configuring my hosts, I ran into various issues.  One host simply decided to stop talking, and the hostd service became unstable.  This meant vCenter could not access the ESXi host to manage it.  One issue I had was that my hosts were missing PTR entries, but even with that resolved, one host still had issues.

Quick Fix (Assumes no data on vSAN disks, use info at your own risk):
Assuming you have vSAN claimed disks, this is how you can clear them up.
  1. Gather your list of disks on the host using this command:
    • ls /vmfs/devices/disks
  2. Entries appended with a :1 or :2 are typically your vSAN disks; you can double-check using this command:
    • partedUtil getptbl /vmfs/devices/disks/naa.#################
    • The returned partition table for a vSAN-claimed disk typically lists partitions of type vsan and virsto
  3. Once you've determined which ones have those partitions, delete them:
    1. partedUtil delete /vmfs/devices/disks/naa.################# 1
    2. partedUtil delete /vmfs/devices/disks/naa.################# 2
  4. Once all have been deleted, restart services:
    • services.sh restart
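The steps above can be strung together in the ESXi shell.  This sketch only prints the delete commands so you can review them before running anything (it assumes vSAN partitions report the vsan/virsto types noted above — same disclaimer as before, use at your own risk):

```shell
# Print (not run) cleanup commands for vSAN-claimed disks
for disk in /vmfs/devices/disks/naa.*; do
  case "$disk" in *:*) continue ;; esac   # skip the :1/:2 partition entries
  # vSAN-claimed disks show partitions of type "vsan" / "virsto"
  if partedUtil getptbl "$disk" | grep -qE 'vsan|virsto'; then
    echo partedUtil delete "$disk" 1
    echo partedUtil delete "$disk" 2
  fi
done
echo services.sh restart
```

Once you're satisfied the list is correct, run the printed commands (or pipe the output to sh).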

Details:
After rebuilding the host from ISO, it continued to exhibit issues.  When I tried adding it back to vCenter after the rebuild (mind you, I still had vSAN turned on and set to automatic on the cluster), the task reached 80% and then failed with the following error:

A general system error occurred: Unable to push CA certificates and CRLs to host stupidESXihost.mydomain.local

Attempting to log in to the box directly via the fat client simply returned:

An unknown connection error occurred. (The server could not interpret the client's request. (The remote server returned an error: (503) Server Unavailable.))

After this, I rebuilt the host from ISO again, but this time with vSAN turned off on my cluster object.  Unfortunately, the damage had already been done: my vSAN disks were still claimed by vSAN, as indicated by the # symbol next to them on the install screens.

This appeared to cause the ESXi host to go into a 503 error state even after being rebuilt from scratch.  I had to actually delete the partitions on the vSAN-claimed disks and restart the services to get the host back into a healthy state.

Helpful Info:
http://www.virtuallyghetto.com/2013/09/additional-steps-required-to-completely.html
* The ESXCLI method described by Lam doesn't work in this case because the application server is in a 503 state, so no API/CLI methods are available.

Nutanix: Role Mapping Quirk



Summary:
I was basically trying to map a set of AD groups to the Cluster Admin role in Nutanix/Prism.  It appears the role mapping config is very literal.  Meaning, putting in groups like this:

GroupA, GroupB

GroupA will work, but members of GroupB will not have access.  This is because of the space after the comma.  Valid input would be:

GroupA,GroupB
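
If you build the mapping string programmatically, stripping stray whitespace up front avoids the quirk entirely.  A minimal PowerShell sketch (the group names are just examples):

```powershell
# Collapse "GroupA, GroupB" into the literal format Prism expects
$raw   = 'GroupA, GroupB'
$clean = ($raw -split ',' | ForEach-Object { $_.Trim() }) -join ','
$clean   # -> GroupA,GroupB
```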