Prerequisites (per XC630 1U system):
- 2x 10Gb Ports <-- Trunk Ports
- 1x iDRAC Port <-- This is for your out-of-band management.
  - We get these DHCP-enabled by default so we can access them the minute they're connected.
- IPv6 link-local on the switch. Typically enabled by default on modern switches.
  - This enables the Nutanix Controller VMs to discover each other immediately.
- A device physically attached to that switch, or a VM on it, to start configuration.
  - This will allow you to set up via a snazzy web interface.
- Alternatively, this can be done by logging into each ESXi host's shell and SSHing into each CVM over the local network connection attached to the vSwitchNutanix interface.
  - This works because Nutanix already has a vmk interface created on the same local network.
- Nutanix has an advanced setup doc on their support portal that walks you through the manual cluster creation process, although it's a bit difficult to locate, in my opinion.
- Log into the Controller VMs on each host and assign them an IP address on ESXi's management network so they can discover each other.
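As a rough sketch of that manual path (the 192.168.5.254 internal CVM address is the Nutanix default on vSwitchNutanix, but verify it against the setup doc for your AOS version; the management-network IPs below are placeholder examples, not real values from our lab):

```shell
# From the ESXi host's shell, reach the local CVM over the internal
# vSwitchNutanix network. 192.168.5.254 is the default internal CVM
# address; credentials are in Nutanix's documentation.
ssh nutanix@192.168.5.254

# After assigning each CVM an address on the ESXi management network
# (per the advanced setup doc), create the cluster from any one CVM.
# The three CVM IPs below are placeholders -- substitute your own.
cluster -s 10.0.0.51,10.0.0.52,10.0.0.53 create
```

Once `cluster create` finishes, the web interface becomes reachable on the CVM addresses and the rest of the setup can be done from there.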
In our case, ESXi is currently our preferred hypervisor, so that's what we received.
|Front of Dell XC630's|
|Back of Dell XC630's. It's a lab, so yeah, it's a mess. STOP LAUGHING!|
Enabling IPv6 link-local is highly recommended because adding new nodes becomes a cakewalk. Prism will automatically detect new ESXi hosts as they're added and introduce them into the cluster. If there's one thing to take away: IPv6 link-local is a must.
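One quick way to sanity-check that IPv6 link-local traffic is actually passing on the segment, from any Linux box patched into the same switch (eth0 is a placeholder interface name):

```shell
# Ping the all-nodes link-local multicast address, scoped to the
# interface facing the switch; every IPv6-enabled neighbor on the
# segment (hosts and CVMs included) should answer.
ping6 -c 3 ff02::1%eth0

# List the link-local neighbors the kernel has learned.
ip -6 neigh show dev eth0
```

If nothing but your own box responds, IPv6 is likely filtered or disabled somewhere on that switch, and node auto-discovery won't work.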
Next, I'll dive into Prism and give my opinion/reflection on that.