NSX: Differences Between NSX-V (NSX for vSphere) and NSX-T (NSX Data Center)
While there are a lot of differences between the two under the hood, the basic setup remains largely the same. Know, however, that the Manager and Controllers are now combined: you have a manager/controller cluster instead, so three VMs rather than four (one NSX-V Manager plus three controllers). The biggest differences come into play when you begin deploying components (N-VDS, logical switches, routers, etc.) after these steps.
With both, you start by deploying the NSX Manager/Controller. NSX-V was OVA-only; the NSX-T Manager ships as both an OVA and a qcow2 (for KVM) appliance. I'm going to focus on vSphere with vCenter available.
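The OVA deployment can be scripted with ovftool. As a sketch, here's one way to build that command line; the `nsx_*` property names follow the NSX-T 2.x unified appliance OVA, but you should verify them by probing your OVA with ovftool first, and every hostname, address, and path below is a placeholder:

```python
# Sketch: build an ovftool command line for the NSX-T manager OVA.
# Property names are from the NSX-T 2.x unified appliance; probe your
# OVA with ovftool to confirm. All values are lab placeholders.
import shlex

props = {
    "nsx_role": "nsx-manager",           # 2.x OVA role selector (placeholder)
    "nsx_hostname": "nsxmgr01.lab.local",
    "nsx_ip_0": "192.168.1.10",
    "nsx_netmask_0": "255.255.255.0",
    "nsx_gateway_0": "192.168.1.1",
    "nsx_dns1_0": "192.168.1.2",
    "nsx_ntp_0": "192.168.1.2",
    "nsx_passwd_0": "VMware1!VMware1!",  # admin password (placeholder)
}

cmd = ["ovftool", "--acceptAllEulas", "--datastore=datastore1",
       "--network=Management", "--name=nsxmgr01"]
cmd += [f"--prop:{k}={v}" for k, v in props.items()]
cmd += ["nsx-unified-appliance.ova",
        "vi://administrator@vsphere.local@vcenter.lab.local/DC/host/Cluster"]

print(" ".join(shlex.quote(a) for a in cmd))
```

The qcow2/KVM route has no equivalent one-liner, which is part of why it stays a manual process.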
After the manager/controller node is deployed is where things diverge significantly.
- Adding vCenter reduces the management overhead of adding new hosts, since you've effectively delegated that job to vCenter.
- With NSX-V this process is very similar, except that NSX-V was tightly coupled to a single vCenter and manageable only through the vSphere Web Client UI.
- NSX-T can register multiple vCenters as Compute Managers.
- Unlike NSX-V, your initial 'manager' is also a controller.
- Unlike NSX-V, you do this through the Manager UI vs vSphere Web Client.
- Unlike NSX-V, you configure each individually in the wizard (or API), rather than provide an IP pool.
- Just like NSX-V, you'll have to create DRS anti-affinity rules to keep controllers on separate hosts IF you deploy them all to the same cluster.
- vCenter as the target is currently the only out-of-the-box 'automated' way to deploy controllers.
- KVM qcow2 appliances require manual intervention.
- VIP can be applied to manager/controller cluster. (Built-in or external LB supported)
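Since the Manager UI is backed by a REST API, registering a vCenter as a Compute Manager can be done programmatically as well (`POST /api/v1/fabric/compute-managers` in the 2.x API line). A minimal sketch of the request body follows; the server address, credentials, and thumbprint are placeholders, and the field names should be double-checked against the API reference for your version:

```python
# Sketch: request body for registering a vCenter as a Compute Manager.
# Endpoint (NSX-T 2.x): POST /api/v1/fabric/compute-managers
# All values are placeholders; confirm field names against your API docs.
import json

payload = {
    "server": "vcenter.lab.local",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        # SHA-256 thumbprint of the vCenter certificate (placeholder)
        "thumbprint": "AA:BB:CC:DD:EE:FF",
    },
}

print(json.dumps(payload, indent=2))
```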
The key point: you could deploy NSX-T completely stand-alone, without any tie to a vCenter. Deploying to a vSphere environment does have the advantage of being 'easier'.
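The cluster VIP mentioned above is likewise a single API operation in the 2.x line (`POST /api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=<vip>`). A sketch of building that request with the standard library; the manager FQDN and VIP address are placeholders, and nothing is actually sent here:

```python
# Sketch: build the request that assigns a VIP to the manager cluster.
# Endpoint (NSX-T 2.x):
#   POST /api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=<vip>
# Manager FQDN and VIP are placeholders; the request is not executed.
from urllib.parse import urlencode

manager = "nsxmgr01.lab.local"
vip = "192.168.1.20"

query = urlencode({"action": "set_virtual_ip", "ip_address": vip})
url = f"https://{manager}/api/v1/cluster/api-virtual-ip?{query}"

print("POST", url)
```

An external load balancer in front of the managers is the alternative when you need actual traffic distribution rather than a simple failover address.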
- Your ESXi hosts (as of 6.7) should have a minimum of 2 physical NICs.
- Initial setup requires that your initial vmkernel interfaces (mgmt, vSAN, vMotion, etc.) be set up on a standard vSwitch or VDS first, with one physical NIC allocated to it.
- A second physical NIC will need to be allocated to an N-VDS.
- The N-VDS is currently only recognized/seen by NSX-T.
- You won't see an object in the vCenter inventory related to the N-VDS.
- Although, when looking at the physical NIC, you can see that it is assigned to an N-VDS.
- *As of the 6.7U1 HTML5 client, you will now see it under Virtual Switches on the host.
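That pNIC-to-N-VDS assignment happens when you prepare the host as a transport node (`POST /api/v1/transport-nodes` in the 2.x API). Below is a trimmed sketch of just the host-switch portion of that spec; the switch name, uplink name, and vmnic are placeholders, and a real spec needs more fields (node ID, transport zone endpoints, uplink profile, IP assignment):

```python
# Sketch: host-switch portion of an NSX-T transport node spec, mapping
# the second physical NIC (vmnic1) to an N-VDS uplink.
# Endpoint (NSX-T 2.x): POST /api/v1/transport-nodes
# Names are placeholders; the full spec requires additional fields.
import json

host_switch_spec = {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [
        {
            "host_switch_name": "nvds-overlay",
            # second pNIC dedicated to the N-VDS, per the notes above;
            # vmnic0 stays on the standard vSwitch/VDS for vmkernels
            "pnics": [
                {"device_name": "vmnic1", "uplink_name": "uplink-1"},
            ],
        }
    ],
}

print(json.dumps(host_switch_spec, indent=2))
```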
At this point, I'd still recommend 4 physical NICs, keeping your underlying vmkernel interfaces on standard or VDS switches. Quite simply, NSX-T is still fairly new, and a consolidated design becomes a much more complex system to manage. A lab setup should be fine, but I'd caution against going full production w/o proper testing and training on a consolidated N-VDS configuration.
https://www.vmware.com/support/nsxt/doc/nsxt_22_api.html <-- Outdated API Reference
Like this article? Post comments/questions if you'd like to know more.