NSX: Differences between NSX-V (NSX for vSphere) and NSX-T (NSX Data Center)


While there are a lot of differences between the two under the hood, the basic setup remains largely the same.  However, know that the Manager and Controllers are now combined: instead of a separate Manager plus Controllers, you now have a manager/controller cluster, so basically 3 VMs instead of 4.  The biggest differences come into play when you begin deploying components (N-VDS, logical switches, routers, etc.) after these steps.

With both, you deploy the NSX Manager/Controller first.  NSX-V was OVA only; the NSX-T Manager is available as both an OVA and a qcow2 (for KVM) appliance.  This post focuses on vSphere with vCenter available.

After the manager/controller node is deployed, things start to diverge significantly.

NSX-T:
  1. Add "Compute Manager" (as of version 2.3)
    1. vCenter 6.5+
      • Adding vCenter reduces the management overhead of adding new hosts, since you've now effectively delegated that job to vCenter (see the API sketch after this list).
      • With NSX-V this process was very similar, except that NSX-V was tightly coupled to a single vCenter and its UI was only manageable from the vSphere Web Client.
      • NSX-T can register multiple vCenters as Compute Managers.
  2. Deploy Controllers via NSX-T Manager to target vCenter Hosts/Clusters/RPs (as of 2.3)
    1. Just like NSX-V, you can have the Manager deploy the controllers.
      • Unlike NSX-V, your initial 'manager' is also a controller.  
      • Unlike NSX-V, you do this through the Manager UI rather than the vSphere Web Client.
      • Unlike NSX-V, you configure each individually in the wizard (or API), rather than providing an IP pool.
      • Just like NSX-V, you'll have to create DRS anti-affinity rules to keep controllers on separate hosts IF you deploy all of them to the same cluster (see the pyvmomi sketch below).
  3. You DON'T have to deploy NSX-T controller nodes to a vCenter.  They can be deployed anywhere, but deploying them to vCenter via the Manager UI/API does make things simpler.
    • vCenter as the target is currently the only out-of-the-box 'automated' way to deploy controllers.
    • The KVM qcow2 appliances require manual intervention.
    • A VIP can be applied to the manager/controller cluster (built-in or external load balancer supported).
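
For step 1 above, registering vCenter as a Compute Manager boils down to a single REST call against the NSX-T Manager.  The snippet below is a minimal sketch using Python and the requests library; the manager address, credentials, and thumbprint are placeholders, and the payload fields reflect my reading of the NSX-T 2.x API guide, so verify them against the API documentation for your version.

# Minimal sketch: register a vCenter as a Compute Manager in NSX-T.
# The manager FQDN, credentials, and vCenter thumbprint are placeholders;
# check the payload fields against the NSX-T API guide for your version.
import requests

NSX_MANAGER = "nsxmgr.lab.local"           # assumed NSX-T Manager FQDN
NSX_AUTH = ("admin", "NSX-Admin-Pass")     # assumed admin credentials

payload = {
    "server": "vcenter.lab.local",         # vCenter to register
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VC-Admin-Pass",
        # SHA-256 thumbprint of the vCenter certificate
        "thumbprint": "AA:BB:CC:..."
    }
}

resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/fabric/compute-managers",
    json=payload,
    auth=NSX_AUTH,
    verify=False,   # lab only; validate the certificate properly in production
)
resp.raise_for_status()
print("Registered Compute Manager:", resp.json().get("id"))
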
The key point: you could deploy NSX-T completely stand-alone without any tie to a vCenter.  Deploying to a vSphere environment does have the advantage of being 'easier'.
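
On the DRS point from step 2: if you do drop all of the controller VMs into one cluster, an anti-affinity rule is what keeps them on separate hosts.  Below is a rough pyvmomi sketch; the vCenter details, cluster name, and controller VM names are all assumed for illustration, and PowerCLI's New-DrsRule accomplishes the same thing with less code.

# Rough sketch: create a DRS anti-affinity ("separate virtual machines") rule
# for the NSX-T controller VMs.  VM names, cluster name, and vCenter details
# below are assumptions for the example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VC-Admin-Pass",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the inventory for the first object of the given type and name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "MgmtCluster")
controllers = [find_by_name(vim.VirtualMachine, n)
               for n in ("nsx-controller-1", "nsx-controller-2", "nsx-controller-3")]

# Anti-affinity rule: keep the three controllers on different ESXi hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name="nsx-controllers-separate",
                                        enabled=True, vm=controllers)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)
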

Requirements for NSX-T ESXi hosts:
  1. Your ESXi hosts (as of 6.7) should have a minimum of 2 physical NICs
    1. Initial setup requires that your initial vmkernels (mgmt, vSAN, vMotion, etc.) be set up on a standard vSwitch or VDS first, with one physical NIC allocated to it.
    2. A second physical NIC will need to be allocated to an N-VDS.
      • The N-VDS is currently only recognized/seen by NSX-T
        • You won't see an object in vCenter inventory related to the N-VDS (see the API sketch after this list).
        • Although when looking at the physical NIC, you do see that it is assigned to an N-VDS.
          • *As of the 6.7U1 HTML5 Web Client, you will now see it under virtual switches on the host.
    3. Once your NSX-T environment is configured, you can then migrate your vmkernels to NSX-T Logical Switches by calling the Manager APIs or through the GUI.
    4. Once the vmkernels are migrated, you can then allocate your initial physical NIC to your N-VDS as well to make the host redundant.
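
Since the N-VDS doesn't show up as a first-class object in vCenter inventory, the Manager API is the easiest place to inspect it.  The sketch below (same assumed manager address and credentials as the earlier example) lists transport nodes and the host switch names configured on each; the exact layout of the transport node payload varies between NSX-T releases, so treat the field access as illustrative rather than definitive.

# Sketch: list transport nodes and the N-VDS (host switch) names configured
# on each, via the NSX-T Manager API.  Manager address and credentials are
# assumed; host switch fields differ slightly between NSX-T releases.
import requests

NSX_MANAGER = "nsxmgr.lab.local"
NSX_AUTH = ("admin", "NSX-Admin-Pass")

resp = requests.get(f"https://{NSX_MANAGER}/api/v1/transport-nodes",
                    auth=NSX_AUTH, verify=False)   # lab only
resp.raise_for_status()

for node in resp.json().get("results", []):
    switches = (node.get("host_switch_spec", {}).get("host_switches")
                or node.get("host_switches", []))
    names = [sw.get("host_switch_name", "<unnamed>") for sw in switches]
    print(node.get("display_name"), "->", ", ".join(names) or "no N-VDS")
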
At this point, I'd still recommend 4 physical NICs, keeping your underlying vmkernel interfaces on standard or VDS switches.  Quite simply, it's still fairly new, and a consolidated design becomes a much more complex system to manage.  A lab setup should be fine, but I'd caution against going full production without proper testing and training on a consolidated N-VDS configuration.

Like this article?  Post comments/questions if you'd like to know more.
