VMware: Migrating the Management (Mgmt) vmk to a DVS/VDS fails when moving the vmnic and vmk at the same time.
Summary:
Quite simple: I had a script that moved the physical NICs to the DVS/VDS along with the management vmk at the same time. Typically this works without issue, but for some reason it kept failing. The answer was dead simple...
Resolution/Workaround:
- Spanning Tree enabled and the switch-side config can be changed?
  - Enable portfast on the switch ports the ESXi uplinks connect to.
- Or
- Portfast/switch-side changes not an option?
  - Move one physical link at a time (assuming more than one physical link is available).
  - Wait for the uplink on the DVS to come online, then move the management (mgmt) vmk.
Basically, the switch ports that the ESXi servers were uplinked to did not have 'portfast' (physical switch-side config) enabled. Without 'portfast', when moving a physical NIC from a standard vSwitch to the DVS (or vice versa), the host incurs a negotiation downtime while the switch and host essentially renegotiate connectivity. It's a short window (5-10 seconds) during which the port goes 'offline', but it's enough for a simultaneous migration of the vmk and physical NICs to fail.
Example PowerCLI Snippet:
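A minimal sketch of the one-link-at-a-time approach. The host, VDS, and portgroup names (esx01.lab.local, dvs-lab, dvpg-Management) and the vmnic0/vmnic1 layout are hypothetical; adjust for your environment. The first NIC is moved on its own, and only after the VDS has a live uplink are the remaining NIC and the management vmk migrated together.

# Assumes an existing Connect-VIServer session and the VMware.PowerCLI module.
$vmhost = Get-VMHost -Name 'esx01.lab.local'
$vds    = Get-VDSwitch -Name 'dvs-lab'
$pg     = Get-VDPortgroup -VDSwitch $vds -Name 'dvpg-Management'

# Step 1: move only the first physical NIC. The standard vSwitch keeps vmnic0
# and the management vmk, so the host stays reachable while this port renegotiates.
$vmnic1 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name 'vmnic1'
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic1 -Confirm:$false

# Step 2: give the physical port time to come back online (or poll the uplink state).
Start-Sleep -Seconds 30

# Step 3: with a live uplink already on the VDS, migrate the remaining NIC and the
# management vmk together; traffic rides on vmnic1 while vmnic0 renegotiates.
$vmnic0 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name 'vmnic0'
$vmk0   = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name 'vmk0'
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic0 `
  -VMHostVirtualNic $vmk0 -VirtualNicPortgroup $pg -Confirm:$false

If portfast is enabled on the switch ports instead, the original combined migration (all NICs plus the vmk in one call) generally works without the intermediate wait.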