VMware: vSphere Scheduled Tasks w/ PowerCLI (not to be confused w/ Windows scheduled tasks)


Summary:
A question was posted in the communities on how to find scheduled tasks configured against a VM.  I remembered doing it long ago, but I never posted about it.  It was also weirdly hard to find via Google, so I'm posting it here for my own reference, or for anyone else who needs it.

Example:
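There's no native cmdlet for vSphere scheduled tasks, so the trick is to go through the ScheduledTaskManager view.  A minimal sketch, assuming an active Connect-VIServer session; the VM name 'MyVM' is a placeholder:

#Requires an active Connect-VIServer session; 'MyVM' is a placeholder VM name
$VM = Get-VM MyVM
$SI = Get-View ServiceInstance
$ScheduledTaskManager = Get-View $SI.Content.ScheduledTaskManager
#Pull only the scheduled tasks configured against this specific VM
$TaskRefs = $ScheduledTaskManager.RetrieveEntityScheduledTask($VM.ExtensionData.MoRef)
Foreach ($TaskRef in $TaskRefs)
{
    (Get-View $TaskRef).Info | Select Name, Description, NextRunTime, LastModifiedUser
}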

VMware: Migrating Management(Mgmt) vmk to DVS/VDS fails when moving both vmnic and vmk at the same time.

Summary:
Quite simple: I had a script to move physical nics to a DVS/VDS w/ the management vmk at the same time.  Typically this works w/o issue, but for some reason it kept failing.  The answer was dead simple...

Resolution/Workaround:
  1. Spanning Tree enabled on the upstream switch?
    • Enable 'portfast' on the switch ports.
  2. Spanning Tree/portfast config not available to you?
    1. Move one physical link at a time (assuming more than one physical link is available).
    2. Wait for the uplink on the DVS to come online, then move the management/mgmt vmk.
Explanation:
Basically, the switch ports that the ESXi servers were uplinked to did not have 'portfast' (a physical switch-side config) enabled.  Without 'portfast', when moving a physical nic from a standard vSwitch to a DVS (or vice versa), the host incurs a negotiation downtime as the switch/host essentially renegotiates connectivity.  It's a short window (5-10 sec) where the port goes 'offline', but it's enough for a simultaneous migration of the vmk and physical nics to fail.

Example PowerCLI Snippet:
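A minimal sketch of the one-link-at-a-time sequence, assuming a host w/ two uplinks (vmnic0/vmnic1), an existing VDS, and a management portgroup already created; the host, switch, and portgroup names are placeholders:

$TVMHost = Get-VMHost esxi01.domain.local
$VDS = Get-VDSwitch DVS01
$MgmtPG = Get-VDPortgroup DVPG-Mgmt -VDSwitch $VDS
#Step 1: Move one physical link to the DVS, leaving vmnic0 + vmk0 on the standard vSwitch
$vmnic1 = Get-VMHostNetworkAdapter -VMHost $TVMHost -Physical -Name vmnic1
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $VDS -VMHostPhysicalNic $vmnic1 -Confirm:$false
#Step 2: Give the switch port time to come online (the STP convergence window)
Start-Sleep -Seconds 30
#Step 3: Move the remaining uplink and the management vmk together
$vmnic0 = Get-VMHostNetworkAdapter -VMHost $TVMHost -Physical -Name vmnic0
$vmk0 = Get-VMHostNetworkAdapter -VMHost $TVMHost -VMKernel -Name vmk0
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $VDS -VMHostPhysicalNic $vmnic0 -VMHostVirtualNic $vmk0 -VirtualNicPortgroup $MgmtPG -Confirm:$false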

VMware: vSAN 6.6 not showing all available disks when attempting to claim...

Summary:
I was going through setting up a new vSAN cluster and noticed that the wizard was only showing 3 of 4 disks from three of the four hosts, and 0 disks from the other host.  This appears to be by design: the setup wizard will only target disks that have 0 partitions.  Makes sense.

This, however, is not obvious in the setup.

Solution:
Simply delete any partitions from the disks that you'd like vSAN to claim.  You can do this en masse via PowerCLI or through the Web Client interface (as pictured below).
[Warning: This is a destructive process, so be absolutely certain that you are targeting the correct storage devices.  This is especially true if you plan to script the process.]


Erase Partition in Web Client
The above process would suck if you were doing it against a large cluster, so learn to do it in PowerShell or some other automated method.

PowerCLI Method:
$TCluster = Get-Cluster TargetClusterName
$TVMHosts = $TCluster | Get-VMHost | Get-View
Foreach ($VMHost in $TVMHosts)
{
    $ConfigManager = Get-View $VMHost.ConfigManager.StorageSystem
    #An empty HostDiskPartitionSpec clears all partitions from a disk
    $Spec = New-Object VMware.Vim.HostDiskPartitionSpec
    #I'm simply targeting all naa devices that report as local disks.
    #Reality is that you'd probably want a more in-depth filter on the devices you target.
    #My case was a new set of hosts, so this worked for me.
    $TargetDisks = $VMHost.Config.StorageDevice.ScsiLun | Where {$_.DevicePath -match "naa\." -and $_.LocalDisk -eq $true}
    Foreach ($Disk in $TargetDisks)
    {
        #Destructive: wipes the partition table on each targeted device
        $ConfigManager.UpdateDiskPartitions($Disk.DevicePath, $Spec)
    }
}
Side Note:
vSAN-claimed disks have a protection mechanism against being erased via the method defined above.  Any partition it runs into that is claimed by vSAN will be met w/ a "Cannot change the host configuration" exception.
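If you're scripting against hosts where some disks are already vSAN-claimed, a minimal sketch of the inner loop from above, wrapped so those disks get skipped instead of killing the run:

Foreach ($Disk in $TargetDisks)
{
    Try
    {
        $ConfigManager.UpdateDiskPartitions($Disk.DevicePath, $Spec)
    }
    Catch
    {
        #vSAN-claimed disks land here w/ "Cannot change the host configuration"
        Write-Warning "Skipping $($Disk.CanonicalName): $($_.Exception.Message)"
    }
}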
If you for some reason need to delete those partitions, then you'll likely have to try this method:

vSAN: Rebuilding an ESXi host that has vSAN claimed disks...

VMware/Security: Opvizor OpBot, cool, but scary too.

I've posted about OpBot in the past w/ a brief overview of how to set it up and deploy it.  It's a very cool and immensely useful tool.  However, I must balance that with security: responsibly deployed, it's very useful, but there is a dark side to it from a security management perspective.  It poses the very real risk of allowing generic internet access from within your datacenter.

First off, Opvizor makes it very clear that you should only grant OpBot's integration account read-only access.  You can run 'destructive' PowerCLI commands by passing login info via Slack, but that's also not recommended.  As much as they have created an immensely useful tool, it is also somewhat of a Pandora's box: it brings to light a security hole that can be difficult to secure at scale.  Currently Opvizor is the only vendor I know of that makes this type of appliance, but that doesn't stop the many possible clones of this type of tech.

Basically, this is a method by which a malicious VMware admin could deploy such an appliance, give it an elevated service account (AD or otherwise), and no one would be any the wiser.  Now to be clear, a VMware admin should never be deploying things into a datacenter w/o a proper change/audit control process.  At the very least, anything deployed should be well documented and known.

NSX helps in this respect w/ micro-segmentation.  Everything placed into service receives a specific policy and can communicate w/ only what it needs.  However, it only helps as far as the security is actually implemented.  If complete outbound internet access is open as a 'standard', then you've effectively given OpBot, or things like it, unfettered access.  The first knee-jerk reaction is likely to block Slack connectivity unless specifically enabled for said purpose.  However, that only covers Slack itself; it does not protect against Slack clones or the like.

Solution?:
It's not super simple, but here are some thoughts (for VMware solutions specifically):
  1. Audit/Change Control over Identity Management System (Active Directory) and whatnot.
    1. Any new service/shared account created should be immensely scrutinized.
    2. Change Auditor is a pretty good tool for this.
  2. Audit/Change Control over "Roles" in vCenter. (Log Insight can help somewhat in this aspect; HyTrust CloudControl would give you a workflow engine in addition to audit capabilities.  A quick PowerCLI audit sketch follows this list.)
    1. Basically, any account granted an 'admin-type' role outside of a peer-reviewed change control system should trigger an alert.
    2. Any new role implemented should also be scrutinized for scope and alerting/monitoring put in place for 'high-risk' type roles.
    3. Any change to role permissions scrutinized as well.
  3. Audit/Change Control over passwords for 'service/shared' accounts. (HyTrust CloudControl includes password vaulting for ESXi hosts.)
    1. Password Repo such as LastPass/1Password/OneIdentity, etc.
    2. No single person or group of people should EVER know service/shared account passwords by memory.
    3. Passwords should be changed based upon audit of password repo access when an employee leaves the company.
      • This would hopefully mitigate a time-consuming process of changing all passwords that said employee may or may not have used.
    4. Password Repo should have complete audit trail as well as alerts for specific types of access.
      1. More advanced, you could use the password repo system to change passwords automatically after a 'manual' checkout scenario.
      2. HyTrust does this for ESXi root passwords automatically.
  4. Network Security/Audit/Change Control (e.g. Palo Alto App-ID security)
    1. Subscribe to the mantra of trust nothing in or out.
    2. Peer Review all changes.
    3. Access to vCenter via NSX security policies audit/change workflow.
      1. Anything allowed access to vCenter should be audited.
    4. Palo Alto firewalls can add an extra layer of heuristics-type security, using something like App-ID to block anything not explicitly defined as allowable, beyond just ports.
Minimally, HyTrust CloudControl could mitigate a large amount of the risk of a Slack-type bot by using its workflow engine; however, none of this really matters if you don't have a proper process behind it.  It also doesn't replace proper Identity Management controls.
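As a starting point for the role auditing in item 2, a minimal PowerCLI sketch, assuming an active Connect-VIServer session and that you care about the built-in 'Admin' role (swap in whatever admin-type roles you've defined):

#Enumerate every permission granting the 'Admin' role and where it applies
Get-VIPermission | Where {$_.Role -eq "Admin"} | Select Principal, Role, Entity, Propagate | Format-Table -AutoSize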

Bottom Line:
This is a trust problem, and it's why security, auditing, and change control processes are essential.  It's not a matter of simply disallowing useful tools, such as OpBot, for the sake of security.  It's about being smart and 'knowing' what's happening in your environment, so you can implement productive tools that move the business forward while staying secure and safe.

Visual Aid:

VMware: Invalid Configuration for device # when deploying OVF/OVA...

Summary:
Ran into this message when attempting to import an OVF/OVA to vCenter via the Web Client from a Mac.  Not all OVAs/OVFs have this issue.

Workaround(s):
  • Upload and deploy from a Windows system
    • OR
  • Upload and deploy to a local datastore if available.
    • OR
  • Use OVFTool to deploy
    • Example:
      • ovftool -ds=NameofTargetDatastore -n=NameYouWantVMtoBe --acceptAllEulas --net:bridged=NameofDVSorStdPortGroupYouWantVMattachedTo C:\Path\Turbonomic.ova vi://username%40mysubdomain.myrootdomain.suffix@vCenterNameorIP/virtualDatacenterName/host/ClusterName
        • %40 translates the @ symbol for OVFTool if you need to authenticate using a standard AD UPN or SSO domain user.
        • If Linux/Mac, replace C:\Path\Turbonomic.ova with /Path/Your.ova
        • The --net:bridged switch is optional and can also differ depending on how the OVF has that parameter defined.
        • Targeting the cluster assumes DRS is enabled; if DRS is not available, go one level further down and put the hostname after the cluster name.
    • OR
  • Use Import-vApp cmdlet from PowerCLI
    • Example:
      • $OVAConfig = Get-OVFConfiguration C:\Path\Turbonomic.ova
        • $OVAConfig.NetworkMapping.NAT.Config = "NameofVMPortGroup"
          • This particular setting is VERY specific to the Turbonomic OVA.  Other OVAs may have several other configurations/properties you need to provide.
      • $TargetCluster = Get-Cluster NameofCluster
      • Import-vApp -Source C:\Path\Turbonomic.ova -VMHost ($TargetCluster | Get-VMHost | Select -First 1) -Datastore ($TargetCluster | Get-Datastore NameofDatastoreYouWant) -Name NameYouWantRegistered -OVFConfiguration $OVAConfig
        • This cmdlet requires a VMHost target; this example shows how you can target a cluster and have it deploy to the first host in the cluster.
        • The -Datastore parameter demonstrates how you can target, by name, a datastore that belongs to the cluster you're deploying to.
        • -Name is the name you want the VM registered as.
        • -OVFConfiguration passes the configuration object populated above.
Details:
Specifically, I ran into this deploying to a datastore backed by an FC array.  It ONLY fails when attempting to deploy from macOS to an FC-backed datastore using the web client.  Targeting a locally backed datastore worked fine, and I could deploy just fine from a Windows system to that same FC-backed datastore.  Seems to be a bug w/ the Mac VMware Client Integration Plugin, at least w/ the 6.0 version.