Mac: Background flashes, Dock not showing up...

Not sure what caused this, as I hadn't done anything unusual w/ my Mac for a while.  Long story short, a preference in my user profile was causing the issue.

In particular, I thought it was my Dock preferences, but I ended up deleting everything under my user's ~/Library/Preferences to get my account back and working properly.  You should only have to do the following:

Delete ~/Library/Application Support/Dock
Delete ~/Library/Preferences/com.apple.dock.plist
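A minimal Terminal sketch of those two deletions (destructive: it wipes your saved Dock layout, which the Dock then rebuilds w/ defaults when it restarts):

```shell
# Where the user's preference files live
LIB_DIR="$HOME/Library"

# Remove the Dock's saved application state and its preference plist;
# the Dock re-creates both with defaults the next time it launches
rm -rf "$LIB_DIR/Application Support/Dock"
rm -f  "$LIB_DIR/Preferences/com.apple.dock.plist"

# Restart the Dock so it re-reads the (now default) preferences;
# harmless if no Dock process is running
killall Dock 2>/dev/null || true
```

If the Dock doesn't come back cleanly on its own, log out and back in.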

Here is a screenshot-by-screenshot walkthrough of how to do this:

Interview w/ William Lam on AutoTrader.com Mac Mini vSAN (MacCloud)

Enjoyed my conversation w/ William talking about my ghetto MacCloud setup.  You can read the interview here:
http://www.virtuallyghetto.com/2014/08/community-stories-of-vmware-apple-os-x-in-production-part-4.html

Some things people have asked me to clarify:

  1. We are using Mac Mini 'Server' versions.  These have two drives by default.
  2. You can get a kit to add a second drive to the standard Mac Mini version.
  3. Mgmt and VM traffic flows over Standard vSwitch0 (onboard NIC as uplink)
  4. vSAN and vMotion traffic flows over a dVS (Thunderbolt NIC as uplink)
  5. Each host has the onboard 1Gb adapter plus a Thunderbolt 1Gb adapter
  6. ESXi boots from a USB thumb drive plugged into the back of the Mac Mini.
  7. vCenter is a vCSA built and running on another vCenter instance.
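A rough esxcli sketch of the vmkernel side of that layout.  The names here are assumptions (vmnic0 for the onboard NIC, vmk1/vmk2 for the vSAN and vMotion vmkernel ports), and the dVS itself plus its port groups still have to be created in vCenter first:

```shell
# Standard vSwitch0 carries management + VM traffic over the onboard NIC
# (vmnic0 is an assumed name for the onboard 1Gb adapter)
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic0

# vSAN and vMotion ride vmkernel ports on the dVS (Thunderbolt uplink);
# once those vmk ports exist, tag them per host:
esxcli vsan network ipv4 add -i vmk1                      # vmk1 -> vSAN traffic
esxcli network ip interface tag add -i vmk2 -t VMotion    # vmk2 -> vMotion
```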
 Here is what it looks like configured:


Converged Networking Perils...

Summary:
Had a wonderful experience where a P2V VM w/ bonded NICs brought down several of our ESXi hosts.  HA compounded the problem by powering the VM up on other hosts once the host running it went down.  These are the perils of converged networking, and why it's important to keep your ESXi management/storage traffic on physical ports separate from everything else.  If these had been physically separate, the problem would have been isolated to one host, preventing the cascading HA events.

Here is the config in short:
Dell Blade two nPar'd 10Gb ports --> Internal Dell I/O aggregator ports --> External Dell I/O aggregator ports --> Nexus 5K

Management, vMotion, NFS, AND VM traffic go over these two ports.

One port goes over Fabric A, the other over Fabric B.  Two physically separate uplinks.

What happened:
VM w/ bonded NICs comes online.  This seemed to cause a spanning-tree-like event, which put the internal Dell I/O aggregator ports into an error-disable-like state.  I say 'like' because neither of these functions exists on the Dell IOAs.

Looking @ the Dell I/O aggregator internal ports attached to the blade, we saw something like this:

  Port     Description  Status  Speed       Duplex  Vlan
  Te 1/12               Up      10000 Mbit  Full    --
The normal state should show a list of the available VLANs, not just dashes, like so:

  Port     Description  Status  Speed       Duplex  Vlan
  Te 1/12               Up      10000 Mbit  Full    1,31,42,69

Workaround:
We're waiting on Dell to determine why the IOA reacted as it did.  In the meantime, we've moved management, NFS, and vMotion traffic to Fabric B while leaving VM networking running over Fabric A.
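On a standard vSwitch, that split can be made per port group by overriding the failover order.  A hedged sketch only: it assumes vmnic0 rides Fabric A and vmnic1 rides Fabric B, and the port group names ("Management Network", "NFS", "vMotion", "VM Network") are all hypothetical:

```shell
# Pin the vmkernel port groups (Mgmt/NFS/vMotion) to Fabric B;
# vmnic0 is deliberately left out of the list so the fabrics stay isolated
for pg in "Management Network" "NFS" "vMotion"; do
  esxcli network vswitch standard portgroup policy failover set \
      -p "$pg" -a vmnic1
done

# Pin VM networking to Fabric A, likewise with no fallback to Fabric B
esxcli network vswitch standard portgroup policy failover set \
    -p "VM Network" -a vmnic0
```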

This way, should this ugly issue rear its head again, the ESXi hosts and the VMs will keep running; only the VMs' network connectivity would be cut.

Below is a quick snippet I wrote up to reconnect several VMs' network connections after the issue above.

Script snippet to reconnect several VMs:

# Grab every VM in the cluster
$ClusterVMs = Get-Cluster MyClusterName | Get-VM

# Find powered-on VMs whose network adapters show as disconnected
$Problems = $ClusterVMs | Where-Object {$_.PowerState -eq "PoweredOn"} |
    Get-NetworkAdapter | Where-Object {$_.ConnectionState.Connected -ne $true}

# Reconnect each adapter (-Confirm:$false skips the per-VM prompt)
$Problems | Set-NetworkAdapter -Connected:$true -Confirm:$false

#freeITBM VMware ITBM Free? (Opinion)

So lately there has been more discussion around the office about whether we should move workloads to the 'cloud', AWS being the obvious 800 lb gorilla.  I recently attended an AWS Essentials training and came out of it really impressed w/ their offering.  So much so that I thought, 'Yeah, it might be time to diversify out of my VMware-only mindset.'

That being said, cost is a huge factor.  Not to mention security and a slew of other things, but we'll keep cost as the topic.  How in the world do you calculate cost?  VMware had Chargeback, but that tool was a pain and, quite frankly, useless.  Now they have ITBM, which is a very simplified tool @ its core but has some pretty impressive capabilities.

Amazon has a calculator, but honestly I feel it is more than likely skewed in favor of AWS.  So this leads me to the idea that VMware should take the "Progressive" approach: compare our prices to our competitors' and choose what's best for you, using actual data.  ITBM Standard should be free and open, even for the vCHS service.

I can only see this benefiting VMware's image as a transparent entity in the cloud wars, one that helps businesses make the most cost-effective decision, even if that decision isn't VMware.  Also, giving this tool to the already entrenched VMware administrators/engineers/vExperts @ no cost can only empower them to show the business how cost-effective VMware is.

If you agree, make your thoughts known.

Twitter HashTag: #FREEITBM