Following the vCenter Operations Manager 5.8.1 installation and deployment guide leads you to a notice that in order for the deployment of the vApp to work properly you must create an IP Pool and associate it to the portgroup where the vApp is to be connected. IP Pools are created at the Datacenter level. After creating the pool and deploying the app I was all set to power up the vApp. At power on an error was returned:
Cannot initialize property ‘vami.netmask0.VM_1’. Network has no associated network protocol profile.
Googling this error leads to a few places where it’s suggested the issue is that you did not create an IP Pool. The problem was I did in fact create the pool. The issue turned out to be that we have multiple dvSwitches for different clusters with identically named portgroups. Even though I triple-checked that the correct portgroup where the vApp was located did indeed have an IP Pool associated, this did not rectify the error. The fix was to go back to the IP Pool configuration section, right-click the pool, and edit its properties. Once inside, go to the Associations tab and select all portgroups that share the name.
Another quick fact about this IP Pool: you do not need to select Enable IP Pool inside the pool settings. That checkbox is only necessary if you intend to specify a range of IPs.
Just a few reminders for those looking to upgrade to ESXi 5.5 U1 from anything that is not 5.5. Keep in mind that with this version VMware removed drivers for devices that are not on the HCL. This includes a few NICs, like Realtek and Marvell, and possibly a few SATA controllers. To avoid that disaster, the best way to accomplish the upgrade is with the esxcli profile update command. Details to follow soon!
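As a rough sketch of what that looks like: `esxcli software profile update` upgrades only the VIBs that exist in the target image profile, leaving extra packages (such as community NIC drivers) in place, unlike `profile install`, which wipes anything not in the profile. The depot URL and profile name below are examples, assuming the host has HTTP access to VMware's online depot; list the available profiles first and substitute your own.

```shell
# Put the host in maintenance mode first, then run from the ESXi shell.
# Allow outbound HTTP from the host (assumed firewall state):
esxcli network firewall ruleset set -e true -r httpClient

# List the image profiles available in the depot (URL is VMware's public depot):
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

# "update" (not "install") keeps VIBs that are absent from the profile.
# The profile name here is an example -- use one from the list above:
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-5.5.0-20140302001-standard
```

Reboot the host afterward and verify the build number before taking it out of maintenance mode.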
A new VM was deployed from a template but the VM guest customizations would not complete. The error that was received was:
LaunchDll:Could not load DLL C:\Windows\system32\iesysprep.dll
The template was built on Windows Server 2008 x64. Before the VM was made into a template, IE9 had been installed and then uninstalled to downgrade back to IE7. It appears that upgrading IE adds extra sysprep steps for IE, and removing IE9 does not remove those steps, so when sysprep goes to call the .dll it is no longer present.
The fix was to remove the additional sysprep steps from the registry. Under each of the sysprep keys – Cleanup, Generalize, Specialize – delete any value that references iesysprep.dll.
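As a hedged sketch from an elevated command prompt on the template VM: on Windows Server 2008 the sysprep step lists are assumed to live under the `Setup\Sysprep` key shown below. Query first and verify each value actually points at the uninstalled IE DLL before deleting anything.

```shell
:: Assumed base path for the sysprep Cleanup/Generalize/Specialize lists
:: (verify this on your own template before deleting anything).
set BASE=HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\Sysprep

:: Show any values that still reference the uninstalled IE9 DLL
for %K in (Cleanup Generalize Specialize) do reg query %BASE%\%K /s | findstr /i iesysprep.dll

:: Once a stale value is identified, delete it by name, e.g.:
:: reg delete %BASE%\Cleanup /v "<name-of-stale-value>" /f
```

Take a snapshot of the template VM before editing the registry so the change can be rolled back.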
Here are the steps I took to move the vCenter VM from one cluster to a new cluster with new storage. This new cluster also had hosts which were of a different CPU type.
1) Remove any snapshots from the vCenter VM.
2) Clone vCenter server to new cluster and/or storage.
3) Configure new vCenter clone with proper hardware, network, and switch settings.
4) Make note of which ESXi hosts each vCenter server resides on.
5) Power off the source vCenter Server – I issued this command while still in the vSphere client, then disconnected from vCenter and connected directly to the ESXi host that vCenter resides on so I could monitor the shutdown. Monitoring the shutdown is optional.
6) Connect to ESXi host where the vCenter clone resides.
7) Power on vCenter clone.
8) Connect to the console of the new vCenter clone – Log on to the machine; you will most likely need to configure the network settings and/or reboot the machine once more after the new hardware is detected (if the host machine is a different CPU architecture).
9) Verify the new vCenter can connect with the vCenter DB and verify you can connect with either the vSphere client or vSphere Web client.
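Since vCenter itself is down during steps 6–8, the clone has to be powered on directly against the host. Besides the vSphere client, this can also be done from the ESXi shell over SSH; a minimal sketch, assuming the clone's display name contains "vcenter" (the Vmid 42 below is an example taken from the listing):

```shell
# List registered VMs on this host and note the Vmid of the vCenter clone
vim-cmd vmsvc/getallvms | grep -i vcenter

# Power it on by Vmid (42 is an example), then check its power state
vim-cmd vmsvc/power.on 42
vim-cmd vmsvc/power.getstate 42
```

The same `vim-cmd vmsvc/power.getstate` call is a handy way to confirm the old vCenter VM actually completed its shutdown in step 5.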
Once the new vCenter clone finished configuring its new hardware, it was up and running without any issues. We left the old vCenter machine around for a few days before removing it.
I was always leery about what constitutes management traffic when going through a VMkernel port. Well, this post from Duncan Epping over at Yellow Bricks pretty much sums it up:
The feature described as “Management traffic” does nothing more than enabling that VMkernel NIC for HA heartbeat traffic.
Much clearer when you put it like that. The vCenter Server Best Practices for Networking touches on this, but it’s worded differently:
On ESXi hosts in the cluster, vSphere HA communications, by default, travel over VMkernel networks, except those marked for use with vMotion. If there is only one VMkernel network, vSphere HA shares it with vMotion, if necessary. With ESXi 4.x and ESXi, you must also explicitly enable the Management traffic checkbox for vSphere HA to use this network.