
Saturday, April 15, 2017

Upgrading to vSphere 6.5 with NSX already installed

This has been a slow journey: I have so many different moving parts in my lab environment (all the better for testing myriad VMware products) that migrating to vSphere 6.5 was taking forever. First I had to wait for Veeam Backup & Replication to support it (can't live without backups!), then NSX, then I had to decide whether to discard vCloud Director (yes, I'm still using it; it's still a great multitenancy solution) or get my company to give me access to their Service Provider version...

I finally (finally! after over a year of waiting and waiting) got access to the SP version of vCD, so it was time to plan my upgrade...

My environment supports v6.5 from the hardware side; no ancient NICs or other hardware anymore. I was already running Horizon 7, so I had two major systems to upgrade prior to moving vSphere from 6.0U2 to 6.5a:

  • vCloud Director: 5.5.5-->8.0.2-->8.20.0 (two-step upgrade required)
  • NSX: 6.2.2-->6.3.1
There was one hiccup with those upgrades, and I'm sure it will be familiar to people with small labs: the NSX VIBs didn't install without "manual assistance." In short, I had to manually place each host into maintenance mode, kick off the "Reinstall" to push the VIBs into the boot bank, then restart the host. This wouldn't happen in a larger production cluster, but because mine is a 3-node VSAN cluster, it doesn't automatically/cleanly go into Maintenance Mode.
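
For anyone who hits the same thing, the per-host dance looked roughly like this (a sketch only; the "Reinstall" itself is kicked off from the Host Preparation tab in NSX Manager, and on VSAN you'll want to pick an appropriate data evacuation mode):

    # Manually enter maintenance mode from the host's shell
    esxcli system maintenanceMode set --enable true
    # ...trigger "Reinstall" from NSX Host Preparation, let the host reboot...
    # After the reboot, confirm the NSX VIBs actually landed
    esxcli software vib list | grep -E 'esx-vsip|esx-vxlan'
    # Exit maintenance mode once Host Preparation shows green
    esxcli system maintenanceMode set --enable false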

Moving on...

Some time ago, I switched from an embedded PSC to an external, so I upgraded that first. No problems.

Upgrading the stand-alone vCenter required a couple of tweaks: I uninstalled Update Manager from its server (instead of running the Migration Assistant; I didn't have anything worth saving), and I reset the console password for the appliance (yes, I'd missed turning off the expiration, and I guess it had expired). Other than those items? Smooth sailing.
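
If you hit the same expired-password snag, the reset is the standard Linux routine from the appliance console (a sketch; assumes you can still reach a root shell):

    # Check the current password-aging policy for root
    chage -l root
    # Set a new password
    passwd root
    # Optionally disable expiration going forward (a lab-only convenience)
    chage -M -1 root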

With a new vCenter in place, I could use the embedded Update Manager to upgrade the hosts. I had to tweak some of the 3rd-party drivers to make the image compatible, but then I was "off to the races."
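
If you're wondering what that tweaking looks like, it's mostly spotting and removing the offending VIBs before remediation (a sketch; the driver name below is just an example, not necessarily one from my hosts):

    # List non-VMware VIBs to spot third-party drivers
    esxcli software vib list | grep -iv VMware
    # Remove an incompatible driver VIB (example name shown)
    esxcli software vib remove -n net-r8168
    # Reboot the host for the removal to take effect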

After the first host was upgraded, I'd planned on migrating some low-priority VMs to it in order to "burn in" the new host and see if some additional steps would be needed (i.e., removing VIBs for unneeded drivers that have caused PSODs in other environments I've upgraded). But I couldn't.

Trying to vMotion running machines to the new host, I encountered network errors. "VM requires Network X which is not available". Uh oh.

I also discovered that one of the two DVSes (Distributed Virtual Switches) on the host was "out of sync" with vCenter. And there was no "resync" option where one normally would have been...

Honestly, I flailed around a bit, trying my Google-fu and experimenting with moving VMs around, both powered on and powered off, as well as migrating them to different vSwitch portgroups. All of it failed.

Finally, something inspired me to look at my VXLAN status; it came to me after realizing I couldn't ping the vmknic for the VTEPs: they sit on a completely independent TCP/IP stack, so a plain vmkping can't use a VTEP as its source interface.
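
The silver lining is that once you know the VTEPs live on their own netstack, you can test them directly; vmkping accepts a netstack argument for exactly this (a sketch; the vmknic name and IPs are placeholders):

    # Confirm the vxlan netstack exists and find the VTEP vmknics
    esxcli network ip netstack list
    esxcli network ip interface list
    # Ping a remote VTEP from the local VTEP vmknic
    vmkping ++netstack=vxlan -I vmk3 192.168.100.12
    # Add -d -s 1572 to check MTU/jumbo frames on the transport network
    vmkping ++netstack=vxlan -I vmk3 -d -s 1572 192.168.100.12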

Bingo!

The command esxcli network vswitch dvs vmware vxlan list returned no data for that host, but valid config information for the other hosts.

A quick look at NSX Host Preparation confirmed it, and checking the VIBs on the host nailed it down: esx-vsip and esx-vxlan were still running the 6.0.0 versions.
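
Put together, the two checks from an SSH session on the suspect host:

    # Healthy NSX-prepped hosts return VXLAN config here; mine returned nothing
    esxcli network vswitch dvs vmware vxlan list
    # Check the NSX kernel-module VIB versions; mine still showed 6.0.0 builds
    esxcli software vib list | grep -E 'esx-vsip|esx-vxlan'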

I went back through the process I'd used for upgrading NSX in the first place, and when the host came back up, DVS showed "in sync", NSX showed "green" install status and—most important of all—VMs could vMotion to the host and they'd stay connected!

UPDATE: The trick, it seems, is to give the NSX Manager an opportunity to install the new VIBs for ESXi v6.5 before taking the host out of maintenance mode. If you manually enter Maintenance Mode prior to upgrading, VUM will not take the host out of Maintenance afterward, which gives the Manager that opportunity to replace the VIBs. Once the Manager shows all hosts upgraded and green-checked, you can safely remove the host from Maintenance and all networking will work.
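
Condensed into a repeatable sequence, the order of operations looks like this (a sketch of the workflow, not an official procedure):

    # 1. Manually enter maintenance mode BEFORE remediating
    esxcli system maintenanceMode set --enable true
    # 2. Let VUM remediate the host to ESXi 6.5; it reboots but stays in maintenance
    # 3. Wait for NSX Manager to push the 6.5 VIBs, then verify
    esxcli software vib list | grep -E 'esx-vsip|esx-vxlan'
    # 4. Only after Host Preparation shows green, exit maintenance mode
    esxcli system maintenanceMode set --enable false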

Tuesday, September 11, 2012

Upgrading from vSphere 5.0 to 5.1

I upgraded my home lab from VMware vSphere 5.0 U1 to vSphere 5.1 today, using the GA bits that became available late yesterday, 10-September-2012.

vCenter Server

As with all vSphere upgrades, the first task was to upgrade the vCenter Server instance. This is also the first place you'll see changes from the installers that have been familiar since v4.
Install Screen for VMware vCenter Server
The first thing you notice is that two other options are prerequisites to installing vCenter Server: vCenter Single Sign On (SSO) and Inventory Service.

VMware does you the favor of offering an option to "orchestrate" the install of the two items prior to installing vCenter Server, but in doing so, it also keeps you from accessing all of the installation options (like HA-enabled SSO) available in the individual installs.

The next hiccup that I encountered was the requirement for the SSO database. Yes, it does support Microsoft SQL, and yes, it can install an instance of Express for you. But if you'd like to use the database instance that's supporting the current vCenter database, you'll have two additional steps to perform, outside of the VMware installer:
1) Run the database creation script (after editing it to use the correct data and logfile locations for the instance)
2) Reset the instance to use a static port (both steps are sketched below)
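
If you'd rather script step 1 than click through SQL Server Management Studio, sqlcmd works fine (a sketch; the instance name assumes the default, and the script names are from my notes on the 5.1 media, so check yours):

    rem Run the edited SSO database creation script against the existing instance
    sqlcmd -S .\VIM_SQLEXP -E -i rsaIMSLiteMSSQLSetupTablespaces.sql
    rem The users/logins script ships alongside it
    sqlcmd -S .\VIM_SQLEXP -E -i rsaIMSLiteMSSQLSetupUsers.sql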

It is documented as a prerequisite for installing the SSO service that the SQL Server instance must use a static port. But why? Because SSO relies on JDBC to connect to the database, and its JDBC configuration doesn't understand named instances, dynamic ports, or the Browser service and SQL Server Resolution Protocol.
Note the lack of options for named instances.
If, like many folks, you used the default install of SQL Express as your database engine, you have the annoyance of needing to stop what you're doing and switch your instance (VIM_SQLEXP, if using the default) to an unused, static port. For that setting to take effect, you must restart the SQL Server instance, which also means your vCenter services will need to be restarted.
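
After pinning the port (SQL Server Configuration Manager > Protocols > TCP/IP > IPAll), the bounce is quick from an elevated prompt (a sketch; the instance name assumes the default, and the JDBC URL is illustrative of why the static port matters):

    rem Restart the SQL Express instance so the static port takes effect
    net stop MSSQL$VIM_SQLEXP
    net start MSSQL$VIM_SQLEXP
    rem SSO then connects with a port-explicit JDBC URL along these lines:
    rem jdbc:sqlserver://sqlhost:1433;databaseName=RSA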

Again: this is a documented requirement in the installation guide, yet another reason to read through it before jumping in...

Once you have SSO installed, the Inventory Service is recognized as an upgrade to the existing service, as is the vCenter Server itself. Nothing really new or unique in adding these services, with the exception of providing the master password you set during the SSO installation.

Then the ginormous change: you've probably been installing the Web Client out of habit, but now you really need to do so if you'd like to take advantage of any new vSphere 5.1 features that are exposed to the user/administrator (like shared-nothing vMotion).

But don't stop there! Make sure you install the vSphere Client as well. I don't know which plug-ins for the Client are supposed to be compatible with the Web Client, but the ones I use the most—Update Manager and VMware Data Recovery—are still local-client only.

That's right: Update Manager 5.1—which is also one of the best ways to install ESXi upgrades for small environments that aren't using AutoDeploy or other advanced provisioning features—can only be managed and operated from the "fat" client.

Finally, one positive note for this upgrade: I didn't have to change out any of my 5.0 license keys for vCenter Server. As soon as vCenter was running, I jumped into the license admin page and saw that my existing license was consumed by the upgraded server, and no new "5.1" nodes in eval mode were present.

ESXi

Once vCenter is upgraded and running smoothly, the next step is to upgrade your hosts. Again, for small environments (which is essentially 100% of those I come across), Update Manager is the way to go. The 5.1 ISO from VMware is all you need to create an upgrade baseline, and serially remediating hosts is the safest way to roll out your upgrades.
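
As an aside, if a host can't be reached by VUM for some reason, the same upgrade can be done by hand with the offline bundle (a sketch, not the VUM path described above; the depot filename and profile name vary by build):

    # From the host's shell, with the offline bundle on a datastore
    esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-5.1.0-799733-depot.zip -p ESXi-5.1.0-799733-standard
    # Reboot to complete the upgrade
    reboot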

Like the vCenter Server upgrade, those 5.0 license keys are immediately usable in the new host, but with an important distinction: as near as I can tell, those old keys are still "aware" of their "vTax" limitations. It doesn't show in the fat client, but the web client clearly indicates a "Second Usage" and "Second Capacity" with "GB vRAM" as the units.
vRAM limits still visible in old license key: 6 x 96 = 576. 
I can only assume that converting old vSphere 5 to vCloud Suite keys will replace that "Second Capacity" value with "Unlimited" or "N/A"; if you've got a big environment or host Monster VMs, you'll want to get those new keys as soon as possible to eliminate the capacity cap.
The upgrade itself was pretty painless for me. I run my home lab on a pair of Dell PE2950 III hosts, and there weren't any odd/weird/special VIBs or drivers with which to contend.
Update: vCloud Suite keys did NOT eliminate the "Second Capacity" columns; vRAM is still being counted, and the old v5.0 entitlement measures are being displayed.

Virtual Machines

The last thing you get to upgrade is your VMs. As with any upgrade to vSphere (even some updates), VMware Tools becomes outdated, so you'll need to work in some downtime to upgrade to the latest version. Rumors to the contrary, upgrading Tools will require a reboot in Windows guests, at least to get the old version of Tools out of the way.

vSphere 5.1 also comes with a new VM hardware spec (v9), which you can optionally upgrade to as well. Like previous vHardware upgrades, the subject VM will need to be powered off. Luckily, VMware has added a small bit of automation to this process, allowing you to schedule the upgrade for the next time the guest is power-cycled or cleanly shut down.