Tuesday, January 8, 2019

Merry Christmas: Apple Macintosh SE

Christmas, 2018.
My brother gave me a circa-1989/1990 Apple Macintosh SE FDHD. It came in a "carrying" case and includes an external 800K floppy drive, an Apple Desktop Bus keyboard and mouse, the power cord, manuals, and System 6 install disks.

The system has 2.5MB RAM, a 20MB SCSI hard drive, and a 1.44MB internal floppy.

(Photo captions: 2.5MB RAM; the 20MB hard drive, with "stuff" on it; System 6, at your service...)
My wife wanted to know what I'd do with it... well, the answer is: play with it.

The first thing I did was look into "useful" upgrades: Network, Memory, Capacity.

  • I found an Asante MacCon adapter for the SE
  • I found 4 x 1MB RAM SIMMs for the SE
  • I found this gizmo: SCSI2SD

DING-DING-DING!

I can work with this.

And then I ran across this: macrepository.org

Wednesday, May 17, 2017

VBR v10 new hotness

Sitting in the general session is not typically the way I'd compose a new post, but I'm pretty stoked by some new, long-desired features announced for the next version of Veeam Backup & Replication (VBR), version 10.

First is the (long awaited) inclusion of physical endpoint backup management via VBR console. We've had Endpoint Backup for a while, which is awesome, and we've been able to use VBR repositories to store backups, but all management was at the endpoint itself. In addition to centralized management, the newest version of the managed endpoint backup (alright, alright... Agent) will support Microsoft Failover Clusters at GA!

Second is the new feature that significantly expands VBR's capability: the ability to back up NAS devices. Technically, it's via SMB or NFS shares, so you could target any share--including one on a supported virtual or physical platform--but the intention is to give great backup & recovery options for organizations that use previously-unsupported NAS platforms like NetApp, Celerra, etc.

Third--and most exciting to me, personally--is the addition of a replication mode utilizing VMware's vSphere APIs for I/O Filtering (VAIO). This replication mode uses a snapshot-free capture of VMDK changes on the source, with the destination being updated on a configurable interval (15 seconds by default). This new replication method is branded "Veeam CDP" (Continuous Data Protection). There are competing products on the market that offer similar capability, but Veeam is advertising that it is the first to leverage VAIO, while other products use either undocumented/unsupported APIs or old APIs intended for physical replication devices.

There are a number of other nice, new features coming--Object storage support, Universal APIs for storage integration, etc.--but these three will be the big, compelling reasons not only to upgrade to Version 10 when it arrives (for current customers) but also to upgrade your vSphere environments if you haven't already embraced vSphere 6.x.

Saturday, April 15, 2017

Upgrading to vSphere 6.5 with NSX already installed

This has been a slow journey: I have so many different moving parts in my lab environment (all the better for testing myriad VMware products) that migrating to vSphere 6.5 was taking forever. First I had to wait for Veeam Backup & Replication to support it (can't live without backups!), then NSX, then I had to decide whether to discard vCloud Director (yes, I'm still using it; it's still a great multitenancy solution) or get my company to give me access to their Service Provider version...

I finally (finally! after over a year of waiting and waiting) got access to the SP version of vCD, so it was time to plan my upgrade...

My environment supports v6.5 from the hardware side; no ancient NICs or other hardware anymore. I was already running Horizon 7, so I had two major systems to upgrade prior to moving vSphere from 6.0U2 to 6.5a:

  • vCloud Director: 5.5.5-->8.0.2-->8.20.0 (two-step upgrade required)
  • NSX: 6.2.2-->6.3.1
There was one hiccup with those upgrades, and I'm sure it will be familiar to people with small labs: the NSX VIBs didn't install without "manual assistance." In short, I had to manually place each host into maintenance mode, kick off the "reinstall" to push the VIBs into the boot bank, then restart the host. This wouldn't happen in a larger production cluster, but because mine is a 3-node vSAN cluster, it doesn't automatically/cleanly go into Maintenance Mode.
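For the curious, the "manual assistance" looked roughly like this from the ESXi shell (a sketch; the same steps work from the Web Client, and the vSAN decommission mode is my assumption for a 3-node cluster that can't fully evacuate its data):

  # Put the host into maintenance mode without evacuating vSAN data
  esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
  # ...kick off the "reinstall" from NSX Host Preparation, then reboot so the new VIBs load...
  reboot
  # Once NSX shows the host green, bring it back into service
  esxcli system maintenanceMode set -e false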

Moving on...

Some time ago, I switched from an embedded PSC to an external, so I upgraded that first. No problems.

Upgrading the stand-alone vCenter required a couple of tweaks: I uninstalled Update Manager from its server (instead of running the migration assistant: I didn't have anything worth saving), and I reset the console password for the appliance (yes, I'd missed turning off the expiration, and I guess it had expired). Other than those items? Smooth sailing.
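For the expiration bit, one way to turn it off is from the appliance's bash shell; this is a sketch assuming shell access via appliancesh (on 6.5, the same setting is also exposed in the VAMI):

  # From appliancesh, enable and drop into the bash shell
  shell.set --enabled True
  shell
  # Check root's current password aging, then disable expiration
  chage -l root
  chage -M -1 root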

With a new vCenter in place, I could use the embedded Update Manager to upgrade the host. I had to tweak some of the 3rd-party drivers to make it compatible, but then I was "off to the races."
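If you end up needing a similar driver tweak, it's a quick esxcli exercise on the host itself; the grep pattern and VIB name below are purely hypothetical:

  # Find the suspect third-party driver VIB (pattern is hypothetical)
  esxcli software vib list | grep -i driver
  # Remove it (name is hypothetical), then reboot the host
  esxcli software vib remove -n net-hypothetical-driver
  reboot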

After the first host was upgraded, I'd planned to migrate some low-priority VMs to it in order to "burn in" the new host and see whether some additional steps would be needed (i.e., removing VIBs for unneeded drivers that have caused PSODs in other environments I've upgraded). But I couldn't.

Trying to vMotion running machines to the new host, I encountered network errors. "VM requires Network X which is not available". Uh oh.

I also discovered that one of the two DVS (Distributed Virtual Switches) on the host was "out of sync" with vCenter. And the "resync" option that would normally be there was missing...

Honestly, I flailed around a bit, trying my google fu and experimenting with moving VMs around, both powered-on and off, as well as migrating to different vswitch portgroups. All failing.

Finally, something inspired me to look at my VXLAN status; it came to me after realizing I couldn't ping the VTEP vmknics: they sit on a completely independent TCP/IP stack, so a plain vmkping can't use a VTEP as its source interface.
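(For reference, reaching a VTEP normally means telling vmkping which netstack to use; this sketch uses a hypothetical vmknic and target IP, and it assumes the vxlan netstack actually exists on the host:)

  # Ping a remote VTEP, sourcing from this host's VTEP vmknic (interface and IP are hypothetical)
  vmkping ++netstack=vxlan -I vmk3 192.168.250.12
  # List the TCP/IP stacks present on the host
  esxcli network ip netstack list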

Bingo!

The command esxcli network vswitch dvs vmware vxlan list resulted in no data for that host, but valid config information for the other hosts.

A quick look at NSX Host Preparation confirmed it, and a quick look at the VIBs on the host nailed it down: esx-vsip and esx-vxlan were still running 6.0.0 versions.
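That VIB check is a one-liner from the ESXi shell, by the way:

  # Show the NSX kernel-module VIBs and their versions
  esxcli software vib list | grep esx-v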

I went back through the process I'd used for upgrading NSX in the first place, and when the host came back up, DVS showed "in sync", NSX showed "green" install status and—most important of all—VMs could vMotion to the host and they'd stay connected!

UPDATE: The trick, it seems, is to allow the NSX Manager an opportunity to install the new VIBs for ESXi v6.5 before taking the host out of Maintenance Mode. If you manually enter Maintenance Mode prior to upgrading, VUM will not take the host out of Maintenance afterward, which gives the Manager the opportunity to replace the VIBs. Once the Manager shows all hosts upgraded and green-checked, you can safely remove the host from Maintenance and all networking will work.
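Condensed into an order of operations, as a sketch (same vSAN decommission-mode assumption as above):

  # 1. Manually enter maintenance mode before remediating with VUM
  esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
  # 2. Remediate the host to 6.5 with VUM; it stays in maintenance mode after the reboot
  # 3. Wait for NSX Manager Host Preparation to show the host upgraded and green
  # 4. Only then, exit maintenance mode
  esxcli system maintenanceMode set -e false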