Tuesday, November 8, 2016

Virtual SAN Cache Device Upgrade

Replacing/Upgrading the cache+buffer device in VSAN

Dilemma: I've got a VSAN cluster at home, and I decided to switch from a single disk group per host to two, to give myself a bit more availability as well as additional buffer capacity (with all-flash, there's not much need for a read cache).

My scenario has some unique challenges for this transformation. First, although I already have the new buffer device to head the new disk group, I don't actually have all the new capacity disks I'll need for the final configuration: I'll have to borrow some of the existing capacity disks to get the second disk group going before the additional devices arrive. Second, the remainder of the VSAN datastore doesn't have enough free space to absorb a full evacuation while keeping every VM policy-compliant (which is part of why I'm adding capacity in addition to splitting the one disk group up).


The nominal way to perform my transformation is:
  1. Put the host into maintenance mode, evacuating all registered VMs.
  2. Delete the disk group, evacuating the data so all VMs remain storage policy-compliant.
  3. Add the new device.
  4. Rebuild the disk group(s).
I already took a maintenance outage during the last patch updates and added my new cache+buffer device to each host, so "Step 3" is already completed.
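For reference, steps 1 and 2 can also be driven from the ESXi shell. Here's a minimal sketch, assuming the vSAN-aware esxcli flags in 6.x and a placeholder name for the disk group's cache+buffer device:

    # Step 1: enter maintenance mode; ensureObjectAccessibility moves VMs
    # off the host without a full data evacuation
    esxcli system maintenanceMode set -e true -m ensureObjectAccessibility

    # Step 2: removing a disk group by its cache+buffer device decommissions
    # the entire group; evacuateAllData keeps objects policy-compliant
    esxcli vsan storage remove -s <cache+buffer device> -m evacuateAllData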
And then I hit on something: while removing the buffer device from a disk group decommissions the entire group, an individual capacity device can be removed without disturbing anything beyond the objects stored on that device alone. I have sufficient capacity in the remainder of the disk group (not to mention on the other hosts in the cluster) to operate on individual capacity devices one at a time.

So, here's my alternative process:

  1. Remove one capacity device from its disk group with full migration.

  2. Add that capacity device to the new disk group.

  3. Repeat for each remaining capacity device.

It takes longer because I'm doing the evacuation and reconfiguration "in series" rather than "in parallel," but it leaves me with more active, nominal capacity and availability throughout the process than decommissioning an entire disk group at once would.

My hosts will ultimately have two disk groups, but they'll break one "rule of thumb" by being internally asymmetric: my buffer devices are 400GB and 800GB NVMe cards, respectively, so when I'm fully populated with ten (10) 512GB capacity disks in each host, four (4) will be grouped with the smaller card and six (6) with the larger. When you keep in mind that Virtual SAN won't use more than 600GB of a cache+buffer device regardless of its size, the layout actually has some internal symmetry: each capacity disk will be (roughly) associated with 100GB of buffer, for a ~1:5 buffer:capacity ratio.
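The arithmetic behind that symmetry, using the numbers above:

    400GB buffer / 4 capacity disks = 100GB of buffer per disk
    600GB usable buffer (of 800GB) / 6 capacity disks = 100GB of buffer per disk
    100GB buffer : 512GB capacity disk ~= 1:5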

CLI alternative

Although this entire process can be performed using the Web Client, an alternative is to write a CLI script. The commands needed are all in the esxcli storage and esxcli vsan namespaces; combined with some shell/PowerShell scripting, it is conceivable that one could (a rough script follows the list):
  • Identify the available storage devices:
    esxcli storage core device list
  • Identify any existing disk groups, with their cache+buffer and capacity devices:
    esxcli vsan storage list
  • Remove one of the capacity disks with migration:
    esxcli vsan storage remove -d <device> -m evacuateAllData
  • Create a new disk group, using an available flash device from the core device list as the new group's cache+buffer device and the just-evacuated device as its first capacity device:
    esxcli vsan storage add -s <cache+buffer device> -d <device>
  • Loop through the remaining capacity devices, first removing then adding them to the new disk group. The esxcli vsan storage remove command blocks when run from the ESXi console, so the script should wait for full evacuation and availability before moving on to the next device.
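As promised above, here's a minimal sketch of that loop as an ESXi shell script. The device names are hypothetical placeholders; substitute identifiers from the esxcli storage core device list and esxcli vsan storage list output, and treat this as a starting point rather than a finished tool:

    #!/bin/sh
    # Hypothetical device names -- replace with identifiers taken from
    # 'esxcli storage core device list' / 'esxcli vsan storage list'
    NEW_CACHE="naa.new_nvme_buffer"
    REMAINING_DISKS="naa.capacity2 naa.capacity3 naa.capacity4"

    for DISK in $REMAINING_DISKS; do
        # Blocks until the device is fully evacuated from its old disk group
        esxcli vsan storage remove -d "$DISK" -m evacuateAllData || exit 1

        # Add the freed device to the group headed by the new buffer device
        esxcli vsan storage add -s "$NEW_CACHE" -d "$DISK" || exit 1
    done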
