Tuesday, February 10, 2015

HP StoreVirtual VSA: The Gold Standard

HP has owned the LeftHand storage platform since acquiring LeftHand Networks in late 2008, and has made steady improvements since. The product was already officially supported as a VM before the acquisition; not only did HP keep that option alive, it has embraced the product as a cornerstone of its "software-defined storage" marketing message.

Although other products existed back in 2008, a virtualized LeftHand node was one of the first virtual storage appliances (VSAs) available with support for production workloads.

Fast-forward to August 2012: HP elects to rebrand the LeftHand product as StoreVirtual, renaming SAN/iQ to LeftHand OS to preserve its heritage. The 10.0 release was tied to the rebranding, and the VSA arm of the portfolio (HP never stopped producing "bare-metal" arrays based on its 2U DL380 server chassis) promised additional enhancements like increased capacity (10TB instead of 2TB) and better performance (2 vCPUs instead of 1), along with price drops.

The 11.0 release added even more features (11.5 is the production/shipping version for both bare-metal and VSA), chief among them, in my opinion, being Adaptive Optimization (AO): the ability for node-attached storage to be characterized into one of two tiers.

Note that this isn't a flash/SSD-specific feature! Yes, it works with solid state as one of the tiers (and that is the preferred architecture), but any two performance-versus-capacity tiers can be configured for a node: a pair of 15K RPM SAS drives as performance Tier 0 with 4-8 NL-SAS drives as capacity Tier 1 is just as legitimate. HP cautions the architect, however, not to mix nodes with differing AO characteristics in one cluster, just as it cautions against mixing dissimilar single-tier nodes.
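To make the tiering concept concrete, here is a minimal sketch in Python of how a generic two-tier placement heuristic behaves. This is purely illustrative and emphatically not HP's actual AO implementation; the capacity constant and all names are invented. The idea is simply: track which pages are hot, keep the hottest on the fast tier, and let everything else settle onto the capacity tier.

    # Illustrative two-tier placement heuristic -- NOT HP's AO algorithm.
    from collections import defaultdict

    TIER0_PAGES = 1024  # hypothetical Tier 0 (fast tier) capacity, in pages

    class TwoTierHeuristic:
        def __init__(self):
            self.heat = defaultdict(int)  # page id -> accesses this interval
            self.tier0 = set()            # pages resident on the fast tier

        def record_io(self, page_id):
            self.heat[page_id] += 1

        def rebalance(self):
            # Promote the hottest pages up to Tier 0 capacity; everything
            # else lives on (or demotes to) the capacity tier.
            hottest = sorted(self.heat, key=self.heat.get, reverse=True)
            self.tier0 = set(hottest[:TIER0_PAGES])
            self.heat.clear()             # start a new sampling interval

The real product makes these decisions per node against whatever two tiers are configured; the point here is only that any fast/slow pair can play the Tier 0/Tier 1 roles.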

Personally, I've played with the StoreVirtual VSA off and on over the years. The original holdback to getting deeply into it was the trial duration: 30 to 60 days is insufficient to "live with" a product and really get to know it. In early 2013, however, HP offered NFR licensing to qualified members of the VMware vExpert community, and those licenses carried a year-long term.

Unfortunately, the hosts I was running at home were poorly suited to supporting the VSA: limited RAM and 2-4 grossly inferior desktop-class SATA hard drives in each of two hosts. I'd still load up the VSA for test purposes: not for performance, but to better understand the LeftHand OS, how failures are handled, how configurations are managed, and how the product interacts with other software like Veeam Backup & Replication. But then I'd tear down the cluster when I finished with it in order to reclaim the consumed resources.

When PernixData FVP was still in pre-GA beta, I was able to make some system upgrades to add SSDs to newer hosts (still with essentially zero local capacity, however) and proved to myself that a) solid state is very effective at increasing storage performance, and b) there is a place for storage in the local host.

With the release of the first VMware Virtual SAN beta, I decided it was time to make additional investments in my lab: I not only added a third host (the minimum for a supported VSAN deployment) but also provisioned all three with a second SSD and enterprise SATA disks for the experiment. In that configuration, I used one SSD for iSCSI-based performance acceleration (using the now-GA FVP product) and the second SSD for VSAN's solid-state tier. My hosts remained limited in the number of "spinning disk" drives each could hold (four), but the aggregate across three hosts seemed not only reasonable but also workable in practice.

Unfortunately, I was plagued by hardware issues in this configuration: rarely a week went by without either FVP or VSAN complaining about a drive going offline or being in "permanent failure," and in the weeks when that didn't occur, the Profile-Driven Storage service of vCenter (critical to consuming VSAN from other products like vCloud Director or Horizon View) would need to be restarted. Getting FVP or VSAN working correctly again usually required rebooting the host reporting the issue; in some cases, VMs had to be evacuated from VSAN to free up enough space to retain "availability."
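(For what it's worth, on the Linux-based vCenter Server Appliance of that era the Profile-Driven Storage service could be bounced from the console with "service vmware-sps restart", assuming the vSphere 5.x service name; a Windows-based vCenter exposes it as a standard Windows service instead.)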

In short, with my 1Gbps networking and consumer-grade disks and HBAs, the lab environment made VSAN and FVP a little too much work.

But I still had that VSA license... If I could get a better HBA, one that performed true hardware RAID and offered deeper queue depths, not to mention other enterprise SATA/SAS capabilities, I'd be able to leverage the existing disk investment with the VSA and have a better experience.

I was able to source a set of Dell PERC H700 adapters, cables, and cache batteries from eBay; they were pulled from R610 systems, so dropping them into mine was trivial, and the whole set cost considerably less than a single kit from Dell. Although I could have rebuilt the VSAN and FVP environments on the new HBA (each disk in the system would have needed to be configured as a single-spindle RAID0 'virtual volume'), I went with a RAID1 set for the pair of SSDs and a RAID5 set for the spindles. I would continue leveraging PernixData for acceleration using its RAM-backed function, but I was done messing with VSAN for now.

Setting up the v11.5 VSA initially gave me pause: I was booting ESXi from SD card, so I could dedicate 100% of the SSD/HDD capacity to the VSA, but how best to present it? If the LeftHand OS had drivers for the PERC array (possible: the core silicon of the H700 is an LSI/Symbios product, which might be supported in spite of being a Dell OEM part), I could use DirectPath I/O to pass the controller through, provided another datastore was available on which to run the VSA itself. A second, similar alternative would be to manually create physical RDM mappings for the RAID volumes, but that still left the problem of a datastore for the VSA. Yes, I could run the VSA on another array, but if the host ever had issues with that array, I'd also end up with issues on my LeftHand cluster. Not a good idea!

My final solution is a hybrid: the HDD-based RAID group is formatted as a VMFS5 datastore, and the VSA is the only VM using it. A large, 1.25TB 'traditional' VMDK is presented from that same datastore (leaving ~100GB free for the VSA boot drive and files), while the SSD-based RAID group is presented as a physical-mode RDM. This configuration permitted me to enable AO on each node and get an SSD performance boost along with some deep storage from the collection of drives across all three nodes.
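For anyone wanting to reproduce that layout, both pieces can be created from the ESXi shell with vmkfstools; the device NAA ID, datastore, and VM names below are hypothetical placeholders (pull the real device ID from esxcli storage core device list):

    # Thick-provisioned 1.25TB (1280GB) data disk on the HDD datastore
    vmkfstools -c 1280G /vmfs/volumes/vsa-hdd-ds/vsa01/vsa01-data.vmdk

    # Physical-mode RDM pointer to the SSD RAID group
    vmkfstools -z /vmfs/devices/disks/naa.600508e000000000deadbeef00000000 \
        /vmfs/volumes/vsa-hdd-ds/vsa01/vsa01-ssd-rdmp.vmdk

Both VMDKs are then attached to the VSA as existing disks.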

In practice, this array has been more trouble-free than my VSAN implementation on (essentially) identical hardware. A key difference, however, has been inter-node communication: with VSAN, up to four interfaces can be configured simultaneously for inter-node traffic, increasing bandwidth and lowering latency. Even with the lower performance characteristics of the disks and HBA in each host, VSAN could saturate two of the four gigabit interconnects I had configured (when performing sequential reads and writes, e.g., backups and Storage vMotion), roughly 240MB/s across two links versus the ~120MB/s ceiling of one, so the single gigabit connection available to the VSA was very noticeable.

I have since migrated my network environment to 10Gbps Ethernet for back-haul connectivity (iSCSI, NAS, vMotion) and have anecdotal evidence of improved performance from the LeftHand array. I'll update this post with objective test results when the opportunity presents itself.
