Wednesday, February 5, 2014

Homelab, part trois

With VSAN in beta and a three-host minimum requirement, I was getting antsy: how could I test it in real-life scenarios? Sure, I could've spun up a virtual ESXi instance on one of my two hosts, but that would defeat the purpose of duplicating customer scenarios.

So, I once again reached out to my friends at Stallard Technologies, Inc. for a third host to match the two I'd already purchased. Thankfully, they still had my preferred configuration available, and they even had it on sale for the same price I'd paid before. With my original setup designed around those older 2U hosts, I was still using the 2U brackets to hold each of the 1U servers. With the addition of the third server, I simply slide it from side to side when I need access to the host closer to the wall.

At this point, I'd also reached the limit of the ports available on my Cisco SG300-28, so I needed some changes there, too. Rather than buy a single switch, I picked up a pair so I'd have redundancy for the hosts, uplinked both to the Cisco as my "core" switch, and cross-connected the new switches so that neither uplink set became a single point of failure. Given the number of ports I was planning for, another SG300 was out of the running on cost alone. Based on recommendations, I chose a pair of HP V1910-24G switches; together, they cost essentially the same as the SG300-28 had.
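
For the curious, here's a minimal sketch of that topology (host and switch names are placeholders, not the real ones): fail either access switch and every host should still find its way to the core.

```python
# Placeholder names: "sw-a"/"sw-b" stand in for the two HP V1910-24G units,
# "core" for the SG300-28, and the hosts are simply numbered.
links = {
    ("host1", "sw-a"), ("host1", "sw-b"),
    ("host2", "sw-a"), ("host2", "sw-b"),
    ("host3", "sw-a"), ("host3", "sw-b"),
    ("sw-a", "core"), ("sw-b", "core"),   # uplinks to the "core" switch
    ("sw-a", "sw-b"),                     # cross-connect between the pair
}

def reachable(start, target, failed=frozenset()):
    """Walk the surviving links, skipping any that touch a failed device."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        for a, b in links:
            if failed & {a, b} or node not in (a, b):
                continue
            nxt = b if node == a else a
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Fail each access switch in turn; every host should still reach the core.
for failed_switch in ("sw-a", "sw-b"):
    for host in ("host1", "host2", "host3"):
        assert reachable(host, "core", failed=frozenset({failed_switch}))
print("No single access-switch failure cuts a host off from the core.")
```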

I again turned to the StarTech equipment brackets, this time for the 3U model to accommodate the three switches.
Network Central
The addition of all those switches was nice, but I was starting to get worried by all those devices with single power supplies: sure, I could plug them all into a UPS, but what happens when the UPS quits? I searched for and found a working automatic transfer switch (ATS) on eBay for cheap, and added that to the mix, again using a StarTech 1U bracket to tuck it behind the desk.

With common VSAN configurations being designed around 10Gb/s, I knew that kicking the tires with 1Gb/s connections was going to require some careful design decisions: two connections per host, shared with the other IP storage in my environment, were going to be a bit thin. A side-effect of adding all those switch ports, however, was room to increase the number of pNIC ports in the hosts, so I added another dual-port Intel adapter to bring the per-host count up to 8 Gigabit ports.
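
To put rough numbers on that worry (nothing scientific here, just aggregate link speeds):

```python
# Rough arithmetic only: aggregate link speed, ignoring contention and overhead.
TYPICAL_VSAN_GBPS = 10        # what most VSAN reference designs assume per host
GIGE_GBPS = 1

shared_two_ports = 2 * GIGE_GBPS       # VSAN squeezed onto two ports shared with other IP storage
ip_storage_four_ports = 4 * GIGE_GBPS  # the four-port IP storage uplink set described below

print(f"Two shared Gigabit ports:   {shared_two_ports} Gb/s vs. {TYPICAL_VSAN_GBPS} Gb/s typical")
print(f"Four Gigabit storage ports: {ip_storage_four_ports} Gb/s, still shared with iSCSI and NFS")
```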

In my current setup, I have three VDS switches:
  1. Management & vMotion (2 x pNIC)
  2. Guest networking (2 x pNIC)
  3. IP Storage (4 x pNIC)
The first and last switches have vmkernel port groups; the IP Storage switch also has a VM port group for in-guest access to iSCSI. It looks a little like this:
In this configuration, I get four 1Gb/s "pinned" connections for VSAN, two "pinned" connections for iSCSI, and two vmkernel NICs for independently-addressed NFS shares. Between NIOC and the physical load balancing available in the VDS, I don't think I'll overwhelm the iSCSI with VSAN (or vice versa).
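
If it helps to see the layout as something other than a diagram, here's the same breakdown expressed as data (the port group names are mine for illustration, not the actual lab's), with a quick check that the pNIC count matches the eight Gigabit ports per host:

```python
# Port-group names are placeholders; the pNIC counts come straight from the list above.
vds_layout = {
    "vds-mgmt-vmotion": {"pnics": 2, "portgroups": ["vmk-mgmt", "vmk-vmotion"]},
    "vds-guest":        {"pnics": 2, "portgroups": ["vm-guest-networks"]},
    "vds-ip-storage":   {"pnics": 4, "portgroups": ["vmk-vsan", "vmk-nfs-1",
                                                    "vmk-nfs-2", "vm-guest-iscsi"]},
}

total_pnics = sum(v["pnics"] for v in vds_layout.values())
assert total_pnics == 8, total_pnics   # should match the 8 Gigabit ports per host
print(f"{total_pnics} pNICs consumed across {len(vds_layout)} VDS switches per host")
```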

This configuration also ensures that no single-chip failure can take out an entire block of services: adjacent pNICs carry different assignments, and each pair is split between on-board and expansion-card ports.
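
A concrete (and admittedly made-up) way to picture it: map each vmnic to the chip it hangs off and to its VDS, then fail each chip and make sure every VDS still has an uplink left. The vmnic numbering and chip layout below are assumptions for illustration, not my actual mapping.

```python
# Assumed layout: four on-board ports plus two dual-port Intel cards per host.
pnic_chip = {
    "vmnic0": "onboard", "vmnic1": "onboard", "vmnic2": "onboard", "vmnic3": "onboard",
    "vmnic4": "intel-card-1", "vmnic5": "intel-card-1",
    "vmnic6": "intel-card-2", "vmnic7": "intel-card-2",
}

# Each VDS draws ports from more than one chip, per the pairing described above.
pnic_vds = {
    "vmnic0": "vds-mgmt-vmotion", "vmnic4": "vds-mgmt-vmotion",
    "vmnic1": "vds-guest",        "vmnic5": "vds-guest",
    "vmnic2": "vds-ip-storage",   "vmnic3": "vds-ip-storage",
    "vmnic6": "vds-ip-storage",   "vmnic7": "vds-ip-storage",
}

for failed_chip in set(pnic_chip.values()):
    surviving = {vds for nic, vds in pnic_vds.items() if pnic_chip[nic] != failed_chip}
    stranded = set(pnic_vds.values()) - surviving
    assert not stranded, f"{failed_chip} failure would strand: {stranded}"
print("Every VDS keeps at least one uplink through any single-chip failure.")
```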

With 9 connections from each host (8 data NICs for the host; 1 management NIC for the hardware), it was time to get a little more serious about cable management. I ended up buying custom-colored patch cables from Monoprice, along with a length of "wire loom"; the loom allowed me to bundle the data cables together into a single, tidier bulk cable. As the diagram above shows, the colors were assigned to NICs, and each pair was additionally marked with some black electrical tape. All motherboard-based ports were patched into one switch, while the remaining ports were patched into the second. It's now all very tidy and consistent.

When I purchased those original hosts, I didn't get drives with them; I pretty much presumed that I'd boot from SD card and use shared storage for 100% of the workloads.

Enter PernixData FVP: I already had a couple of Intel 520-series SSDs from testing cache performance in my Iomega arrays (conclusion: it doesn't help), so the first disk installed in each of my hosts was a 240GB SSD.

Enter HP StoreVirtual VSA: I've been a long-time fan of the former LeftHand VSA, and after receiving a long-term NFR copy of the latest version as a vExpert, I decided I needed some capacity to do some testing. I searched around and decided that Seagate Constellation.2 enterprise SATA drives were the right choice: sure, they were limited by their interface and rotational speed, but they were also backed by a five-year manufacturer's warranty and had a decent price point for 500GB, all things considered. Two more spindles added to each host.

Enter the third host and VSAN: although the hosts already had what I'd need to do testing (SSD & HDD), I didn't want to tear down the other storage, so it was back to the well for more SSD and HDD.
As they stand today, all three hosts have a full inventory of disk: 2x 240GB SSD and 4x 500GB HDD.
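
For the back-of-the-envelope crowd, here's what that inventory buys the VSAN cluster if (and this is my assumption) one SSD and two HDDs per host stay dedicated to it while FVP and the StoreVirtual VSA keep the rest:

```python
# Assumption: per host, 1x 240GB SSD + 2x 500GB HDD go to VSAN; the other SSD
# and two HDDs stay with PernixData FVP and the StoreVirtual VSA respectively.
HOSTS = 3
VSAN_SSD_GB_PER_HOST = 240
VSAN_HDD_GB_PER_HOST = 2 * 500

raw_capacity_gb = HOSTS * VSAN_HDD_GB_PER_HOST
raw_flash_gb = HOSTS * VSAN_SSD_GB_PER_HOST

print(f"Raw VSAN capacity tier: {raw_capacity_gb} GB")
print(f"Raw VSAN flash tier:    {raw_flash_gb} GB "
      f"({raw_flash_gb / raw_capacity_gb:.0%} of raw capacity)")
# With the default FTT=1 mirroring, usable space lands at roughly half the raw
# capacity tier before overhead -- plenty for kicking the tires.
```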
