Sunday, September 15, 2013

Revisiting the Home Lab

Although my home lab has performed flawlessly (aside from the occasional lost hard drive in the NAS boxes), the cornerstone equipment, a pair of Dell PowerEdge 2950 hosts, was a little dated when acquired and has certainly shown its age as post-vSphere 5.0 releases (and their accessory services and features) have come out.

In short, they weren't cutting it anymore: not enough RAM, physical NICs that didn't support NIOC, limited cores, limited CPU/RAM features for the hypervisor, etc.

But, having "saved my pennies" while working with the 2950s, I have treated myself to new kit.

I could have gone down the "Baby Dragon" path had I more free time for system builds and for supporting frankenservers, but instead I chose recertified Dell PowerEdge R610s from Stallard in my home town of Kansas City, getting another set of enterprise-class servers with a 1-year warranty from STI to boot.

Among my criteria, I knew I wanted a recent generation of hardware with 96 to 128GB of RAM per host. A rackmount chassis would be a good replacement for the existing hosts, as long as the "ears" on the new hosts would work with my crazy vertical setup. PCIe was a given, but with the enormous capacity available across three NAS boxes, local storage wasn't a consideration. Finally, I wanted a system with an onboard SD card reader so I could boot from flash.

There are many, many options that fit the bill: I watched eBay and Craigslist, checked my company's NFR purchase options, and contacted several recommended resellers of reconditioned gear. With STI in the area and the option to take direct delivery (no shipping!), nothing else came close.

I took delivery of the new servers on Friday, 13-Sept, and set about swapping new for old. After putting the first host into maintenance mode and removing it from storage & DVS, I dismounted it and swapped the 2-port Intel server NIC into the new server.

I then mounted the new server after dropping an 8GB "Class 10" SD card into the internal reader, and booted to the vSphere install CD.

For those of you who have done this, you know how quickly ESXi installs; a little additional configuration for storage and VMkernel NICs and I was ready to perform my first vMotion to the new host (what I normally consider the definitive test of a cluster setup). A final pass with Update Manager to get the host fully patched, and I was "in production" on the first host.
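For reference, the host-side steps above (maintenance mode, then VMkernel networking for vMotion) can be sketched from the ESXi 5.x shell. The interface name, port group, and IP addressing below are placeholders for whatever your environment uses, not the exact values from my lab, and most of this can equally be done from the vSphere Client:

```shell
# Sketch only: assumes ESXi 5.x shell access; names and IPs are placeholders.

# Put the host into maintenance mode before making cluster changes.
vim-cmd hostsvc/maintenance_mode_enter

# Create a VMkernel interface on an existing port group for vMotion traffic.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion

# Give it a static address on the vMotion network.
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --ipv4=192.168.10.21 --netmask=255.255.255.0 --type=static

# Tag the interface for vMotion.
vim-cmd hostsvc/vmotion/vnic_set vmk1

# Exit maintenance mode once storage and networking are squared away.
vim-cmd hostsvc/maintenance_mode_exit
```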

I repeated the steps with my second old/new pair, and even with interruptions around the house, was done with the swap in a handful of hours. (Try THAT with a non-virtualized system!)

New lab specs:

  • Dell PowerEdge R610
  • 96GB RAM (12x8GB)
  • 2x Intel Xeon E5540 (quad-core, 2.53GHz, Hyper-Threading enabled)
  • Dell SAS 6/iR; No internal storage
  • Boot from 8GB SD Card
  • 6x 1Gbps (4x Broadcom 5709, 2x Intel 82571EB)
  • Redundant power
  • iDRAC Enterprise for "headless" operation

Closing Notes:
By choosing to go with enterprise-class equipment, both now and previously, I also have a charitable organization that is quite interested in taking my old 2950s; while I may have outgrown them, these servers would be the first "true" servers they've ever had. Additionally, the servers remain on the VMware HCL, so it's quite likely I'll be able to get them to run VMware rather than doing physical server installs on them.
