Wednesday, May 1, 2013

The "home lab"

Back in 1998, I took an old tower system, put a copy of NetWare 5 on it, connected a modem and shoved it under a desk. I connected it to two other machines using discarded 10Base-2 network adapters, RG-59 cable and a couple of terminating resistors. That version of NetWare could do on-demand dialup to an ISP and share the connection with connected clients, and as long as you stuck with IPX for inter-machine communication, it was immune to most worms and viruses that spread via NetBIOS shares.

No, that wasn't a work or customer environment, it was my home network.

It wasn't much longer before I joined the beta team for Time Warner Cable, testing out the very first DOCSIS modems in the Kansas City area. Back then, it was faster for me to drive home, download patches (to an Iomega ZIP disk, remember those?) and drive back to work than to pull them down over the office network.

My, how things have changed.

I'm still the geek with a home system that rivals those found in some small businesses, but that system serves a very important purpose. This home environment affords me the luxury to experiment, learn and play with systems that are similar enough to business environments to be useful, but not so critical that the occasional upset results in lost revenue.

Having seen others post information on how they built their labs, I thought I'd post mine, too. Not so much for bragging rights—although there's always a bit of that in IT—but to show yet another variant in the theme.

Compute: 2 x Dell PE2950 III with
  • 2 x Intel Xeon E5345 @ 2.33 GHz
  • 32 GB DDR2 (8 x 4GB DIMMs)
  • Dell PERC 6/i
  • Intel PRO/1000 PT Dual Port Server Adapter
  • DRAC5
  • 2 x 1TB 7200RPM (RAID1) + 1 x 240GB SSD (one host); 4 x 500GB 7200RPM (RAID1/0) + 1 x 240GB SSD (the other)
  • Dual power supplies
  • 2 x APC BackUPS 1500
Network:
  • Cisco SG300-28
  • Netgear JGS524
  • Apple Airport Extreme
  • 2 x Apple Airport Express (1st Generation)
  • Apple Airport Express (2nd Generation)
  • Astaro ASG110
Storage:
  • iomega StorCenter ix2 "Cloud Edition", 2 x 1TB "stock" drives
  • 2 x iomega StorCenter px6-300d, 6 x 2TB HDS723020BLA642 (RAID5)
  • Synology DS2413+, 12 x ST2000DM001 (RAID5)

The current state is the result of years of gradual change. I picked up the first 2950 at Surplus Exchange for $200, with no drives, 4GB RAM and the stock network adapters (dual-port Broadcom). I replaced the RAM and added the Intel NIC and PERC 6/i through my friends at Aventis Systems. The second 2950 came from eBay, originally priced as part of a bulk lot with a "buy it now" exceeding $1000; it was one of several identical systems with 8GB RAM (8 x 1GB), a PERC 5/i and no DRAC. I was able to negotiate with the seller to send one without the RAM or PERC for $100, then added the same components (plus a DRAC) to match the first. In the latter case, it was more important to match motherboard & CPU than anything else, and I made out like a bandit because of it.

At the time, I put the big drives in the hosts because I didn't have shared storage with sufficient performance to host the VMs I needed to run regularly; running them locally was the only option. Later, I was able to add shared storage (first an iomega ix4-200d, which actually had enough "steam" to be reasonable with these systems), which I've been slowly upgrading to the current state.

The PE2x50 is a line of 2U rackmount servers. Rather than leaving them on a table (a pain if you ever need to get into them) or putting them in a rack (lots of floor space consumed), I hung them on the wall. Seriously. StarTech.com sells a 2U vertical-mount bracket that's intended for network gear or other lightweight equipment; I bolted a pair into the poured-concrete walls of my basement and hung the servers from them. The front "tabs" of the server are sufficiently sturdy to support the weight, and the arrangement gives me full access to the "innards" of the machines without pulling them from the wall.
[Photo: PE2950 on StarTech.com vertical rack brackets]
The Cisco switch doesn't run IOS (it doesn't run iOS, either, but that's a different joke), but it is a managed layer 2 switch (with basic layer 3 static routing available) that does VLANs, QoS, LAG/LACP and other fine functions you expect to find in the enterprise. And yes, I would prefer to have two, but I seem to do fine without a second as long as I don't need to upgrade the firmware, since a reboot takes the whole network down with it.
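One nice side effect of a managed switch is that you can script it. Here's a rough sketch of pushing a VLAN change to the SG300 over SSH with Python and paramiko; the address, credentials and port name are placeholders, and the exact Sx300 CLI syntax (IOS-like, but not IOS) is from memory, so treat it as a sketch rather than a recipe:

    import time
    import paramiko

    # Assumed switch address and credentials -- substitute your own.
    SWITCH = "192.168.1.2"
    USER = "cisco"
    PASSWORD = "changeme"

    # IOS-like commands to create VLAN 20 and trunk it on port gi1.
    # Sx300 syntax here is an assumption; verify against your firmware.
    COMMANDS = [
        "configure",
        "vlan database",
        "vlan 20",
        "exit",
        "interface gi1",
        "switchport mode trunk",
        "switchport trunk allowed vlan add 20",
        "end",
    ]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(SWITCH, username=USER, password=PASSWORD)

    # The switch expects an interactive session, so use a shell channel
    # and feed it one command at a time.
    shell = client.invoke_shell()
    for cmd in COMMANDS:
        shell.send(cmd + "\n")
        time.sleep(0.5)  # crude pacing, but plenty for a lab script
    print(shell.recv(65535).decode("ascii", "replace"))
    client.close()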

This environment is a bit more than just a lab, however; labs carry a sense of impermanence, while I have a number of systems that I never dump or destroy. This setup is the permanent residence of a pair of domain controllers, a pair of file servers, an Exchange 2010 MBX/HT host, a remote desktop server, a multi-purpose web server (regular web duty, plus Exchange CAS & RDS Gateway), SQL Server and, of course, vCenter. The remaining capacity gets used for "play" with other projects in the way one would normally use a lab: VMware View, Citrix XenApp/XenDesktop/XenMobile, vCOps and vCIN (vCenter Infrastructure Navigator).
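With that many permanent residents, it's handy to take a quick inventory of what's running without opening the vSphere Client. Here's a minimal sketch using pyVmomi (the vSphere Python SDK) to list every VM and its power state; the vCenter hostname and credentials are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter address and credentials -- substitute your own.
    # The unverified SSL context accommodates vCenter's self-signed cert.
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator",
                      pwd="changeme",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Walk every VM in the inventory and report its power state.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print("{0:40s} {1}".format(vm.name, vm.runtime.powerState))

    view.Destroy()
    Disconnect(si)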

And as of Monday (29-April-2013), it was upgraded to the latest/greatest version of vSphere: 5.1 U1.