Sunday, September 15, 2013

Revisiting the Home Lab

Although my home lab has performed flawlessly (aside from the occasional lost hard drive in the NAS boxes), the cornerstone equipment—two Dell PowerEdge 2950 hosts—was a little dated when acquired and has certainly shown its age as post-vSphere 5.0 releases (and their accessory services & features) come out.

In short, they weren't cutting it anymore: not enough RAM, physical NICs that don't support NIOC, limited cores, limited CPU/RAM features for the hypervisor, etc.

But, having "saved my pennies" while working with the 2950s, I have treated myself to new kit.

I could have gone down the path of "Baby Dragon"—and might have, had I more free time for system builds and supporting frankenservers—but instead chose recertified Dell PowerEdge R610s from Stallard in my home town of Kansas City, getting another set of enterprise-class servers with a 1-year warranty from STI to boot.

I had several criteria. I knew I wanted a recent generation of hardware with 96 to 128GB of RAM per host. Rackmount would be a good replacement for the existing hosts, as long as the "ears" on the new hosts would work with my crazy vertical setup. PCIe was a given, but with the enormous capacity available among 3 NAS boxes, local storage wasn't a consideration. Finally, I wanted a system with an inboard SD card reader so I could boot from flash.

There are many, many options to fit the bill: I watched eBay and Craigslist, checked my company's NFR purchase options, and browsed several recommended resellers of reconditioned gear. With STI in the area and the option to take direct delivery (no shipping!!), nothing else came close.

I took delivery of the new servers on Friday, 13-Sept, and set about swapping new for old. After putting the first host into maintenance mode and removing it from storage & DVS, I dismounted it and swapped the 2-port Intel server NIC into the new server.

I then mounted the new server after dropping an 8GB "Class 10" SD card into the internal reader, and booted to the vSphere install CD.

For those of you who have done this, you know how quickly ESXi installs; a little additional configuration for storage and VMkernel NICs and I was ready to perform my first vMotion to the new host (what I would normally consider the definitive test of a cluster setup). A final step to run Update Manager against the host to get it fully patched, and I was "in production" on the first host.

I repeated the steps with my second old/new pair, and even with interruptions around the house, was done with the swap in a handful of hours. (Try THAT with a non-virtualized system!)

New lab specs:

  • Dell PowerEdge R610
  • 96GB RAM (12x8GB)
  • 2x Intel E5540 (Quad-core, 2.53GHz, Hyperthreading enabled)
  • Dell SAS 6/iR; No internal storage
  • Boot from 8GB SD Card
  • 6x 1Gbps (4x Broadcom 5709, 2x Intel 82571EB)
  • Redundant power
  • iDRAC Enterprise for "headless" operation
Closing Notes:

By choosing to go with enterprise-class equipment, both now and previously, I also have a charitable organization that is quite interested in taking my old 2950s; while I may have outgrown them, these servers would be the first "true" servers they'd ever had. Additionally, the servers remain on the VMware HCL, so it's quite likely I'll be able to get them running VMware rather than attempting physical server installs on them.

Wednesday, June 19, 2013

Shrinking the Server 2012 VM by managing the WinSxS repository

With the release of Server 2012, administrators have more control over the data being held in the local %systemroot%\WinSxS (Windows Side-By-Side) folder through the use of PowerShell cmdlets.

The previously-supported Uninstall-WindowsFeature cmdlet has been enhanced with a new argument: -Remove.

When used, the cmdlet will not only uninstall the feature (if installed), it will remove the installer payload from the WinSxS folder. Additionally, a feature that's not installed—but still available in the SxS folder—can be removed as well.

This is particularly valuable when a server VM is fully-deployed and you don't need any additional features; simply run the following cmdlet to remove all that extra cruft:

Get-WindowsFeature | where {$_.InstallState -Eq "Available"} | Uninstall-WindowsFeature -Remove
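Afterward, you can confirm what was stripped; once a feature's payload is gone from the SxS folder, its install state reports as "Removed":

```powershell
# List features whose installer payload has been removed from WinSxS
Get-WindowsFeature | Where-Object { $_.InstallState -Eq "Removed" } |
    Format-Table Name, DisplayName, InstallState -AutoSize
```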

But what if you need one of those removed features back? There are several mechanisms available; the most transparent one is to use the Add-WindowsFeature cmdlet while connected to the Internet (or with network access to the local Windows Server Update Services host defined in the domain policy). In this use case, the system will retrieve a network copy of the feature and install it.

It might be more efficient, however, to use a readily-available ISO; in that case, you mount the ISO file to the VM and use the -source argument to specify the image for installing the feature:

Add-WindowsFeature $feature -Source:WIM:D:\sources\install.wim:1

There is a bit of a trick in there, too: what's that index number at the end of the source specification? The WIM (Windows IMage) file can contain multiple images; you specify the appropriate image index for the OS edition you're managing. How do you know which index to choose? Use the dism command:

dism /Get-WimInfo /WimFile:D:\Sources\install.wim

Personally, I'm going with a thin-and-trim template for my Server 2012 VMs:
  1. Install the Server Core version
  2. Add the "Minimal GUI" management interface
    Install-WindowsFeature Server-Gui-Mgmt-Infra
  3. Remove all the available features (above)
  4. Create a custom unattended sysprep configuration file
With this as a base template, I can easily add needed features (including the full server GUI) from a datastore-based ISO, always accessible to the VM.
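Steps 2 and 3 boil down to a couple of cmdlets once Server Core is up—a sketch, run from an elevated PowerShell session in the guest:

```powershell
# Step 2: add the "Minimal GUI" management interface to a Server Core install
Install-WindowsFeature Server-Gui-Mgmt-Infra -Restart

# Step 3 (after the reboot): strip the payload for everything left uninstalled
Get-WindowsFeature | Where-Object { $_.InstallState -Eq "Available" } |
    Uninstall-WindowsFeature -Remove
```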

Thursday, May 23, 2013

Wednesday, May 1, 2013

The "home lab"

Back in 1998, I took an old tower system and put a copy of NetWare 5 on it, then connected a modem and shoved it under a desk. I connected it to two other machines using discarded 10Base-2 network adapters, RG-58 coax and a couple of terminating resistors. That setup could do on-demand dialup to an ISP, could share that connection with connected clients, and—as long as you stuck with IPX for inter-machine communication—was immune to most worms and viruses that spread via NetBIOS shares.

No, that wasn't a work or customer environment, it was my home network.

It wasn't too much longer that I joined the beta team for Time Warner Cable, testing out the very first DOCSIS modems in the Kansas City area. Back then, it was faster for me to drive home, download patches (to an Iomega ZIP disk—remember those?) and drive back to work than to use the office network to do it.

My, how things have changed.

I'm still the geek with a home system that rivals those found in some small businesses, but that system serves a very important purpose. This home environment affords me the luxury to experiment, learn and play with systems that are similar enough to business environments to be useful, but not so critical that the occasional upset results in lost revenue.

Having seen others post information on how they built their labs, I thought I'd post mine, too. Not so much for bragging rights—although there's always a bit of that in IT—but to show yet another variant in the theme.

Compute: 2 x Dell PE2950 III with
  • 2 x Intel Xeon E5345 @ 2.33 GHz
  • 32 GB DDR2 (8 x 4GB DIMMs)
  • Dell PERC 6/i
  • Intel PRO/1000 PT Dual Port Server Adapter
  • DRAC5
  • 2 x 1TB 7200RPM (RAID1), 1 x 240GB SSD; 4 x 500GB 7200RPM (RAID1/0), 1 x 240GB SSD
  • Dual power supplies

Power:
  • 2 x APC BackUPS 1500

Network:
  • Cisco SG300-28
  • Netgear JGS524
  • Apple Airport Extreme
  • 2 x Apple Airport Express (1st Generation)
  • Apple Airport Express (2nd Generation)
  • Astaro ASG110

Storage:
  • iomega StorCenter ix2 "Cloud Edition", 2 x 1TB "stock" drives
  • 2 x iomega StorCenter px6-300d, 6 x 2TB HDS723020BLA642 (RAID5)
  • Synology DS2413+, 12 x ST2000DM001 (RAID5)

The current status is the result of years of slow change. I picked up the first 2950 at Surplus Exchange for $200, with no drives, 4GB RAM and stock network (dual-port Broadcom). I replaced the RAM and added the Intel NIC and PERC 6/i through my friends at Aventis Systems. The second 2950 was from eBay, originally priced as part of a bulk lot with a "buy it now" exceeding $1000; it was only one of several identical systems with 8GB RAM (8 x 1GB), PERC 5/i and no DRAC. I was able to negotiate with the seller to send it without the RAM or PERC for $100, then added the same (plus DRAC) to match the first. In the latter case, it was more important to match MB & CPU than anything else, and I made out like a bandit because of it.

At the time, I put the big drives in the hosts because I didn't have shared storage with sufficient performance to host the VMs I needed to run regularly; running them locally was the only option. Later, I was able to add shared storage (first, an iomega ix4-200d, which actually had enough "steam" to be reasonable with these systems), which I've been slowly updating to the current status.

The PE2x50 is a line of 2U rackmount servers. Rather than leaving them on a table (a pain if you ever need to get into them) or putting them in a rack (lots of floor space consumed), I hung them on the wall. Seriously. A 2U vertical-mount rack bracket—intended for network gear or other light equipment—did the trick: I bolted a pair into the poured-concrete walls of my basement and hung the servers from them. The front "tabs" of the server are sufficiently sturdy to support the weight, and this gives me full access to the "innards" of the machines without pulling them from the wall.
PE2950 on vertical rack brackets.
The Cisco switch doesn't run IOS (it doesn't run iOS, either, but that's a different joke), but it is a layer 2 managed switch that does VLAN, QoS, LAG/LACP and other fine functions you expect to find in the enterprise. And yes, I would prefer to have two, but I seem to do fine without as long as I don't need to upgrade the firmware.

This environment is a bit more than just a lab, however; labs have an air of impermanence to them, while I have a number of systems that I never dump or destroy. This setup is the permanent residence of a pair of domain controllers, a pair of file servers, an Exchange 2010 MBX/HT host, a remote desktop server, a multi-purpose web server (it does regular web, Exchange CAS & RDS Gateway), SQL Server and, of course, vCenter. The remaining capacity gets used for "play" with other projects in the way one would normally use a lab: VMware View, Citrix XenApp/XenDesktop/XenMobile, vCOps, vCIN (Infrastructure Navigator).

And as of Monday (29-April-2013), it was upgraded to the latest/greatest version of vSphere: 5.1 U1.

Tuesday, April 23, 2013

Moving the vSphere 5.1 SSO database

Plenty of resources exist for moving MS SQL Server-hosted vCenter and Update Manager databases. But what about the database for the new Single Sign-On service?

Easy, as long as you get the SQL users moved and change the hostname string in two places.

The easy part is getting the users moved. There's a handy Microsoft KB article for transferring logins from one server to another. I've never had a problem with that.

The harder part is getting the SSO "bits" to accept a new hostname. Thankfully, Gabrie van Zanten was able to document this, along with some other pieces related to SSO database management.

So here are the steps:
  1. Execute the sp_help_revlogin stored procedure on the existing SQL server to get the RSA_USER and RSA_DBA logons.
  2. Merge the create user lines with the script from the vCenter SSO Install source. This makes certain you have all the necessary attributes for these users.
  3. Shut down the SSO service.
  4. Backup the current RSA database.
  5. Restore the backup on the new server.
  6. Execute the user creation lines from Step 2.
  7. In a command shell, go to the SSO Server's utils folder (in a default install, the path is C:\Program Files\VMware\Infrastructure\SSOServer\utils) and use the rsautil script to modify the database location:
    rsautil configure-riat -a configure-db --database-host hostname
  8. Verify your changes by inspecting .\SSOServer\webapps\ims\WEB-INF\classes\
  9. Update the field in the .\SSOServer\webapps\lookupservice\WEB-INF\classes\ file.
  10. Restart the SSO service.
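For steps 4 through 6, the database work is plain T-SQL. A minimal sketch—assuming the default RSA database name from the SSO installer, with a hypothetical backup path:

```sql
-- Step 4 (old server): back up the SSO database
BACKUP DATABASE RSA TO DISK = 'C:\Backup\RSA.bak';

-- Step 5 (new server): restore from the backup file
RESTORE DATABASE RSA FROM DISK = 'C:\Backup\RSA.bak';

-- Step 6: now run the RSA_USER/RSA_DBA creation lines merged in step 2
```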

Thursday, March 14, 2013

Windows Sysprep and VM creation

I've seen a ton of blog posts, reference documents and white papers all instructing you—the virtualization admin—to build "template" VMs in the following fashion:

  1. Create a VM and install the OS
  2. Install standard applications
  3. Set all your configuration settings
  4. Patch & Update the OS and applications
  5. Sysprep
  6. Convert to Template
I'm here to tell you now: stop doing Step 5. Don't sysprep those golden images. At least, don't do it in your template process.

At the very least, using this model means you won't be able to update that template more than 3 times: doing a standard sysprep—without a custom unattended settings file—will "rearm" the software activation only so many times. If you run out of "rearms" you get the joy of rebuilding your golden image.

There is a way around the sysprep limit—see the SkipRearm post for my method—but that still leaves you with a VM template that's going to roll through the remainder of the Sysprep process the first time you turn it on—which you'll be doing every time you want to patch or update the image.

Instead, make Sysprep part of your new VM creation process. With VMware, you can easily convert back-and-forth between a template and a VM; in fact, for the longest time, I never even converted VMs to templates because there didn't seem to be much value in them: everything you can do with a template, you can do with a VM, while there are things you can do with a VM that you can't do with a template.

So leave your golden image at Step 4; you'll be revisiting it every month anyway, right?

Every time you need to spin up a VM from that point forward, you will have a (relatively) recently-patched starting point. In fact, if you're really efficient, you'll run the template VM before creating a new machine from it and patch that machine. Either way, you'll be patching a VM; but if you need to spin up more than one VM, the patching is already complete!

So here's my process:
A) Create your golden image
B) Update your golden image
C) Clone a new VM from the golden image
D) Run Sysprep (with or without SkipRearm and other unattended settings)
E) Repeat steps C-D as needed
F) Repeat step B as needed

Note: I realize there are certain setups that require you to leave a template/golden image at the post-Sysprep shutdown state. In those cases, just make sure you've got a snapshot prior to Sysprep so you can revert to a state before it was ever run.
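Steps C and D can be scripted; here's a rough PowerCLI sketch (the VM, host and datastore names are hypothetical—substitute your own):

```powershell
# Step C: clone a new VM from the golden image (names are examples)
$gold = Get-VM -Name "W2012-Gold"
$esx  = Get-VMHost | Select-Object -First 1
New-VM -Name "NewServer01" -VM $gold -VMHost $esx -Datastore "px6-NFS01"
Start-VM -VM "NewServer01"

# Step D: inside the guest, run Sysprep with your unattended settings, e.g.:
#   C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:unattend.xml
```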

Sunday, February 10, 2013

It's not a watch, it's a Pebble

After being prompted by a tweet from Chris Grossmeier (@cgrossmeier) to check out a Kickstarter project he decided to back, I joined him in the ranks of backers for the single most successful project in Kickstarter history. Originally requesting $100,000 to build a modest little "smart watch," Pebble Technology founder Eric Migicovsky found his project with over $10 million in backing before "selling out."

With that sort of support, Migicovsky revised the scope and breadth of the project, including additional features for the device and plans to retail the watch to non-backers. After many delays—not surprising with Kickstarter projects, but wholly appropriate for the new scope and scale of this one—a Pebble was delivered to my eager hands.
The friendly box design
Inside the spartan box: Pebble watch & its USB power cord
Kickstarter Edition
When first "firing up" the watch, it simply prompts you to pair it with a supported smartphone; in my case, I'd already downloaded the Pebble app from the Apple App Store and was ready to get going.

iOS App
First impressions are everything. It took very little effort to accomplish the Bluetooth pairing, and a software update for the watch was already available for transfer: it shipped with v.1.5.3 and was updated to v1.7.1. With the hints from the iOS app, I was also able to get some of the interactive functions going between watch and phone; it's also the conduit for loading additional watch faces.

Status and tips
App & Watchface Loading
At this time, the SDK isn't publicly available, but a watch face design tool and app creator SDK are in the works. The watch comes with three "hard coded" watch faces, and five more are available in the iOS app. The built-in watch faces can't be deleted, and there's no function for hiding or reordering the menu: new faces always appear below the lowest permanent menu item (Settings).

Built-in Watch Face Options
Additional Menu Options
Default Watch Face
Strangely enough, while the Pebble has a configuration option for setting whether it's a 12- or 24-hour clock by default, one of the original, optional watch faces ("Big Time") was purpose-built to ignore the setting. Since my original inquiries about the behavior, the Pebble team has replaced the original design with a pair of watch faces—Big Time 12 and Big Time 24—to accommodate user desires rather than updating the single face to honor the system setting. This makes me wonder a bit about how sophisticated the API for custom watchfaces is going to be...

Watchfaces
Two faces instead of one
The Pebble is a work in progress: there are some gyrations that one must complete to get notifications for Mail and non-cell applications going (SMS and Call notifications work as soon as pairing is complete) for iPhone, and there are plenty of bugs being discussed on the Pebble forums. Luckily, the guys behind the project "get it," and have been serious about keeping backers updated.

Text Alert on phone
With "project update #32," they went through a laundry list of known issues. Although I'm personally experiencing some problems with my Pebble, it was heartening to see all those issues identified as "known problems" for my Pebble/Phone combination.

From a cosmetic standpoint, I've found that wearing the Pebble on the inside of my wrist is most comfortable; I've found other watches to work better that way, too, but there's the real potential for badly scratching the watch face.
Watch "rolls away" on back of wrist.
Inside wrist, face stays in a good place.
The backlight is understated enough that it won't cause comments from others at the movie theater, but plenty bright to make the watch readable in a dark(ened) room. It comes on when pressing buttons as one would expect; it will also come on with the flick of the wrist, a cool feature now that the watch contains an accelerometer (not in the original scope).

Overall, I'm satisfied with the Pebble, and am looking forward to the improvements in the functionality as time goes on.

Wednesday, February 6, 2013

Re-engineering vCenter: a proposal

After fighting my own instances of SSO and vCenter in the vSphere 5.1 management suite, seeing posts from others that have run into the same issues or other new and interesting ones, and generally counseling people to hold off on upgrading to 5.1 because of vCenter issues rather than hypervisor issues, it struck me that I've not seen very many suggestions on how or what to fix.

I'm just as guilty: it's far easier to complain and expect someone else to fix the problem than to wade in and provide solutions.

So I did a bit of thinking, and have a set of ideas for re-engineering vCenter to overcome perceived faults.

At any rate, here we go...

Solution 1: SSO as a "blackbox" appliance.

Single sign-on has probably received the worst press of all the new vCenter bits in vSphere 5.1. By divesting this particular piece of all its Windows- and SQL-compatible nature and being distributed as an appliance, the vCenter team could also focus on adding features that allow the single appliance to be scaled (or at least made highly-available as an intrinsic feature).
Problems solved:

  • Native code. By standardizing on a single appliance OS, the development team could shelve the low-performing Java code—whose only redeeming value is ready portability between Windows and Linux platforms—and write in native code, eschewing interpreted languages. This should have the added bonus of being far more "tunable" for memory utilization, resulting in a svelte little appliance instead of a multi-gigabyte monster.
  • Integral clustering & load balancing. By adding integrated clustering and shared virtual server technology, the addition of a second appliance immediately eliminates SSO as a single point of failure in the vCenter suite. While the current implementation has a degree of support for adding high availability to this most-crucial of services, the lack of official support for many clustering or high-availability technologies for dependencies (eg, database access, client load balancing) is embarrassing.
  • Distributed database. By discarding the use of ODBC-connected databases and falling back on an open-source distributed database (with high levels of data integrity), the appliance can rely on internal database replication & redundancy rather than depending on some other system(s). Single appliances for small implementations are no longer dependent on external systems; multi-node clusters become interwoven, allowing scale-out without any other dependencies, yet behave transparently to systems that rely upon it.

Solution 2: If you're going "blackbox" with SSO, why not vCenter Server, too?

Yes, the vCenter Server Appliance (or VCSA) exists, but in its current iteration, it's limited compared to the Windows Application. Worse, because of a presumed desire to share as much code between the application and the appliance, a large portion—would it be fair to say essentially all of it?—of the server is written in Java. I don't know about you, but while that might serve the goal of making portable code, it certainly isn't what I'd want to use for a performance piece. So the same thing goes here as with SSO:
  • Native code.
  • Integral clustering (say goodbye to vCenter Heartbeat as an independent product)
  • Distributed database (Say goodbye to those MS or Oracle licenses!)

Solution 3: Integrated appliance

If you're going to have SSO and vCenter with the same sort of "black box" packaging, why not combine everything (SSO, Inventory, vCenter, Client, etc.) into a single appliance? We have a degree of that with the VCSA, but without the additional "packaging" suggested above—and it still needs feature-parity with the Windows app. Update Manager should be included, and View Composer could be just another "click to enable" service activated with a license key. When configuring the VCSA, the admin should be able to enable arbitrary services; and if the same service is configured on multiple instances of the VCSA, the admin should have the option of running that service as a member of a cluster instead of as an independent configuration.
Stop with the individual appliances for every little management function: include all of them as a service in every build of VCSA!

No Silver Bullet

These suggestions are no "silver bullet" for the current perceived failings in vCenter, and I'm sure my peers can come up with dozens of reasons why these ideas won't work—not to mention the difficulty of actually producing them in code.
If nothing else, however, I hope it sparks thought in others. Maybe some discussion of how things can be improved—rather than simple complaints of "would'a, could'a, should'a"—can come from it.