Friday, February 28, 2020

Update: maintaining the pi-hole HA pair

In an earlier post, I shared how I got pi-hole working in my environment, thanks to a number of posts on a reddit thread. Since then, I've been living with the setup and tweaking my configuration a bit.

This post documents one of the tweaks that might be useful for others...

If you're using the method documented by Panja0, you know that there's a script in the pi-hole distribution (gravity.sh) that must be edited in order to synchronize files between the nodes of the HA pair. Well, he reminds you in the tutorial that it'll need to be re-edited every time you update pi-hole, or the synchronization won't occur.

As you might guess, I didn't remember when I updated a week ago, and couldn't understand why my settings weren't getting synchronized. So I went back to the post, reviewed my settings, and face-palmed when I discovered my oversight: I had failed to re-edit gravity.sh.

After I made the necessary edits, I realized that, even if I had remembered, I'd still need to refer to the original post to get the right command line for the edits.

I didn't want to spend the time figuring out how to trigger a script automatically whenever pi-hole updates, but I could certainly write a script that makes the correct edits!

I mean... come on: what better use of automation than to use a script to a) check to see if the update has already been performed, and b) if not, perform the update?

#!/bin/bash
# make sure the pihole-gemini script is being run by gravity.sh

# the trigger line that Panja0's tutorial has you add to gravity.sh;
# replace <gemini user> with the account you created for the sync
GEMINI='su -c /usr/local/bin/pihole-gemini - <gemini user>'
GRAVITY=/opt/pihole/gravity.sh

# grab the next-to-last line of gravity.sh (where the trigger should live)
TRIGGER=$(sed -e '$!{h;d;}' -e x "$GRAVITY")
if [ "$TRIGGER" != "$GEMINI" ]
then
        # insert the gemini commandline before the last line of the script
        sed -i "$ i$GEMINI" "$GRAVITY"
fi

If you decide to use the script, just make sure you modify the first two variables to match your installation. You also need it on both nodes of your HA pair!

In my setup, I'm saving this script in the /etc/scripts directory, which I'm using for other "keepalived" scripts. I'll remember to run it next time I update pi-hole, and that's all I'll need to recall!
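For my future self, the whole post-update routine now looks something like this (I haven't named the script above, so fix-gravity.sh below is just a stand-in for whatever you called your copy):

pihole -up                              # update pi-hole as usual
sudo /etc/scripts/fix-gravity.sh        # re-insert the pihole-gemini trigger if the update wiped it
# ...then repeat both steps on the other node of the HA pair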

Saturday, February 1, 2020

Putting Pi-hole to work

I've been reading about my friends' use of Pi-hole on their home networks, and I've been curious about trying it to see how well it does. I've resisted doing so, primarily because of the single point of failure a pi-hole system represents: if it's unavailable, you get no DNS.

And we all know, it's never DNS...except when it is.

An alternative, naturally, is to run a pair of systems. Why not? Raspberry Pi devices are relatively cheap, and the software is all no-charge.

For most home users, that might be fine, but I run a lab in my home that also provides services to the household, so I had more permutations to worry about: What happens if my Pi dies? What happens if my domain controllers are unavailable? And so on.

The solution I've settled on is to run a primary Pi-hole server as a VM in my lab environment—which gives me more than enough performance and responsiveness, even under the most demanding of situations—and a secondary with a Raspberry Pi, so that even if the VM environment goes "pear shaped," I still get DNS resolution.

To cover a few likely failure and maintenance scenarios without doubling up the configuration work (and risking a missed update with weird results to troubleshoot), I've mated the two systems together as a failover pair: the keepalived daemon handles failover, and some scripting keeps the blocking configuration in sync between the nodes, while a few configuration elements (upstream DNS servers, for one) remain independent on each node.

I didn't do the "heavy lifting" on the sync and keepalive aspects; those were provided by reddit user Panja0 in this post: https://www.reddit.com/r/pihole/comments/d5056q/tutorial_v2_how_to_run_2_pihole_servers_in_ha/
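For context, the failover piece is ordinary VRRP via keepalived. The snippet below is only a minimal sketch of that kind of configuration, not Panja0's exact file or my production values; the interface name, router ID, priorities, and virtual IP are all placeholders:

vrrp_instance PIHOLE {
    state MASTER              # BACKUP on the secondary node
    interface eth0            # the NIC name on that node
    virtual_router_id 55      # any value 1-255, identical on both nodes
    priority 150              # lower (e.g. 100) on the secondary
    advert_int 1
    virtual_ipaddress {
        192.168.1.5/24        # the VIP that clients use for DNS
    }
}

Clients point at the virtual IP, so whichever node currently holds it answers DNS; if the primary goes away, the secondary picks up the address within a couple of seconds.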

I'm running Ubuntu Server 19.10 (Eoan Ermine... whatever) instead of Raspbian Stretch/Buster, so I've had to make a number of changes to adapt:

  • To get keepalived installed, I needed libipset11, not libipset3 (mentioned in the comments of the HA tutorial)
  • I had to modify the rsync command arguments in the synchronization script due to changes between Debian versions that I'm running versus the original post (mentioned in the comments of the HA tutorial)
  • I had to permit my rsync user to skip password re-auth by editing the sudoers file; I think this may also be a version-specific issue. (This one, and the conditional-forwarding bullet below, boil down to one-liners; see the sketch after this list.)
  • I added an NTP client to utilize my GPS-based hardware time server; this is super important when using a Raspberry Pi without a real-time clock HAT add-on.
  • The primary system uses the lab's DNS (domain controllers) for its upstream DNS servers. In addition to avoiding the need to configure additional conditional forwarding rules for dnsmasq, this lets the Pi-hole server identify its clients by name via DNS.
  • The secondary uses OpenDNS servers—I have a household account with several filtering options enabled already—with a dnsmasq configuration for conditional forwarding on the domain.
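To make those one-liners concrete, here's roughly the shape they take on my nodes; the file names, user name, domain, and addresses below are placeholders rather than my real values:

# /etc/sudoers.d/pihole-gemini: let the sync user run rsync without a password prompt
gemini ALL=(ALL) NOPASSWD: /usr/bin/rsync

# /etc/dnsmasq.d/02-conditional-forward.conf on the secondary: send lab-domain
# lookups to the domain controllers instead of the OpenDNS upstreams
server=/lab.example.com/192.168.1.20
server=/lab.example.com/192.168.1.21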
Given my homelab, it was pretty trivial to set this up as a VM, but what really sold me on the approach was getting the Raspberry Pi running in concert with it. I originally started with a Pi 3 Model B that I had lying around from an old project I'd abandoned, but the performance difference between the two platforms was so noticeable that going with a true primary/secondary setup made the most sense. I considered upgrading to the Pi 4, but decided that my desire to avoid purchasing micro-HDMI adapters outweighed the value in the more robust, newer model. I did decide to go ahead and upgrade from the 3 to the 3+, however, when I discovered that my local MicroCenter had them for $34US; I also paired the new unit with a passive heatsink case, which has allowed the Pi to run significantly cooler (about 30°F) than the original setup with its aluminium heatsinks and non-vented plastic case.

Aside from this "vanilla" setup, I also took note of the additional block lists that my friend Tim Smith wrote about in a blog post. I need to let this "bake" for a while before considering it finished, but I'm liking what I'm seeing so far.

Thursday, September 19, 2019

New VM cleanup

When creating a new VM in vSphere, you get a number of virtual devices & settings by default that you probably don't have any interest in keeping:

  • Floppy drive (depending on version & type of client in use)
  • Floppy adapter
  • IDE ports
  • Serial ports
  • Parallel ports
Given that some of these are redundant (why keep the IDE adapter when you're using SATA for the optical device?) while others are polled I/O in Windows (the OS must keep checking for activity on the port, even if there will never be any), things are more streamlined if you clean up these settings when creating a new VM... then use the cleaned-up VM as a template for creating new VMs later on.

Step 1: create a new VM
Step 2: Set VM name and select a location
Step 3: Select a compute resource
Step 4: Select storage
Step 5: Set compatibility no higher than your oldest version of ESXi that the template could be deployed on.
Step 6: Select the guest OS you'll install
Step 7a: Customize hardware: CPU, Memory, Hard Drive
Step 7b: Attach NIC to a general-purpose or remediation network port
Step 7c: Don't forget to change the NIC type! If you don't, the only way to change it later is to remove & re-add the correct type, which will also change the MAC address and, depending on the order of your modifications, could put the new virtual NIC into a different virtual PCIe slot on the VM hardware, upsetting other configurations in the guest (like static IP addresses).
Step 7d: Jump to the Options tab and set "Force BIOS setup" (see the note after these steps for the equivalent advanced setting)
Step 8: Finish creating the VM
Step 9: Open remote console for VM
Step 10: Power on the VM. It should pause at the BIOS editor screen.
Step 11: On the Advanced page, set Local Bus IDE to "Disabled" if using SATA; set it to "Secondary" if using IDE CD-ROM (Even better: Change the CD-ROM device to IDE 0:0 and set it to "Primary").
Step 12: Descend into the "I/O Device Configuration" sub-page; by default, it'll look like the screenshot below:
Step 13: Using the arrow keys & space bar, set each device to "Disabled", then [Esc] to return to the Advanced menu.
Step 14: Switch to the Boot page. By default, removable devices are first in the boot order.
Step 15: Use the minus [-] key to lower the priority of removable devices. This won't hurt the initial OS setup, even on setup ISOs that normally require a key-press to boot off optical/ISO media: the new VM's hard drive has no partition table or MBR yet, so it gets skipped as a boot device even though it's first and the VM falls through to the ISO. Once the OS is installed, you'll never have to worry about removable media causing a reboot to stall.
Step 16: Press [F10] to save the BIOS config, then use the console to attach to an ISO (local or on a datastore) before exiting the BIOS setup page.


Step 17: Install the guest OS, then add VMware Tools. Perform any additional customization—e.g., patching, updates, and generalization—then convert the new VM to a template.
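A side note on Step 7d for anyone who scripts their VM builds instead of clicking through the wizard: as far as I can tell, the "Force BIOS setup" checkbox corresponds to this advanced VM setting (a .vmx/ExtraConfig entry), which forces the BIOS screen on the next power-on only and then clears itself:

bios.forceSetupOnce = "TRUE"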

You're set! No more useless devices in your guest that take cycles from the OS or hypervisor.

Additional Note on modifying existing VMs:
Aside from the need to power down existing VMs that you might want to clean up with this same procedure, the only issue I've run into after doing the device + BIOS cleanup is making sure I get the right combination of IDE channels & IDE CD-ROM attachment. The number of times I've set "Primary" in the BIOS but forgotten to change the CD-ROM to IDE 0:0 is ... significant.

Additional Note on Floppy Drives:
Floppy drive handling is a special case, and will very much depend on which version of vSphere—and therefore, the management client—you're using. If you have the "Flex" client (or are still using v6.0 and have the C# client), the new VM will have a floppy disk device added by default. Naturally, you want to remove it as part of your Hardware Customization step during new VM deployment.
If you're happily using the HTML5 Web Client, you may find that the floppy is neither present nor manageable (for adding/removing or attaching media)... This is the 0.1% of feature parity that I still find lacking in the H5 client. Hopefully, it'll get added, if for no better reason than to allow an admin to remove floppy devices that are still part of VMs created in older versions.
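In the meantime, if you're stuck with a leftover floppy (or serial/parallel port) on an older VM that the H5 client won't touch, a CLI can still remove it from a powered-off VM. This is just a rough sketch using govc; it assumes govc is already pointed at your vCenter via the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables, "old-vm" stands in for your VM's name, and the device names match whatever device.ls reports (they vary by VM):

# list the VM's virtual devices to find the exact device names
govc device.ls -vm old-vm

# remove the leftovers by name (typical names shown; use what device.ls printed)
govc device.remove -vm old-vm floppy-8000
govc device.remove -vm old-vm serialport-9000

PowerCLI can do the same job if that's already in your toolbox.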