Friday, February 28, 2020

Update: maintaining the pi-hole HA pair

In an earlier post, I shared how I got pi-hole working in my environment, thanks to a number of posts on a reddit thread. Since then, I've been living with the setup and tweaking my configuration a bit.

This post documents one of the tweaks that might be useful for others...

If you're using the method documented by Panja0, you know that there's a script in the pi-hole distribution (gravity.sh) that must be edited in order to synchronize files between the nodes of the HA pair. Well, he reminds you in the tutorial that it'll need to be re-edited every time you update pi-hole, or the synchronization won't occur.
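
For context, the edit in question is a single line added just above the last line of gravity.sh, so that the sync script runs whenever gravity rebuilds the block lists. It looks roughly like this, with <gemini user> standing in for whatever account the tutorial has you create:

su -c /usr/local/bin/pihole-gemini - <gemini user>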

As you might guess, I didn't remember that when I updated a week ago, and couldn't understand why my settings weren't getting synchronized. So I went back to the post, reviewed my settings, and face-palmed when I discovered my oversight: I had failed to re-edit gravity.sh.

After making the necessary edits, I realized that even if I had remembered, I'd still need to refer back to the original post to get the right command line, etc., for the edits.

I didn't want to spend the time figuring out how to trigger a script automatically whenever pi-hole updates, but I could certainly write a script that makes the correct edits for me!

I mean... come on: what better use of automation than to use a script to a) check to see if the update has already been performed, and b) if not, perform the update?

#!/bin/bash
# make sure the pihole-gemini sync script is being run by gravity.sh

# the line that the HA tutorial has you insert into gravity.sh
GEMINI='su -c /usr/local/bin/pihole-gemini - <gemini user>'
GRAVITY=/opt/pihole/gravity.sh

# grab the next-to-last line of gravity.sh (where the sync call should already be)
TRIGGER=$(sed -e '$!{h;d;}' -e x "$GRAVITY")
if [ "$TRIGGER" != "$GEMINI" ]
then
        # insert the gemini command line before the last line of the script
        sed -i "$ i$GEMINI" "$GRAVITY"
fi

If you decide to use the script, just be sure to adjust the first two variables to match your installation. You'll also need it on both nodes of your HA pair!

In my setup, I'm saving this script in the /etc/scripts directory, which I'm using for other "keepalived" scripts. I'll remember to run it next time I update pi-hole, and that's all I'll need to recall!
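
With the script in place on both nodes, the "remember to run it" step boils down to something like this after each update (a sketch; check-gravity.sh is just a placeholder for whatever name you saved the script under):

sudo pihole -up                      # update pi-hole as usual
sudo /etc/scripts/check-gravity.sh   # re-insert the sync call if the update wiped it out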

Saturday, February 1, 2020

Putting Pi-hole to work

I've been reading about my friends' use of Pi-hole on their home networks, and I've been curious about trying it to see how well it does. I've resisted doing so, primarily because of the single point of failure a pi-hole system represents: if it's unavailable, you get no DNS.

And we all know, it's never DNS...except when it is.

An alternative, naturally, is to run a pair of systems. Why not? Raspberry Pi devices are relatively cheap, and the software is all no-charge.

For most home users, that might be fine, but I run a lab in my home that also provides services to the household, so I had more permutations to worry about: What happens if my Pi dies? What happens if my domain controllers are unavailable? And so on.

The solution I've settled on is to run a primary Pi-hole server as a VM in my lab environment—which gives me more than enough performance and responsiveness, even under the most demanding of situations—and a secondary with a Raspberry Pi, so that even if the VM environment goes "pear shaped," I still get DNS resolution.

I wanted to accommodate several types of outages without doubling up the configuration work (and risking a missed update and weird results to troubleshoot), while still having pre-configured support for a couple of likely failure and maintenance scenarios. So I've mated the two systems together in a failover cluster: the keepalived daemon handles failover, and some scripting keeps the blocking function in sync between the two systems, while leaving some configuration elements (upstream DNS servers, for one) independent of each other.

I didn't do the "heavy lifting" on the sync and keepalive aspects; those were provided by reddit user Panja0 in this post: https://www.reddit.com/r/pihole/comments/d5056q/tutorial_v2_how_to_run_2_pihole_servers_in_ha/
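
To give a rough idea of the keepalived half of that, here's a minimal sketch of a VRRP instance. The interface name, router ID, address, and password are made up, and this isn't Panja0's exact configuration; the secondary node carries a matching block with state BACKUP and a lower priority:

# /etc/keepalived/keepalived.conf on the primary (sketch; values are examples only)
vrrp_instance PIHOLE {
    state MASTER              # BACKUP on the secondary
    interface eth0            # interface that carries the shared address
    virtual_router_id 55      # must match on both nodes
    priority 150              # use a lower value on the secondary
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme    # same secret on both nodes
    }
    virtual_ipaddress {
        192.168.1.5/24        # the address clients use for DNS
    }
}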

I'm running Ubuntu Server 19.10 (Eoan Ermine... whatever) instead of Raspbian Stretch/Buster, so there have been a number of changes I've had to make to the systems to adapt:

  • To get keepalived installed, I needed libipset11, not libipset3 (mentioned in the comments of the HA tutorial)
  • I had to modify the rsync command arguments in the synchronization script due to differences between the Debian-based release I'm running and the one used in the original post (mentioned in the comments of the HA tutorial)
  • I had to permit my rsync user to skip password re-authentication by editing the sudoers file (see the sketch after this list); I think this may also be a version-specific issue.
  • I added an NTP client to utilize my GPS-based hardware time server; this is super important when using a Raspberry Pi without a real-time clock HAT add-on.
  • The primary system uses the lab's DNS (domain controllers) for its upstream DNS servers. In addition to avoiding the need to configure additional conditional forwarding rules for dnsmasq, this gives the Pi-hole server the identity of the clients via DNS.
  • The secondary uses OpenDNS servers—I have a household account with several filtering options enabled already—with a dnsmasq configuration for conditional forwarding on the domain (see the sketch after this list).
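
The sudoers change mentioned above amounts to a NOPASSWD entry for the sync account. This is only a sketch; the user name and command path are placeholders rather than exactly what the tutorial specifies:

# /etc/sudoers.d/pihole-gemini  (edit with: sudo visudo -f /etc/sudoers.d/pihole-gemini)
# "gemini" and the command path stand in for your sync user and whatever it runs via sudo
gemini ALL=(ALL) NOPASSWD: /usr/local/bin/pihole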
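
And the conditional forwarding on the secondary is just a dnsmasq drop-in that points the internal domain (and, optionally, the reverse zone) at the domain controllers. A sketch with example names and addresses:

# /etc/dnsmasq.d/05-conditional-forward.conf  (domain and addresses are examples only)
server=/lab.example.com/192.168.1.10       # queries for the internal domain go to a domain controller
rev-server=192.168.1.0/24,192.168.1.10     # reverse lookups for the lab subnet go there as well
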
Given my homelab, it was pretty trivial to set this up as a VM, but what really sold me on it was getting the Raspberry Pi running in concert. I originally started with a Pi 3 Model B that I had lying around from an old project I'd abandoned, but the performance difference between the two platforms was so noticeable that going with a true primary/secondary setup made the most sense.

I considered upgrading to the Pi 4, but decided that my desire to avoid purchasing micro-HDMI adapters outweighed the value of the more robust, newer model. I did decide to go ahead and upgrade from the 3 to the 3+, however, when I discovered that my local MicroCenter had them for $34US. I also paired the new unit with a passive-heatsink case, which has allowed the Pi to run significantly cooler (about 30°F) than the original setup, which used aluminium heatsinks and a non-vented plastic case.

Aside from this "vanilla" setup, I also took note of the additional block lists that my friend Tim Smith wrote about in a blog post. I need to let this "bake" for a while before considering it finished, but I'm liking what I'm seeing so far.