And we all know, it's never DNS...except when it is.
An alternative, naturally, is to run a pair of systems. Why not? Raspberry Pi devices are relatively cheap, and the software is all no-charge.
For most home users, that might be fine, but I run a lab in my home that also provides services to the household, so I had more permutations to worry about: What happens if my Pi dies? What happens if my domain controllers are unavailable? And so on.
The solution I've settled on is to run the primary Pi-hole server as a VM in my lab environment, which gives me more than enough performance and responsiveness, even under the most demanding of situations, and the secondary on a Raspberry Pi, so that even if the VM environment goes "pear-shaped," I still get DNS resolution.
To accommodate several types of outages without doubling up the configuration work (and risking a missed update that leads to weird results to troubleshoot), I've mated the two systems together in a failover cluster: the keepalived daemon handles the virtual IP handoff, and some scripting keeps the blocking function in sync between the two, while a few configuration elements (upstream DNS servers, for one) remain independent on each system. This arrangement also gives me pre-configured support for a couple of likely failure and maintenance scenarios.
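For anyone unfamiliar with keepalived, the failover piece boils down to a VRRP instance on each node that fights over a shared virtual IP; clients point at the virtual IP, and whichever node holds it answers DNS. A minimal sketch of the primary's `/etc/keepalived/keepalived.conf` might look like the following (the interface name, router ID, password, and addresses here are illustrative placeholders, not my actual values):

```
vrrp_instance PIHOLE_VIP {
    state MASTER            # the secondary uses BACKUP
    interface eth0          # adjust to your NIC name
    virtual_router_id 55    # must match on both nodes
    priority 150            # secondary gets a lower value, e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # must match on both nodes
    }
    virtual_ipaddress {
        192.168.1.5/24      # the shared "DNS server" address clients use
    }
}
```

The secondary's file is nearly identical, with `state BACKUP` and a lower `priority`; when the master stops advertising, the backup claims the virtual IP within a few seconds.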
I didn't do the "heavy lifting" on the sync and keepalived aspects; those were provided by Reddit user Panja0 in this post: https://www.reddit.com/r/pihole/comments/d5056q/tutorial_v2_how_to_run_2_pihole_servers_in_ha/
I'm running Ubuntu Server 19.10 (Eoan Ermine... whatever) instead of Raspbian Stretch/Buster, so there were a number of changes I had to make to adapt:
- To get keepalived installed, I needed libipset11, not libipset3 (mentioned in the comments of the HA tutorial)
- I had to modify the rsync command arguments in the synchronization script due to changes between Debian versions that I'm running versus the original post (mentioned in the comments of the HA tutorial)
- I had to permit my rsync user to skip password re-auth by editing the sudoers file; I think this may also be a version-specific issue.
- I added an NTP client to utilize my GPS-based hardware time server; this is super important when using a Raspberry Pi without a real-time clock HAT add-on.
- The primary system uses the lab's DNS (domain controllers) for its upstream DNS servers. In addition to avoiding the need to configure additional conditional forwarding rules for dnsmasq, this gives the Pi-hole server the identity of the clients via DNS.
- The secondary uses OpenDNS servers—I have a household account with several filtering options enabled already—with a dnsmasq configuration for conditional forwarding on the domain.
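The conditional forwarding on the secondary amounts to telling dnsmasq to send queries for the lab domain (and its reverse zone) to the domain controllers instead of OpenDNS. A sketch of a drop-in file such as `/etc/dnsmasq.d/05-forwarding.conf` could look like this, where the domain name and domain controller addresses are placeholders for illustration:

```
# Forward lookups for the internal AD domain to the domain controllers
server=/lab.example.com/192.168.1.10
server=/lab.example.com/192.168.1.11

# Reverse lookups for the local subnet go there as well
server=/1.168.192.in-addr.arpa/192.168.1.10
```

Everything else falls through to the upstream resolvers configured in the Pi-hole admin UI.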
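On the sudoers tweak for the rsync user: the sync script runs rsync with elevated privileges on the target, so the account doing the sync needs to invoke that one command without a password prompt. A hedged example, assuming a dedicated user named `pihole-sync` (the username and binary path are placeholders; verify the rsync path on your system with `which rsync` and edit via `visudo -f` rather than directly):

```
# /etc/sudoers.d/pihole-sync
pihole-sync ALL=(ALL) NOPASSWD: /usr/bin/rsync
```

Scoping the `NOPASSWD` grant to the single rsync binary, rather than `ALL`, keeps the blast radius small if that account is ever compromised.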
Given my homelab, it was pretty trivial to set this up as a VM, but what really sold me was getting the Raspberry Pi running in concert. I originally started with a Pi 3 Model B that I had lying around from an old project I'd abandoned, but the performance difference between the two platforms was so noticeable that going with a true primary/secondary setup made the most sense. I considered upgrading to the Pi 4, but decided that my desire to avoid purchasing micro-HDMI adapters outweighed the value of the more robust, newer model. I did decide to go ahead and upgrade from the 3 to the 3+, however, when I discovered that my local Micro Center had them for $34 US; I also paired the new unit with a passive-heatsink case, which has allowed the Pi to run significantly cooler (about 30°F) than the original setup, which used aluminium heatsinks and a non-vented plastic case.