Ad-Blocking With PI-Hole
I had Pi-Hole running on a standalone Raspberry PI, but wanted to move this to my Kubernetes cluster. Digging around, I found a useful article on how to add PI-Hole to Kubernetes, which not only talked about using PI-Hole, but also about running redundant instances and keeping them in sync. It used MetalLB, an ingress, and cert-manager for Let’s Encrypt certificates – something I was interested in.
There was another article based on using Helm, with some monitoring set up. I may try that someday.
A Few Things
First, as expected, the article used an older version of PI-Hole (2022.12.1). I tried the latest version (at this time, 2024.05.0), but the pods were stuck in crash loops. What I found out was that for liveness/readiness, the YAML specified an HTTP GET at the root of the lighttpd web server. With the 2023.02.1 pihole image this worked, but with 2023.02.2 it failed.
Trying curl http://127.0.0.1/ inside the pod showed a 403 Forbidden error. If I tried to access http://127.0.0.1/admin, I’d get a 301 Moved Permanently with a ‘/admin/’ path. If I did http://127.0.0.1/admin/, I’d get a 302 Found response with path ‘login.php’. When I did http://127.0.0.1/admin/login.php, I’d get a 200 OK result with content.
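For reference, the same checks can be reproduced from outside the pod with kubectl exec (this assumes a pod named pihole-0 in the pihole namespace, and that curl is available in the image, which it was for me):

# -sI fetches only the response headers, so the status code is easy to see
kubectl exec -n pihole pihole-0 -- curl -sI http://127.0.0.1/                  # 403 Forbidden
kubectl exec -n pihole pihole-0 -- curl -sI http://127.0.0.1/admin             # 301 Moved Permanently (/admin/)
kubectl exec -n pihole pihole-0 -- curl -sI http://127.0.0.1/admin/            # 302 Found (login.php)
kubectl exec -n pihole pihole-0 -- curl -sI http://127.0.0.1/admin/login.php   # 200 OK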
So, I changed the liveness and readiness probe configuration to use a path of ‘/admin/login.php’, and then the pods came up successfully.
Second, for the PI-Hole admin web pages, I chose to use a service type of LoadBalancer (instead of ClusterIP and then setting up an ingress). Accessing it locally is fine, as I just use the IP assigned by the load balancer. The article talks about setting up a certificate using Let’s Encrypt to be able to access it remotely.
I already have a domain name, and I’m using Dynamic DNS to redirect that domain to my router’s WAN IP. But, I’m currently port forwarding external HTTP/HTTPS traffic to my standalone Raspberry PI for a music server that uses Let’s Encrypt for certificates.
For now, I think I’ll just access my PI-Hole admin page locally. I will, however, have to figure out how to set up Let’s Encrypt once I move my music server and other web apps to the Kubernetes cluster, so it will be useful to keep this info in mind.
Setting Up PI-Hole
I’m doing the same thing as the article, running three replicas of the PI-Hole pods, and I altered the liveness/readiness check. Here is my manifest.yaml in pieces:
apiVersion: v1
kind: Namespace
metadata:
  name: pihole
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pihole-configmap
  namespace: pihole
data:
  TZ: "America/New_York"
  PIHOLE_DNS_: "208.67.220.220;208.67.222.222"
This sets up a namespace for PI-Hole, and defines the timezone and the upstream DNS servers that I wanted to use (OpenDNS). You can customize these as desired.
---
apiVersion: v1
kind: Secret
metadata:
  name: pihole-password
  namespace: pihole
type: Opaque
data:
  WEBPASSWORD: <PUT_BASE64_PASSWORD_HERE> # Base64 encoded
This is the password that will be used when logging into the PI-Hole admin page. You should encode it using "echo -n 'MY PASSWORD' | base64" and place the encoded string in the WEBPASSWORD attribute.
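For example, with a hypothetical password of ‘MY PASSWORD’, generating the value looks like this:

# Base64-encode the admin password (-n avoids including a trailing newline)
echo -n 'MY PASSWORD' | base64
# TVkgUEFTU1dPUkQ=   <- this string goes in the WEBPASSWORD field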
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pihole
  namespace: pihole
spec:
  selector:
    matchLabels:
      app: pihole
  serviceName: pihole
  replicas: 3
  template:
    metadata:
      labels:
        app: pihole
    spec:
      containers:
        - name: pihole
          image: pihole/pihole:2024.05.0
          envFrom:
            - configMapRef:
                name: pihole-configmap
            - secretRef:
                name: pihole-password
          ports:
            - name: svc-80-tcp-web
              containerPort: 80
              protocol: TCP
            - name: svc-53-udp-dns
              containerPort: 53
              protocol: UDP
            - name: svc-53-tcp-dns
              containerPort: 53
              protocol: TCP
          livenessProbe:
            httpGet:
              port: svc-80-tcp-web
              path: /admin/login.php
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              port: svc-80-tcp-web
              path: /admin/login.php
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            failureThreshold: 10
          volumeMounts:
            - name: pihole-etc-pihole
              mountPath: /etc/pihole
            - name: pihole-etc-dnsmasq
              mountPath: /etc/dnsmasq.d
  volumeClaimTemplates:
    - metadata:
        name: pihole-etc-pihole
        namespace: pihole
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 3Gi
    - metadata:
        name: pihole-etc-dnsmasq
        namespace: pihole
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 3Gi
This is the StatefulSet that will create the three replicas of the PI-Hole pods. I’m using the latest version at this time (2024.05.0), have modified the liveness/readiness checks as mentioned above, and am using PVs (Longhorn) for storing configuration.
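Once the pods are up, a quick way to confirm the volumes were provisioned (assuming Longhorn is your default StorageClass, as it is in my cluster) is:

kubectl get pvc -n pihole          # two PVCs per pod, all should show as Bound
kubectl get pv | grep pihole       # the backing Longhorn volumes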
---
apiVersion: v1
kind: Service
metadata:
  name: pihole
  namespace: pihole
  labels:
    app: pihole
spec:
  clusterIP: None
  selector:
    app: pihole
---
kind: Service
apiVersion: v1
metadata:
  name: pihole-web-svc
  namespace: pihole
spec:
  selector:
    app: pihole
    statefulset.kubernetes.io/pod-name: pihole-0
  type: LoadBalancer
  ports:
    - name: svc-80-tcp-web
      port: 80
      targetPort: 80
      protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: pihole-dns-udp-svc
  namespace: pihole
  annotations:
    metallb.universe.tf/allow-shared-ip: "pihole"
spec:
  selector:
    app: pihole
  type: LoadBalancer
  ports:
    - name: svc-53-udp-dns
      port: 53
      targetPort: 53
      protocol: UDP
---
kind: Service
apiVersion: v1
metadata:
  name: pihole-dns-tcp-svc
  namespace: pihole
  annotations:
    metallb.universe.tf/allow-shared-ip: "pihole"
spec:
  selector:
    app: pihole
  type: LoadBalancer
  ports:
    - name: svc-53-tcp-dns
      port: 53
      targetPort: 53
      protocol: TCP
These are the services for the UI and for DNS. Of note, we are using the same load balancer IP for the TCP and UDP DNS services (that is what the MetalLB allow-shared-ip annotation is for). I used a LoadBalancer for the web UI as well (instead of using ClusterIP and setting up an ingress – maybe that will bite me later).
With this manifest, you can “kubectl apply -f manifest.yaml” and then look for all three of the pods to start up. You should be able to run nslookup/dig commands using the IP of the DNS service as the server to verify that DNS is working, and you can browse to the IP of the pihole-web-svc service with a path of /admin/ (e.g. http://10.11.12.203/admin/). Use the password you defined in the manifest to log in and see the ad blocker in operation.
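A minimal check might look like the following (the DNS service IP shown is just an example; use whatever EXTERNAL-IP values MetalLB hands out in your cluster):

kubectl apply -f manifest.yaml
kubectl get pods -n pihole     # wait for pihole-0, pihole-1, pihole-2 to be Running
kubectl get svc -n pihole      # note the EXTERNAL-IP of each service

# Resolve a name through the DNS service (example IP shown)
dig @10.11.12.202 example.com +short

# Then open http://<pihole-web-svc EXTERNAL-IP>/admin/ in a browser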
Keeping The PI-Hole Pods In Sync
As mentioned in the article, we have three PI-Hole pods (one primary, two secondary), but need to keep their databases in sync. To do this, Orbital Sync is used to back up the primary pod’s database and restore it to the secondary pods’ databases. Here is the orbital-sync.yaml manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  name: orbital-sync-config
  namespace: pihole
data:
  PRIMARY_HOST_BASE_URL: "http://pihole-0.pihole.pihole.svc.cluster.local"
  SECONDARY_HOST_1_BASE_URL: "http://pihole-1.pihole.pihole.svc.cluster.local"
  SECONDARY_HOST_2_BASE_URL: "http://pihole-2.pihole.pihole.svc.cluster.local"
  INTERVAL_MINUTES: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orbital-sync
  namespace: pihole
spec:
  selector:
    matchLabels:
      app: orbital-sync
  template:
    metadata:
      labels:
        app: orbital-sync
    spec:
      containers:
        - name: orbital-sync
          image: mattwebbio/orbital-sync:latest
          envFrom:
            - configMapRef:
                name: orbital-sync-config
          env:
            - name: "PRIMARY_HOST_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: pihole-password
                  key: WEBPASSWORD
            - name: "SECONDARY_HOST_1_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: pihole-password
                  key: WEBPASSWORD
            - name: "SECONDARY_HOST_2_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: pihole-password
                  key: WEBPASSWORD
It runs every minute, and uses the secret that was created with the password to access the PI-Holes. You can look at the Orbital Sync pod log to see that it is backing up and restoring the database among the PI-Holes.
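To apply it and watch the sync activity (the exact pod name suffix will differ in your cluster):

kubectl apply -f orbital-sync.yaml
kubectl get pods -n pihole -l app=orbital-sync
kubectl logs -n pihole -l app=orbital-sync -f    # shows the periodic backup/restore cycle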
Finishing Touches
Under the UI’s Local DNS entries section, I manually entered the hostname (with a .home suffix) and IP address for each of my devices on the local network, so that I can access them by “name.home”.
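If you want to confirm those entries outside the UI, Pi-hole keeps its local DNS records in /etc/pihole/custom.list (one “IP hostname” pair per line), so something like this on the primary pod should show them:

kubectl exec -n pihole pihole-0 -- cat /etc/pihole/custom.list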
I did not set up DHCP on PI-Hole, as I’m using my router’s DHCP configuration.
To use the PI-Hole as the DNS server for all systems on your network, you can specify the IP of the PI-Hole on each host as the only DNS server. If you specify more than one DNS server, then depending on your OS, it may use the other server(s) at times and bypass the ad blocking.
For me, I have all my hosts using the router as the primary DNS server. The router is configured to use the PI-Hole as the primary server, and then a public server as the secondary server. Normally, requests would always go to the Pi-Hole, unless for some reason it was down. This was advantageous for two reasons. First, when I had my standalone PI-Hole, if it crashed, there was still DNS resolution. Second, it made it easy to switch from the standalone PI-Hole to the Kubernetes one, by just changing the router configuration.
The only odd thing with this setup is that when I use my laptop away from the network, my router’s IP is (obviously) not available. I’ve been getting around this by using the “Location” feature of macOS, setting up a “Home” location that uses my router’s IP for DNS and a “Roaming” location that uses a public DNS server.
I guess I could set things up so that the DNS ports on my domain name (which points to my router using Dynamic DNS) port forward to the PI-Hole IP, but I didn’t want to expose that to the Internet.