Media Server In Kubernetes
Another one of my apps running on a standalone Raspberry Pi 4 (in a Docker container) is the Emby media server. I ripped all my CDs to FLAC files and have been serving them up with Emby, so that I can play them on my laptop, phone, Sonos speaker, and other DLNA devices. All the music is on a NAS box, which I had mounted on the Raspberry Pi.
Now that I have a Kubernetes cluster of Pis, I wanted to move the Emby server there, and this looked like a good exercise in how to take a Docker container and run it on Kubernetes. There were some challenges, which made this harder than expected. Let’s go through the process though…
Migrating the Docker Container
After searching around, I found that a common way to migrate from Docker to Kubernetes is to use Kompose. I had this docker-compose file for Emby:
version: "2.3"
services:
  emby:
    image: emby/embyserver_arm64v8:latest
    container_name: emby
    environment:
      - PUID=1000
      - PGID=1003
      - TZ=America/New_York
    volumes:
      - /var/lib/docker/volumes/emby/_data:/config
      - /mnt/music:/Music
    network_mode: host
    # ports:
    #   - 8096:8096
    #   - 8920:8920
    restart: unless-stopped
There are a couple of things of note here. First, I set the UID to the same ID used on the NAS box for the FLAC files, and the GID to the one used on the NAS box, so that family members had access to the files as well. Second, I mapped the config location to the host, and the music area was an NFS mount to the NAS box.
Lastly, I was using host mode networking, which was needed so that the multicast DLNA packets (M-SEARCH and NOTIFY) would be seen by the container. This allowed Emby to “see” the DLNA devices on my local network. This proved to be a difficult thing to set up under Kubernetes.
I ran the Kompose convert command and it generated a deployment, service, and some PVC definitions. Of course, I ran this on my Mac, where the mount points and config area did not exist, so there were warnings and things were not set up as desired. But it was useful, as it gave me an idea of how I wanted to define things.
I created a single manifest that incorporated some of what Kompose generated, sprinkled with settings I wanted, and set up the volumes to use NFS instead of PVCs with Kubernetes storage. Here’s what I came up with (shown in parts), placed into ~/workspace/kubernetes/emby/k8s-emby.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: emby
---
I wanted all the music server stuff in a separate namespace.
apiVersion: v1
kind: Service
metadata:
  name: emby-service
  namespace: emby
spec:
  type: LoadBalancer
  selector:
    app: emby
  ports:
    - name: http
      port: 8096
      targetPort: 8096
      protocol: TCP
    - name: https
      port: 8920
      targetPort: 8920
      protocol: TCP
---
A service is defined, with type LoadBalancer, so that I can access the media service with a well-known IP. I used the defaults that Emby suggested for HTTP and HTTPS access to the server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emby
  namespace: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        app: emby
    spec:
      containers:
        - name: emby
          image: emby/embyserver_arm64v8:latest
          env:
            - name: UID
              value: "1000"
            - name: GID
              value: "1003"
            - name: GIDLIST
              value: "1003"
            - name: TZ
              value: "America/New_York"
          ports:
            - containerPort: 8096
              protocol: TCP
            - containerPort: 8920
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /config
            - name: music
              mountPath: /Music
      restartPolicy: Always
      volumes:
        - name: config
          nfs:
            server: IP_OF_MY_NAS
            path: /music/config
        - name: music
          nfs:
            server: IP_OF_MY_NAS
            path: /music/music
For the deployment, I used the same namespace and a single replica, which will create one pod with the latest ARM64 version of Emby (4.8.8.0 at the time; I could have pinned to a specific version and then updated as desired, by checking hub.docker.com). The environment settings for UID, GID, GIDLIST, and TZ are passed in to the pod; I looked at the Docker version of the latest Emby and saw that it uses some different settings than my (much older) version. The HTTP and HTTPS ports for Emby are defined and match the service.
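If you do want to pin, the change is just the image tag; for example (4.8.8.0 was the current version at the time of writing, so verify the available tags on hub.docker.com):

```yaml
# pin a specific version instead of tracking :latest
image: emby/embyserver_arm64v8:4.8.8.0
```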
Lastly, I defined a volume for config settings and another for the music repo, and mapped those to the IP and share locations of my NAS. The NAS already had a share called “music”, with a directory called “music”, containing directories for all of the artists, which in turn contain directories for each artist’s albums, and then FLAC files for the album’s songs. I created a directory in the music share, called “config”, to hold the configuration settings.
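Putting that layout in a picture (artist, album, and track names are illustrative):

```
/music                  # the NFS share on the NAS
├── config/             # Emby configuration (mounted at /config)
└── music/              # library root (mounted at /Music)
    └── Artist Name/
        └── Album Name/
            ├── 01 - First Song.flac
            └── 02 - Second Song.flac
```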
With this setup, we are ready to apply the manifest and configure the Emby server…
Emby Startup
After doing “kubectl apply -f k8s-emby.yaml”, I could see that there was one pod running and a service with an IP from my load balancer pool. From a browser, I navigated to http://<emby-service-ip>:8096/ and could see the Emby setup wizard. I picked the language (“English”), and created my Emby user and password.
On the “Setup Media Libraries” page, I clicked the button for “New Library”, selected the type “Music”, gave it a name “Music”, and then clicked on the folder “Add” button, selected the “/Music” directory that maps to the NFS share, and clicked “OK”. Lastly, under “Music Folder Structure”, I picked the item “Perfectly ordered into artist/album folders, with tracks directly in the album folders”, and pressed the “OK” button.
You can click on the Advanced selector at the top right of the page and then choose some other options, if desired.
On the next screens, I skipped the metadata language info (as it was fine), kept the default port mapping selection, accepted the terms of use, and clicked on the finished button.
At this point, I could click on “Manual Login” and log in with the credentials I set up. Under the settings (gear at top right of the screen), I adjusted a few more things.
Under “Network”, I set “LAN Networks” to the CIDR for my local network. Under “DLNA”, I checked the “Enable DLNA Server” box and chose my user under the “Default User” entry. Under “Plugins”, I clicked the “Catalog” button and, under General, installed the Sonos plugin.
With these changes, I clicked on the “Dashboard” button, clicked on the power button icon at the top, and selected to restart the Emby server to apply all the changes.
Partial success…
As-is, I can now access the URL (port 8096) from a web browser on my Mac or phone, and select and play music. However, the “Play On” menu (square box at the top right of each page) only offers the web browser I’m using. I cannot see my Sonos speaker, my receiver that has DLNA support, or any other DLNA devices.
I found out that the issue is with how DLNA works. From my basic understanding, the DLNA server multicasts M-SEARCH UDP packets to 239.255.255.250, using port 1900, and DLNA devices multicast NOTIFY UDP packets to the same address and port. When I was using a Docker container, the container used host networking, and thus had the same IP as the host, which is on my local network.
With Kubernetes, the Emby pod is running on the pod network (10.233.0.0/18), whereas all the DLNA devices are on the local network, and these multicast packets will not traverse subnets.
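To make the discovery traffic concrete, here is a sketch of the SSDP M-SEARCH message that gets multicast to 239.255.255.250:1900 (the header values are the standard SSDP ones; this just builds the text so you can see what crosses the wire, it does not open a socket):

```shell
# Build the SSDP discovery request a DLNA server multicasts to find
# devices. Headers per the SSDP/UPnP convention; lines end in CRLF.
ssdp_msearch() {
  printf 'M-SEARCH * HTTP/1.1\r\n'
  printf 'HOST: 239.255.255.250:1900\r\n'
  printf 'MAN: "ssdp:discover"\r\n'
  printf 'MX: 2\r\n'          # devices may wait up to MX seconds to reply
  printf 'ST: ssdp:all\r\n'   # search target: all devices and services
  printf '\r\n'               # blank line terminates the request
}
ssdp_msearch
```

Because these packets never leave the 10.233.0.0/18 pod network, none of the LAN devices ever see them.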
I tried one solution: adding “hostNetwork: true” to the deployment’s template spec. Now the pod is on the local network, DLNA multicasts are seen, and the Emby server can play on devices like my Sonos. The problem is that the pod has the same IP as the node it was deployed on. This makes it hard to use, as the pod could be re-deployed on another node. Yes, I could force it to one node, but if that node failed, I’d lose the Emby server.
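For reference, that experiment was just one extra line in the deployment’s pod spec (shown here as a fragment of the deployment above):

```yaml
spec:
  template:
    spec:
      hostNetwork: true   # pod shares the node's network namespace and IP
      containers:
        # ... unchanged ...
```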
Houston We Have Lift-Off!
I found that I can set up two interfaces on the pod by using Multus. The plan is to create a second interface on the pod that is on the local network, so that it can send/receive the DLNA multicasts and communicate with the DLNA devices. This requires several steps…
First, we need to install Multus. Fortunately, Multus works well with Calico, which I’m using on my cluster. Unfortunately, I could not use the “quick start” install methods for Multus on my arm64 Raspberry Pi hardware. To get this installed, I first pulled the Multus repo:
git clone https://github.com/k8snetworkplumbingwg/multus-cni.git
cd multus-cni/deployments
I used multus-daemonset.yml to create a daemonset that installs Multus on each node. However, the two image: lines need to be changed, as “ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot” is not built for the arm64 platform. I think they have some multi-platform support, maybe with annotations, but I didn’t know how to set that up. So, I went to the GitHub Container Registry for Multus, clicked on the “OS/Arch” tab, selected the image for arm64, and noted the version. In multus-daemonset.yml, I changed the image reference:
diff --git a/deployments/multus-daemonset.yml b/deployments/multus-daemonset.yml
index 40fa5193..fa8bde5c 100644
--- a/deployments/multus-daemonset.yml
+++ b/deployments/multus-daemonset.yml
@@ -179,7 +179,7 @@ spec:
serviceAccountName: multus
containers:
- name: kube-multus
- image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot
+ image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-debug@sha256:351652b583600b0d0d704269882fd2fa53395c5ce4602a76a2799960b2c06dce
command: ["/thin_entrypoint"]
args:
- "--multus-conf-file=auto"
@@ -204,7 +204,7 @@ spec:
mountPath: /tmp/multus-conf
initContainers:
- name: install-multus-binary
- image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot
+ image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-debug@sha256:351652b583600b0d0d704269882fd2fa53395c5ce4602a76a2799960b2c06dce
command: ["/install_multus"]
args:
- "--type"
Now, I can “kubectl apply -f multus-daemonset.yml” to install the daemonset. Once done, I checked that the pods were running on each node:
kubectl get pods --all-namespaces | grep -i multus
kube-system   kube-multus-ds-4btnp   1/1   Running   0   20h
kube-system   kube-multus-ds-6p9vx   1/1   Running   0   20h
kube-system   kube-multus-ds-mzb4b   1/1   Running   0   20h
kube-system   kube-multus-ds-s7d8v   1/1   Running   0   20h
kube-system   kube-multus-ds-twn6k   1/1   Running   0   20h
kube-system   kube-multus-ds-vqxh8   1/1   Running   0   20h
kube-system   kube-multus-ds-wwnbj   1/1   Running   0   20h
On a node, you can check that there is a /etc/cni/net.d/00-multus.conf file as the lexically first file. Now, a network attachment definition can be created (I added it to the k8s-emby.yaml file, after the namespace definition):
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
  namespace: emby
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "10.11.12.0/24",
        "rangeStart": "10.11.12.211",
        "rangeEnd": "10.11.12.215",
        "routes": [
          { "dst": "10.11.12.0/24" }
        ],
        "gateway": "10.11.12.1"
      }
    }'
In the metadata, I specified the “emby” namespace, so that this is visible to the emby pod. The config section is a CNI configuration. Of note, the master attribute names the node interface (eth0 on my Pis) that the macvlan interface attaches to, which puts the pod’s second interface on the local network. For IPAM, I used the CIDR of my local network as the subnet, and used a range of IPs that is outside of any DHCP pool, LoadBalancer pool, and existing static IPs. I set the route destination to the local network (not a default route, so that it doesn’t interfere with pod traffic).
The final step is to modify the Emby deployment so that, when the Emby pod is created, it uses the new network attachment definition and gets two interfaces. This is done with an annotation under the deployment’s template metadata in the k8s-emby.yaml file:
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emby
  namespace: emby
  labels:
    app: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        app: emby
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf
    spec:
...
Now, when we apply the deployment, the pod will have two interfaces: the main interface, eth0, and the additional net1 interface:
3: eth0@if40: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP qlen 1000
    link/ether be:0f:38:6b:bc:ea brd ff:ff:ff:ff:ff:ff
    inet 10.233.115.96/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bc0f:38ff:fe6b:bcea/64 scope link
       valid_lft forever preferred_lft forever
4: net1@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 22:76:a6:b2:34:9f brd ff:ff:ff:ff:ff:ff
    inet 10.11.12.213/24 brd 10.11.12.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::2076:a6ff:feb2:349f/64 scope link
       valid_lft forever preferred_lft forever
Now, I can access the UI at the service’s public address, and when I click on the “Play On” button, I see all the DLNA devices that receive streamed music. Yay!
Remote/Secure Access
Everything is working locally, but I wouldn’t mind being able to play music on my phone, when I’m away from home. I started planning this and decided on a few things:
- Use the domain name that I purchased and create subdomains for each app.
- Use the Dynamic DNS service that I purchased to map my domain to my home router.
- Configure the router to map HTTP and HTTPS requests to an Ingress controller, which will route the requests to the apps, based on the subdomain used.
- Use Let’s Encrypt so that all HTTPS requests have a valid certificate (from a Certificate Authority).
Prep Work
I have the domain registration (e.g. my-domain.com) and I have created subdomains for my apps (e.g. music.my-domain.com). I know my Dynamic DNS service domain name, so I created CNAME DNS records to point the domain and all the subdomains to that DDNS name.
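The resulting records look something like this (the DDNS hostname here is a made-up placeholder):

```
; records at the domain registrar's DNS (illustrative)
music.my-domain.com.   300   IN   CNAME   my-router.example-ddns.net.
```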
With my router and the Dynamic DNS service, I have configured it so that the DDNS domain name is always pointing to my router’s WAN address (which is a DHCP address from my service provider and can change).
Kubernetes Work
With the external stuff out of the way (mostly), I could focus on connecting up the Kubernetes side. I installed Traefik as the ingress controller. I like it better than NGINX, because it works well with apps that are in namespaces:
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik
This is running in the default namespace, but it can be run in a specific namespace, if desired. I made sure the pod and service were running. For the service, get the LoadBalancer IP and set up your router to forward HTTP and HTTPS requests to the IP of this service. That will cause all external requests to go through the Traefik ingress for routing. Next, install cert-manager:
helm repo add jetstack https://charts.jetstack.io --force-update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.15.0 \
--set crds.enabled=true
Obviously, you can use the latest version of cert-manager that is compatible with the Kubernetes version you are running. You’ll see pods, services, deployments, and replica sets created (and running) for the cert-manager.
I created a work area to hold manifests for the resources that will be created:
mkdir -p ~/kubernetes/traefik
cd ~/kubernetes/traefik
For the Let’s Encrypt certificates, we’ll test everything out with staging certificates, and then, once that is all working, switch to production certificates. This is done because Let’s Encrypt rate-limits production certificates, and we don’t want multiple failures to hit the limit and block us.
Here is the staging certificate (emby-issuer.yaml):
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: emby-issuer
  namespace: emby
spec:
  acme:
    email: your-email-address
    # We use the staging server here for testing, to avoid hitting
    # production rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store it
      name: emby-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik
Note that this is in the same namespace as the app, the staging Let’s Encrypt server is used, and you provide a contact email address. For the production Issuer, emby-prod-issuer.yaml, we have:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: emby-prod-issuer
  namespace: emby
spec:
  acme:
    email: your-email-address
    # This one uses the production Let's Encrypt server
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store it
      name: emby-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik
Pretty much the same thing, only using the production Let’s Encrypt server, and a different name for the issuer. Do a “kubectl apply -f” for each of these and then do a “kubectl get issuer -A” to make sure they are ready. You can check “kubectl describe issuer -n emby emby-issuer” and under the Status section see that the staging issuer is registered and ready:
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Now, I create an Ingress for the app that will be used with the staging certificate:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emby
  namespace: emby
  annotations:
    cert-manager.io/issuer: "emby-issuer"
spec:
  tls:
    - hosts:
        - music.my-domain.com
      secretName: tls-emby-ingress-http
  rules:
    - host: music.my-domain.com
      http:
        paths:
          - path: /emby
            pathType: Prefix
            backend:
              service:
                name: emby-service
                port:
                  name: http
Of note, there is an annotation that refers to the cert-manager staging issuer, and the subdomain name that will be used for this Emby app is specified both as the rule’s host and in the TLS section. If desired, you can leave out the annotation and the TLS section and test accessing the Emby app over HTTP (e.g. http://music.my-domain.com/emby). That is what I did to make sure the ingress alone was OK.
This ingress will take requests to music.my-domain.com/emby/… and pass them to the service “emby-service” (running in namespace “emby”) using the port defined in the service with the name “http” (e.g. 8096).
By applying emby-ingress.yaml, you will initiate the process of creating a staging certificate. A certificate will be created, but not ready; this triggers a certificate request and then an order. The order creates a challenge, which verifies that the challenge URL is reachable, and then the certificate is obtained from Let’s Encrypt. Here are the get commands you can use for these resources, and then you can use describe commands on the specific resources:
kubectl get certificate -A
kubectl get certificaterequest -A
kubectl get order -A
kubectl get challenge -A
It will take some time for all this to happen, but you can “describe” the challenge and check the other resources to see when they are valid/ready/approved. Don’t worry if you see a 404 status on the challenge initially. It should clear after 30 seconds or so.
When the challenge completes successfully, the challenge resource is removed and there will be a new secret with the name of your certificate (e.g. tls-emby-ingress-http) in the namespace of the app. This secret holds the certificate that users will see when accessing your domain. Granted, it is from the staging server, so there would be a warning about its validity, but now you can repeat the process with the production issuer, and then visitors will see a valid certificate.
Here is the production ingress (emby-prod-ingress.yaml) that can be used with the production issuer that was previously created:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emby
  namespace: emby
  annotations:
    cert-manager.io/issuer: "emby-prod-issuer"
spec:
  tls:
    - hosts:
        - music.my-domain.com
      secretName: emby-prod-cert
  rules:
    - host: music.my-domain.com
      http:
        paths:
          - path: /emby
            pathType: Prefix
            backend:
              service:
                name: emby-service
                port:
                  name: http
I used different names for the issuer and secret, so that there was no conflict with the staging ones. You can delete the staging ingress, issuer, and secret, once this is working. Here is output of a successful production certificate:
kubectl get cert -n emby
NAME             READY   SECRET           AGE
emby-prod-cert   True    emby-prod-cert   122m

kubectl get certificaterequest -n emby
NAME               APPROVED   DENIED   READY   ISSUER             REQUESTOR                                         AGE
emby-prod-cert-1   True                True    emby-prod-issuer   system:serviceaccount:cert-manager:cert-manager   122m

kubectl get order -n emby
NAME                          STATE   AGE
emby-prod-cert-1-2302305457   valid   122m
Forcing HTTPS
Right now, it is possible to use both http://music.my-domain.com/emby/ and https://music.my-domain.com/emby/. I would like to redirect all HTTP requests to HTTPS. To do that, I’ll use the Traefik redirect middleware by creating this manifest (redirect2https.yaml):
# Redirect to https
apiVersion: v1
kind: Namespace
metadata:
  name: secureapps
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect2https
  namespace: secureapps
spec:
  redirectScheme:
    scheme: https
As you can see, I have the namespace “secureapps”. If desired, you can use the namespace of a single app (e.g. “emby”), if you only want the redirection to apply there. You can alternatively modify the Traefik Helm chart (do a “helm show values traefik/traefik > values.yaml”, set “ports.web.redirectTo: websecure”, and then update the chart) to apply the redirect to all ingresses. I have not tried that method.
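For reference, the chart-wide alternative (which I have not tried) would be a values.yaml fragment along these lines; the exact key can vary between Traefik chart versions, so treat this as a sketch:

```yaml
# values.yaml for the traefik/traefik chart -- redirect all "web" (HTTP)
# traffic to the "websecure" (HTTPS) entrypoint (untested)
ports:
  web:
    redirectTo: websecure
```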
Now, we update the ingress manifest to add this annotation (this is the top of the emby-ingress.yaml):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emby
  namespace: emby
  annotations:
    cert-manager.io/issuer: "emby-issuer"
    traefik.ingress.kubernetes.io/router.middlewares: secureapps-redirect2https@kubernetescrd
spec:
The new line is the traefik.ingress.kubernetes.io/router.middlewares annotation, whose value has the form namespace-middleware (secureapps-redirect2https). If you wanted this to apply only to Emby, you could change the namespace of the Middleware to “emby” and use “emby-redirect2https”.
I deleted the ingress I was currently using, deleted the secret for the cert that was generated, and then applied the Middleware manifest and the updated ingress. A new cert was created, and now HTTP requests are redirected to HTTPS!
Removing The Remote Access Setup
Delete the issuers and ingress that you have (e.g. use kubectl delete -f emby-prod-ingress.yaml). You can then remove Traefik:
helm delete traefik
And the cert-manager:
helm delete -n cert-manager cert-manager
kubectl delete crd virtualservers.k8s.nginx.org
kubectl delete crd virtualserverroutes.k8s.nginx.org
kubectl get crd | grep cert | cut -f1 -d" " | xargs kubectl delete crd
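The last command above is just text processing on the CRD listing; here it is exercised against canned “kubectl get crd” output (the CRD names are illustrative), so you can see what it selects before piping the names to “xargs kubectl delete crd” on a live cluster:

```shell
# Select CRD names containing "cert" from a listing: first column only.
# Canned input stands in for "kubectl get crd" output.
list_cert_crds() {
  printf '%s\n' \
    'certificates.cert-manager.io        2024-06-01T00:00:00Z' \
    'challenges.acme.cert-manager.io     2024-06-01T00:00:00Z' \
    'ippools.crd.projectcalico.org       2024-06-01T00:00:00Z' |
    grep cert | cut -f1 -d" "
}
list_cert_crds   # prints only the two cert-manager CRD names
```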
And finally any secrets that were created for certificates:
kubectl delete -n emby secret emby-issuer-account-key emby-prod-cert tls-emby-ingress-http
Things To Explore
Traefik also has an IngressRoute mechanism that seems to be quite flexible and a lot of their documentation (like for Let’s Encrypt setup) uses this instead of Ingress. It may be worthwhile using that as, at first blush, it seems like a newer way of doing things.
Consider removing the /emby prefix from the path for accessing remotely.
Consider using one certificate for all sub-domains.