July 10

Django App on Kubernetes

Viewmaster

For all the movies I own (500+), I had a spreadsheet listing them, so that when people visited, they could pick out a movie for us to watch. It was tedious, as I’d have to print it, or bring up the spreadsheet, and then, if they wanted to see a comedy, for example, I would sort by the “genre” column.

Wanting a better way to use this list of movies, I decided to make a web site that would show the title, genre, release date, rating, duration, and format (4K, Blu-ray, DVD). There were buttons to display the movies in multiple ways:

  • Alphabetical (e.g. do I have “The Matrix”?)
  • Genre, then alphabetical (e.g. what comedy movies do I have?)
  • Genre, date, then alphabetical (e.g. what are the newest “SCI-FI” movies?)
  • Date, then alphabetical (e.g. what are the new releases that I have?)
  • Collection, then date (e.g. Die Hard movies in order)
  • Format, then alphabetical (e.g. what 4K movies do I have?)

There is a search box to look for a specific title, an option to see more details on each movie (aspect ratio, audio, cost, and collection keyword), and an option to include Laser Discs. I don’t have an LD player anymore, but I use the covers of the movies as wall hangings and still have about 60 discs.

I created a Django app for the web site, set it up to run in a Docker container, and made a script to import the spreadsheet info I had into the movie database. This ran on a Raspberry Pi 4 and was accessible locally on my network.

Now that I have a Kubernetes cluster, I want to port this web-based Docker app into my cluster.

 

The Plan…

Here are the goals for this effort:

  • Use a deployment with one instance of the app running on a pod.
  • Instead of having a SQLite database in a file on the host, use a database like Postgres.
  • Have the database of movie information in Longhorn storage, so I can back it up.
  • Put confidential info into Secrets. Don’t have anything confidential in the app.
  • (Optionally) Make this web app accessible from outside my home, using HTTPS (making use of the Traefik ingress and cert-manager setup I already have for my Emby media server).
  • Use a separate namespace for this app, rather than the “default”, to isolate things.

I found some videos on how to port Django apps to Kubernetes, and each one did things slightly differently. So I used one method, sprinkled in some ideas from the others, and added some more things that I wanted. Let’s get started on the journey…

 

Collect Together The Needed Items

First, I cloned the Docker implementation of my app into my work area for Kubernetes. This has the typical Django development tree structure, plus a Dockerfile I used to package things up, and the SQLite3 database file that was used by that implementation (the Dockerfile mapped the ./DBase/movies.db file from the Git repo on the host to a mount point in the container – this way I could back up the database periodically).

You can use whatever Django app you have for the same porting effort, whether it has a Docker setup or not. Here is my viewmaster app as an example Django app:

cd ~/workspace/kubernetes/
git clone https://github.com/pmichali/viewmaster.git
cd viewmaster
mkdir deploy

The master branch has the code right before I started the porting effort. The k8s-port branch has any app code changes, and the manifests and supporting files that I used to port to Kubernetes.

 

Prepare Settings

Create an environment file with the values you want for secrets (viewmaster-secrets.env):

cd deploy

SECRET_KEY='a unique string that django will use'
DB_HOST=viewmaster-postgres
POSTGRES_DB=name-of-your-database
POSTGRES_USER=name-for-your-db-user
POSTGRES_PASSWORD='pass-you-want-for-database'
PUBLIC_DOMAIN=movies.my-domain.com

The first is a secret key used for cryptographic signing in Django. The last one is for app use, and the others are for the database (fill in each value with your own). Create the secrets and then remove the file:

kubectl create namespace viewmaster
kubectl create secret generic viewmaster-secrets -n viewmaster --from-env-file=viewmaster-secrets.env
rm viewmaster-secrets.env
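
If you want to double-check what was stored, you can read a key back out of the secret and decode it (key names are from the env file above):

kubectl get secret viewmaster-secrets -n viewmaster -o jsonpath='{.data.POSTGRES_USER}' | base64 -d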

Create a config map, which has settings for both Django and a Postgres database (viewmaster-configmap.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: viewmaster-cm
  namespace: viewmaster
data:
  ALLOWED_HOSTS: "*"
  LOGLEVEL: "info"
  DEBUG: "0"
  PGDATA: "/var/lib/postgresql/data/db-files/"

Of note is PGDATA, which tells Postgres to use a directory below the mount point that we will create, so that Postgres will not complain about a non-empty directory (it will have a lost+found directory). Do a “kubectl apply -f viewmaster-configmap.yaml” to create the config map.

 

Deploy The Database

I created a manifest (postgres.yaml) with everything needed to deploy the Postgres database that I want to use:

apiVersion: v1
kind: Service
metadata:
  name: viewmaster-postgres
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  ports:
    - port: 5432
  selector:
    app: viewmaster
    tier: postgres
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: viewmaster-postgres-pvc
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: viewmaster
  labels:
    app: viewmaster-postgres
spec:
  selector:
    matchLabels:
      app: viewmaster
      tier: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: viewmaster
        tier: postgres
    spec:
      volumes:
        - name: viewmaster-data
          persistentVolumeClaim:
            claimName: viewmaster-postgres-pvc
      containers:
        - image: postgres:16.3-alpine
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: viewmaster-data
              mountPath: /var/lib/postgresql/data
          envFrom:
            - secretRef:
                name: viewmaster-secrets
            - configMapRef:
                name: viewmaster-cm

First, we have the service that will use port 5432, with no cluster IP assigned (a headless service). Second is the 10 Gi persistent volume claim using our default Longhorn storage. Finally, we have the deployment with a container using a current version of Postgres, referencing port 5432, and mounting the PVC at the data area Postgres uses. The environment settings used by Postgres will come from the secret and config map created earlier.

Do a “kubectl apply -f postgres.yaml”. There should be a deployment, replica set, service, and pod running for Postgres. In addition, there will be a 10 Gi PV created, along with the claim.
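
As a quick sanity check, you can list everything in the namespace and, as a sketch (using the user and database names from your secrets file), connect to the database with psql:

kubectl get deploy,rs,svc,pvc,pod -n viewmaster
kubectl exec -it -n viewmaster deploy/postgres -- psql -U name-for-your-db-user -d name-of-your-database -c '\l'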

 

Modify App To Use Environment Variables

In preparation for running things under Kubernetes, we want to remove the hard-coded secrets and other confidential information from the Django application, and obtain the values from environment variables that will be passed in. For the Viewmaster app, I moved to the movie_library/movie_library/ area in the repo and edited settings.py to change/add these lines:

import os

SECRET_KEY = os.environ.get('SECRET_KEY', 'changeme')

DEBUG = bool(int(os.environ.get('DEBUG', 0)))

ALLOWED_HOSTS = []
ALLOWED_HOSTS.extend(
    filter(
        None,
        os.environ.get('ALLOWED_HOSTS', '').split(','),
    )
)

MIDDLEWARE = [
    ...
    'whitenoise.middleware.WhiteNoiseMiddleware',
]

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ.get('DB_HOST'),
        'NAME': os.environ.get('POSTGRES_DB'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
    }
}

STATIC_URL = 'static/'
STATIC_ROOT = '/vol/web/static'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

We get the secret key, debug flag, and allowed hosts from environment variables passed to the app at startup. The database engine is set to Postgres, and environment variables are used for the host, database name, username, and password (removing what was there for SQLite). I could have used database-agnostic names for these, but since they are shared with the Postgres pod, I used the same names (versus duplicating entries).

Because I switched from Django’s “runserver” to “gunicorn” and I’m not running in debug mode, I had to add the Whitenoise middleware and specify STATIC_ROOT and STATICFILES_STORAGE, so that static files could be located.

Since I didn’t want the movie listing to require the path /viewmaster/, I changed the urlpattern in urls.py in the ./movie_library/movie_library/ area of the repo to use the root of the HTML tree:

 urlpatterns = [
- path('viewmaster/', include('viewmaster.urls')),
+ path('', include('viewmaster.urls')),

Another cleanup item in the Viewmaster project is an unused sqlalchemy import in ./movie_library/viewmaster/views.py (my bad). When converting over to Kubernetes, we won’t be including that package, so delete the import.

The latest code in the k8s-port branch of the repo has all these changes.

 

Build Image For Kubernetes

The next goal is to create a Docker image for the Django app. I already have a Dockerfile at the top of the repo (~/workspace/kubernetes/viewmaster/), so I’ll just modify it to look like this:

FROM python:3.12.4

# Python and setup timezone
RUN apt-get update -y && apt-get install -y software-properties-common python3-pip postgresql-client

# Fault handler dumps traceback on seg faults
# Unbuffered sends stdout/stderr to log vs buffering
ENV CODEBASE=/code \
    PYTHONENV=/code \
    PYTHONPATH=/code \
    EDITOR=vim \
    PYTHONFAULTHANDLER=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONHASHSEED=random \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    POETRY_VERSION=1.8.2

# System dependencies
RUN pip3 install "poetry==$POETRY_VERSION"

# Copy over all needed files
WORKDIR /code
COPY poetry.lock pyproject.toml runserver.bash /code/
COPY movie_library/ /code/movie_library/

# setup tools for environment, using pyproject.toml file
RUN poetry config virtualenvs.create false && \
    poetry install

EXPOSE 80

# CMD sleep infinity
CMD ["/code/runserver.bash"]

I included the Postgres client package, in case I wanted to access the database from this pod (it is included in the Postgres pod we already created). I removed the user account setup lines and added a line to expose port 80. Other things to consider, when doing this, are whether you want to update the Python base image version and the Poetry version.

There are two other related changes. The runserver.bash file, in the same area, was changed to this:

#!/bin/bash
cd /code/movie_library
python manage.py collectstatic --noinput
python manage.py migrate
gunicorn -b :8080 movie_library.wsgi:application

Instead of running the built-in Django server, the script now does collectstatic, migration, and then runs the gunicorn server for our Python app using port 8080 (instead of 8642).

The pyproject.toml file, which contains the package definitions used, is changed to contain:

[tool.poetry]
name = "viewmaster"
version = "0.1.1"
description = "My movies"
authors = ["YOUR NAME <YOUR_EMAIL_ADDRESS>"]
readme = "README.md"
package-mode = false

[tool.poetry.dependencies]
python = "^3.12"
django = "^4.2.14"
django-auditlog = "^2.3.0"
psycopg = "^3.2.1"
gunicorn = "^22.0.0"
whitenoise = "^6.7.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

I bumped the minor version number. The xlrd, openpyxl, sqlalchemy, and pandas packages are removed, and the psycopg, gunicorn, and whitenoise packages are added. On your host, you can do ‘poetry update’ and, if needed, update versions in the pyproject.toml file for the versions you are using. When the Docker image is created, it will install these packages into the container and set up PATH to reference the environment.

Now, from the top of the repo, we can build the docker image locally with:

docker buildx build . -t YOUR_DOCKER_ID/viewmaster-app:v0.1.1

With that completed, and assuming you have an account setup on Docker Hub, you can push the image up to your account:

docker push YOUR_DOCKER_ID/viewmaster-app:v0.1.1

It’s a good idea to use a different version each time you update your app, so that when you deploy into Kubernetes it will download the updated image (assuming you update the deployment version, of course). Initially, I was using “latest”, but then I had to set the image pull policy for the container to “Always”, instead of “IfNotPresent”.
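
For example, once the deployment from the next section exists, an update cycle with versioned tags might look like this (the v0.1.2 tag is illustrative, and “app” is the container name used in that deployment):

docker buildx build . -t YOUR_DOCKER_ID/viewmaster-app:v0.1.2
docker push YOUR_DOCKER_ID/viewmaster-app:v0.1.2
kubectl -n viewmaster set image deployment/viewmaster app=YOUR_DOCKER_ID/viewmaster-app:v0.1.2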

 

Deploy The Django App

In the ./deploy/ area, create a manifest (django.yaml), to deploy the Viewmaster app:

apiVersion: v1
kind: Service
metadata:
  name: viewmaster-service
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  ports:
    - port: 8000
      targetPort: 8080
      name: http
  selector:
    app: viewmaster
    tier: app
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: viewmaster-app-pvc
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: viewmaster
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  selector:
    matchLabels:
      app: viewmaster
      tier: app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: viewmaster
        tier: app
    spec:
      volumes:
        - name: viewmaster-app-data
          persistentVolumeClaim:
            claimName: viewmaster-app-pvc
      containers:
        - image: pmichali/viewmaster-app:v0.1.1
          imagePullPolicy: Always # IfNotPresent
          name: app
          ports:
            - containerPort: 8080
              name: app
          volumeMounts:
            - name: viewmaster-app-data
              mountPath: /vol/web
          envFrom:
            - secretRef:
                name: viewmaster-secrets
            - configMapRef:
                name: viewmaster-cm

We create a service listening on port 8000 and using a load balancer for a “public” IP. A persistent volume of 10 Gi will be used for the app. Finally, there is a deployment with the container image that was built, a volume mapping for data, and environment information from the config map and secrets defined.

Note that I’m setting it to pull the image “Always”, because I’m going through iterations. Once done, you can set this to “IfNotPresent”. Otherwise, you are forced to update the version tag and build/push with the new tag for each iteration.

Do a “kubectl apply -f django.yaml” and make sure the pod is running. You can set up the superuser account by exec-ing into the viewmaster pod and running the createsuperuser command. For example:

kubectl exec -it -n viewmaster viewmaster-6c956ddb66-sxq4f -- /bin/bash
cd movie_library
python manage.py createsuperuser

Enter a username, email address, and password. While in the pod, you can access the database with the database shell command:

python manage.py dbshell

From here, you can view all the tables that were created when the viewmaster app was started, by doing “\dt”:

viewmasterdb=# \dt
                 List of relations
 Schema |            Name            | Type  |    Owner
--------+----------------------------+-------+--------------
 public | auditlog_logentry          | table | viewmasterer
 public | auth_group                 | table | viewmasterer
 public | auth_group_permissions     | table | viewmasterer
 public | auth_permission            | table | viewmasterer
 public | auth_user                  | table | viewmasterer
 public | auth_user_groups           | table | viewmasterer
 public | auth_user_user_permissions | table | viewmasterer
 public | django_admin_log           | table | viewmasterer
 public | django_content_type        | table | viewmasterer
 public | django_migrations          | table | viewmasterer
 public | django_session             | table | viewmasterer
 public | viewmaster_movie           | table | viewmasterer
(12 rows)

You can verify that the superuser account is correct, with the “select * from auth_user;” command. This shell can be used to import existing movie data…

 

Import Existing Data

Rather than re-enter all the movie information into this new Kubernetes-based implementation, I wanted to export/import what I already have. In the repo I provided, there is a ./DBase/importVM.sql file with the data to import for my app, but I want to detail how this was created, as it wasn’t exactly trivial.

The Docker implementation had a SQLite database in ./DBase/movies.db. The first step was to export the database as a .sql file. I did the following:

cd DBase
sqlite3
.open movies.db
.once export.sql
.dump
.quit

From the export.sql file, I want the “viewmaster_movie” table. I created the file (importVM.sql) with the INSERT lines for that table from the export.sql file, all wrapped inside of “BEGIN TRANSACTION;” and “COMMIT;” lines, so that the Postgres database would only be updated if all the lines could be processed:

BEGIN TRANSACTION;
INSERT INTO viewmaster_movie VALUES(1,'12 Monkeys',1995,'SCI-FI','02:10:00.000000','LD','LB','D-SURR','',25,1,0,'R');
...
INSERT INTO viewmaster_movie VALUES(656,'Shawshank Redemption',1994,'DRAMA','02:22:00','4K','1.85:1','DTS-HD','',20.39000000000000056,1,0,'R');

COMMIT;
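
Pulling those INSERT lines out of export.sql can also be scripted; a rough sketch (adjust the grep pattern if your .dump output quotes the table name):

{ echo 'BEGIN TRANSACTION;'; grep 'INSERT INTO.*viewmaster_movie' export.sql; echo 'COMMIT;'; } > importVM.sql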

Unfortunately, there are differences between SQLite and Postgres. If we look at the field layout in the Postgres database, we see (trimmed):

viewmasterdb=# \d viewmaster_movie
          Table "public.viewmaster_movie"
   Column   |          Type          | Nullable |
------------+------------------------+----------+
 id         | bigint                 | not null |
 title      | character varying(60)  | not null |
 release    | integer                | not null |
 category   | character varying(20)  | not null |
 rating     | character varying(5)   | not null |
 duration   | time without time zone | not null |
 format     | character varying(3)   | not null |
 aspect     | character varying(10)  | not null |
 audio      | character varying(10)  | not null |
 collection | character varying(10)  | not null |
 cost       | numeric(6,2)           | not null |
 paid       | boolean                | not null |
 bad        | boolean                | not null |

When I look at the table definition (reformatted for readability) in the export.sql file, I see:

CREATE TABLE IF NOT EXISTS "viewmaster_movie" (
    "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
    "title" varchar(60) NOT NULL,
    "release" integer NULL,
    "category" varchar(20) NOT NULL,
    "duration" varchar(5) NULL,
    "format" varchar(3) NULL,
    "aspect" varchar(10) NULL,
    "audio" varchar(10) NULL,
    "collection" varchar(10) NULL,
    "cost" decimal NULL,
    "paid" bool NULL,
    "bad" bool NULL,
    "rating" varchar(5) NULL
);

As you can see, the rating field is in a different position. This means that it will be in the wrong place in the existing INSERT lines, as the Postgres database is expecting the rating to be the fifth field and not the last field:

INSERT INTO viewmaster_movie VALUES(1,'12 Monkeys',1995,'SCI-FI','02:10:00.000000','LD','LB','D-SURR','',25,1,0,'R');

I decided that the easiest way to deal with this is to add the column ordering to the INSERT lines, so they each look like this:

INSERT INTO viewmaster_movie ("id", "title", "release", "category", "duration", "format", "aspect", "audio", "collection", "cost", "paid", "bad", "rating")
VALUES(1,'12 Monkeys',1995,'SCI-FI','02:10:00.000000','LD','LB','D-SURR','',25,1,0,'R');

Essentially, we’re telling the insert command the order of the fields, rather than assuming they are in the same order as defined in the database. There can be cases where, in your new database, you named fields (or tables) differently, so this specification of fields can help.

Another issue is that the SQLite export represents boolean values as the numbers zero and one, which Postgres treats as integers. I ended up using my editor to wrap the values in single quotes (‘0’ and ‘1’), so that they are evaluated as boolean values. I made use of Emacs macros to do this quoting of the second- and third-from-last values. I read later that one can change 0 to 0::boolean and 1 to 1::boolean.
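
If you’d rather not do the quoting by hand, a sed pass can handle it, assuming (as in my data) the paid and bad values are always the second- and third-from-last fields and each line ends with the quoted rating:

sed -E "s/,([01]),([01]),('[^']*'\);)$/,'\1','\2',\3/" importVM.sql > importVM-quoted.sql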

With the importVM.sql file hopefully ready, I copied it to the viewmaster pod:

kubectl cp importVM.sql viewmaster/viewmaster-6c956ddb66-sxq4f:movie_library/importVM.sql

From the database shell that I have open on the viewmaster pod, I can import the table contents:

viewmasterdb=# \i importVM.sql

There is a good chance that this may fail, so you’ll have to scroll through the output and find any problems and correct them. In my case, I saw:

  • One entry had a value of ‘2’ for a boolean; I had to change it to ‘1’.
  • A few entries had an “audio” field longer than the defined 10-character max. I shortened them.
  • There were some cases of an aspect ratio of 16:9, which were treated as a time value with extra characters for seconds/microseconds and exceeded the field width. I changed these to “16×9”.
  • Another had an aspect ratio of “02:40:01.000000”; again, the value was treated as a time value. I changed it to “2.40:1”.

Finally, the import was successful, and I could do a “select * from viewmaster_movie;” from the database shell to see the entries. I’ve included the final ./DBase/importVM.sql file in the repo, so that if you are following along, you can just import it.

Now, with some real data and a user account, we can get the IP of the service:

kubectl get svc -n viewmaster
NAME                  TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
viewmaster            LoadBalancer   10.233.1.98   10.11.12.207   8000:30761/TCP   168m
viewmaster-postgres   ClusterIP      None          <none>         5432/TCP         17h

With a browser, I can navigate to the app at http://10.11.12.207:8000/ (the app is now at the root path) and see all the existing movies.

UPDATE: See “Create Movie Issue” below for another problem that I found after importing and using the system.

 

Secure Remote Access

Just like I did with the Emby media server I set up under Kubernetes, I want to do the same thing for this Django app. There are already some pieces in place: Traefik ingress is running in the cluster to route external requests to the app and redirect HTTP requests to HTTPS, cert-manager is running to create and manage Let’s Encrypt certificates, and the router is directing external HTTP/HTTPS requests to the ingress controller.

Prep Work

Specific to this Django app, there are some things that need to be set up. Like I did in the Emby post, I need to create another sub-domain for this app (e.g. movies.my-domain.com), and create a CNAME record that points to the Dynamic DNS service I use, so that HTTP/HTTPS requests to that subdomain will also make it to Kubernetes.

For my Django app, I had already installed the recommended security middleware. However, at a minimum, one also needs to define the “trusted origin” domains, so as not to trigger the Cross Site Request Forgery (CSRF) warnings. I had to add the following line to ./movie_library/movie_library/settings.py:

CSRF_TRUSTED_ORIGINS = ['https://' + os.environ.get('PUBLIC_DOMAIN', 'missing-domain-name')]

Now, depending on how you wrote your Django app and what external resources you use, you may need to configure other CSRF settings. The easiest (?) way to figure out what you need is to exercise your site via HTTPS with Django running in debug mode; it will then show any CSRF errors and provide a link with more info on the problem and how to fix it. Here is an example from one (non-Django) site I had:


CSP_IMG_SRC = ("'self'")
CSP_DEFAULT_SRC = ("'self'")
CSP_STYLE_SRC = ("'self'", 'https://fonts.googleapis.com')
CSP_SCRIPT_SRC = ("'self'")
CSP_FONT_SRC = ("'self'", 'https://fonts.gstatic.com')
CSP_FRAME_ANCESTORS = ("'none'")
CSP_FORM_ACTION = ("'self'")

These indicate the allowed sources for the various resources accessed.

Obviously, you’ll need to do this AFTER you have HTTPS remote access running, and it may take several iterations to resolve all the issues. That is why I set the image pull policy to “Always”, instead of “IfNotPresent” in the Deployment manifest for my app. This way, I can change the app, re-build, re-push to hub.docker.com, and then delete my viewmaster pod and it will pull the new image and use it.
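
With “Always” set, an iteration is just a rebuild, a push, and a pod delete (the deployment re-creates the pod and pulls the fresh image); roughly:

docker buildx build . -t YOUR_DOCKER_ID/viewmaster-app:v0.1.1
docker push YOUR_DOCKER_ID/viewmaster-app:v0.1.1
kubectl delete pod -n viewmaster -l app=viewmaster,tier=app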

Otherwise, you need to update the minor version in the ./pyproject.toml, build/push the app with a new tag, and change the deployment to reference the newer tag.

Ready, Set, Go…

Now, I need to perform the steps to create a certificate and hook up ingress to my app. The explanation is brief, but you can see a more detailed description in the Emby post.

I’ll again use a Let’s Encrypt staging certificate, and once things are working, will use the production certificate. There is a rate limit on production certificates, so if you mess things up and try too many times, you’ll get locked out for a week!

Here is the staging issuer that I created and applied (./deploy/viewmaster-issuer.yaml):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: viewmaster-issuer
  namespace: viewmaster
spec:
  acme:
    email: your-email-address
    # We use the staging server here for testing, to avoid hitting rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store it
      name: viewmaster-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik

This is in the same namespace as the app, requires an email address, and is using the staging certificate. With that applied, we can create the ingress for the app (./deploy/viewmaster-ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: viewmaster
  namespace: viewmaster
  annotations:
    cert-manager.io/issuer: "viewmaster-issuer"
    traefik.ingress.kubernetes.io/router.middlewares: secureapps-redirect2https@kubernetescrd
spec:
  tls:
    - hosts:
        - movies.my-domain.com
      secretName: tls-viewmaster-ingress-http
  rules:
    - host: movies.my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: viewmaster-service
                port:
                  name: http

This references the issuer, uses the middleware to force HTTP-to-HTTPS redirect, has the subdomain name that I’ll use, and gives a name for the secret used to hold the staging certificate. It points to the viewmaster service, using the root (/) path. Once applied, you can look for the tls-viewmaster-ingress-http cert in the viewmaster namespace to be ready. Look through the info on the Emby page for details on the certificate creation process. It’ll take a minute or so to complete.
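
To watch that process, you can poll the certificate and, if it seems stuck, describe the challenge that cert-manager creates along the way:

kubectl get certificate -n viewmaster
kubectl describe challenge -n viewmaster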

Now you can go to https://movies.my-domain.com/ and see the site. If you use HTTP, it should redirect. Your browser will warn that it is insecure, but you can continue and look at the certificate info to see that it is a Let’s Encrypt staging certificate.

With it working, you can delete the ingress, secret, and issuer (if desired) and then apply the production issuer (./deploy/viewmaster-prod-issuer.yaml):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: viewmaster-prod-issuer
  namespace: viewmaster
spec:
  acme:
    email: your-email-address
    # This uses the production server, now that testing is done
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store it
      name: viewmaster-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik

I used a different name, so that both issuers can be present at the same time. You provide an email address, and it is using the production Let’s Encrypt URL.

The production ingress (./deploy/viewmaster-prod-ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: viewmaster
  namespace: viewmaster
  annotations:
    cert-manager.io/issuer: "viewmaster-prod-issuer"
    traefik.ingress.kubernetes.io/router.middlewares: secureapps-redirect2https@kubernetescrd
spec:
  tls:
    - hosts:
        - movies.my-domain.com
      secretName: viewmaster-prod-cert
  rules:
    - host: movies.my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: viewmaster-service
                port:
                  name: http

This is the same, only using the viewmaster-prod-issuer and the viewmaster-prod-cert certificate. Once applied and the certificate is created, you can access with HTTPS, without any insecure warning. The cert-manager will renew the certificate automatically, as needed.
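
One way to confirm from outside is to inspect the certificate the site now serves; the issuer should show Let’s Encrypt, rather than the staging CA:

echo | openssl s_client -connect movies.my-domain.com:443 -servername movies.my-domain.com 2>/dev/null | openssl x509 -noout -issuer -dates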

With all this done, you can access the site via https://movies.my-domain.com, and if you use HTTP, it will automatically redirect to HTTPS. If you want to access from within the local network, you can use HTTP with the IP of the viewmaster service and port 8000. I didn’t explore how to access it securely from inside the local network.

 

Create Movie Issue

In my playing with this ported app, I tried to add a movie. When I did so (under debug mode), I got an error saying:

duplicate key value violates unique constraint "viewmaster_movie_pkey"
DETAIL:  Key (id)=(1) already exists.

It looks like the database insert is not using the next ID. I did a “kubectl exec” into the viewmaster app pod, moved down to the movie_library/ directory, and did “python manage.py dbshell” to look at the database. First, I checked that there was a primary key for the viewmaster_movie database:

# \d viewmaster_movie;
          Table "public.viewmaster_movie"
   Column   |          Type          | Collation | Nullable |...
------------+------------------------+-----------+----------+...
 id         | bigint                 |           | not null |...
 title      | character varying(60)  |           | not null |
 release    | integer                |           | not null |
 category   | character varying(20)  |           | not null |
 rating     | character varying(5)   |           | not null |
 duration   | time without time zone |           | not null |
 format     | character varying(3)   |           | not null |
 aspect     | character varying(10)  |           | not null |
 audio      | character varying(10)  |           | not null |
 collection | character varying(10)  |           | not null |
 cost       | numeric(6,2)           |           | not null |
 paid       | boolean                |           | not null |
 bad        | boolean                |           | not null |
Indexes:
    "viewmaster_movie_pkey" PRIMARY KEY, btree (id)

That looked good, so I was trying to figure out how Postgres picks the next ID to use. I saw that there is a “sequence”, so I did:

# SELECT relname sequence_name FROM pg_class WHERE relkind = 'S';
          sequence_name
-----------------------------------
 django_migrations_id_seq
 ...
 viewmaster_movie_id_seq

Looking at the sequence for the viewmaster_movie table, I see that the last_value is “1”, instead of the next value to use:

# select * from viewmaster_movie_id_seq;
 last_value | log_cnt | is_called
------------+---------+-----------
          1 |      32 | t

I determined the maximum value in use, and changed the last value to that:

# select max(id) from viewmaster_movie;
 max
-----
 656

# select setval('viewmaster_movie_id_seq', 656);
 setval
--------
    656

Now, when I do a create, it works! Whew! I found out later that, with Postgres, you can set the id field type to “SERIAL” instead of “BIGINT”, and that should create the correct sequencing. I haven’t tried it here, but it worked on a database for another Django app I was porting.
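
If you hit the same issue, the lookup and update can also be combined into a single statement from dbshell:

viewmasterdb=# SELECT setval('viewmaster_movie_id_seq', (SELECT max(id) FROM viewmaster_movie));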

 

TODOs…

Future items to consider:

  • Add non-admin login and modify app so that everyone has to login to see the pages (to limit viewing)?
  • Decide if want single cert for all subdomains running under Kubernetes, instead of one per app.
  • App enhancements:
    • See if can access public information for artwork and maybe description information for movies? Can we get technical specs too (run time, sound, aspect ratio)?
    • Persist checkbox settings for “Show details” and “Show LDs”.
    • Allow search to be initiated by pressing Enter, after typing in a search phrase.
    • Add index (alphabet, category, date, collection, disk format) at top to allow jumping down to a section.
June 24

Media Server In Kubernetes

Another one of my apps running on a standalone Raspberry Pi 4 (in a Docker container) is the Emby media server. I ripped all my CDs to FLAC files and have been serving them up with Emby, so that I can play them on my laptop, phone, Sonos speaker, and other DLNA devices. All the music is on a NAS box, and I had it mounted on the Raspberry Pi.

Now that I have a Kubernetes cluster of PIs, I wanted to move the Emby server, and this looked like a good exercise in how to take a Docker container and run it on Kubernetes. There were some challenges, which made this harder than expected. Let’s go through the process though…

 

Migrating the Docker Container

After searching around, I found that a common way to migrate from Docker to Kubernetes is to use Kompose. I had this Docker Compose file for Emby:

version: "2.3"
services:
emby:
image: emby/embyserver_arm64v8:latest
container_name: emby
environment:
- PUID=1000
- PGID=1003
- TZ=America/New_York
volumes:
- /var/lib/docker/volumes/emby/_data:/config
- /mnt/music:/Music
network_mode: host
# ports:
# - 8096:8096
# - 8920:8920
restart: unless-stopped

There are a couple of things of note here. First, I set the UID to the same ID used on the NAS box for the FLAC files, and the GID to the one used on the NAS box, so that family members had access to the files as well. Second, I mapped the config location to the host, while the music area was an NFS mount to the NAS box.

Lastly, I was using host mode networking, which was needed so that the multicast DLNA packets (M-SEARCH and NOTIFY) would be seen from the container. This allowed Emby to “see” the DLNA devices on my local network. This proved to be a difficult thing to set up under Kubernetes.

I ran the Kompose convert command, and it generated a deployment, service, and some PVC definitions. Of course, I ran this on my Mac, where the mount points and config area did not exist, so there were warnings and things were not set up as desired. But it was useful, as it gave me an idea of how I wanted to define things.

I created a single manifest that incorporated some of what Kompose generated, sprinkled with settings I wanted, and setting up the volumes to use NFS, instead of PVC using Kubernetes storage. Here’s what I came up with (shown in parts) placed into ~/workspace/kubernetes/emby/k8s-emby.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: emby
---

I wanted all the music server stuff in a separate namespace.

apiVersion: v1
kind: Service
metadata:
  name: emby-service
  namespace: emby
spec:
  type: LoadBalancer
  selector:
    app: emby
  ports:
    - name: http
      port: 8096
      targetPort: 8096
      protocol: TCP
    - name: https
      port: 8920
      targetPort: 8920
      protocol: TCP
---

A service is defined, with type LoadBalancer, so that I can access the media service with a well-known IP. I used the defaults that Emby suggested for HTTP and HTTPS access to the server.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: emby
  namespace: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        app: emby
    spec:
      containers:
        - name: emby
          image: emby/embyserver_arm64v8:latest
          env:
            - name: UID
              value: "1000"
            - name: GID
              value: "1003"
            - name: GIDLIST
              value: "1003"
            - name: TZ
              value: "America/New_York"
          ports:
            - containerPort: 8096
              protocol: TCP
            - containerPort: 8920
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /config
            - name: music
              mountPath: /Music
      restartPolicy: Always
      volumes:
        - name: config
          nfs:
            server: IP_OF_MY_NAS
            path: /music/config
        - name: music
          nfs:
            server: IP_OF_MY_NAS
            path: /music/music

For the deployment, I used the same namespace and specified a single replica, which will create a single pod with the latest ARM64 version of Emby (4.8.8.0 at this time; I could have pinned to a specific version and then updated as desired, by looking at hub.docker.com). The environment settings for UID, GID, GIDLIST, and TZ are passed in to the pod. I looked at the Docker version of the latest Emby to see that it had some different settings than my (much older) version. The HTTP and HTTPS ports for Emby are defined and match the service.

Lastly, I defined a volume for config settings and another for the music repo, and mapped those to the IP and share locations of my NAS. The NAS already had a share called “music”, with a directory called “music”, containing directories for all of the artists, which in turn had directories for the artist’s albums, and then FLAC files for the album songs. I created a directory in the music share, called “config”, to hold the configuration settings.
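
Before applying this, it may be worth confirming the NFS exports are reachable from a cluster node; a quick manual check (the NAS IP and paths are the placeholders used above):

showmount -e IP_OF_MY_NAS
sudo mount -t nfs IP_OF_MY_NAS:/music/music /mnt && ls /mnt && sudo umount /mnt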

With this setup, we are ready to apply the manifest and configure the Emby server…

 

Emby Startup

After doing “kubectl apply -f k8s-emby.yaml”, I could see that there was one pod running and a service with an IP from my load balancer pool.  From a browser, I navigated to http://<emby-service-ip>:8096/ and could see the Emby setup wizard. I picked the language (“English”), and created my Emby user and password.

On the “Setup Media Libraries” page, I clicked the button for “New Library”, selected the type “Music”, gave it a name “Music”, and then clicked on the folder “Add” button, selected the “/Music” directory that maps to the NFS share, and clicked “OK”. Lastly, under “Music Folder Structure”, I picked the item “Perfectly ordered into artist/album folders, with tracks directly in the album folders”, and pressed the “OK” button.

You can click on the Advanced selector at the top right of the page and then choose some other options, if desired.

On the next screens, I skipped the metadata language info (as it was fine), kept the default port mapping selection, accepted the terms of use, and clicked on the finished button.

At this point, I could click on the “Manual Login” and log in with the credentials I set up. Under the settings (gear at top right of screen), I did some more settings.

Under “Network”, I set the “LAN Networks” to the CIDR for my local network. Under “DLNA”, I checked the “Enable DLNA Server” box and chose my user under the “Default User” entry. Under “Plugins”, I clicked the “Catalog” button and, under General, installed the Sonos plugin.

With these changes, I clicked on the “Dashboard” button, clicked on the power button icon at the top, and selected to restart the Emby server to apply all the changes.

 

Partial success…

As-is, I can now access the URL (port 8096) from my web browser on my Mac or phone, and select and play music. However, the “Play On” menu (square box at the top right of each page) only has the selection of the web browser I’m using. I cannot see my Sonos speaker, my receiver that has DLNA support, or any other DLNA devices.

I found out that the issue is with how DLNA works. From my basic understanding, the DLNA server will multicast M-SEARCH UDP packets to 239.255.255.250, using port 1900. DLNA devices will multicast NOTIFY UDP packets to the same address and port. When I was using a Docker container, the container was using Host networking, and thus was using the same IP as the host, which is on my local network.
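
If you want to see this traffic for yourself, you can watch for these SSDP packets on a host on the local network (the interface name is a placeholder):

sudo tcpdump -i eth0 -n udp port 1900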

With Kubernetes, the Emby pod is running on the pod network (10.233.0.0/18), whereas all the DLNA devices are on the local network, and these multicast packets will not traverse subnets.

I tried one solution, and that was to add “hostNetwork: true” to the deployment’s template spec. Now, the pod is on the local network, DLNA multicasts are seen, and the Emby server can “Play On” devices like my Sonos. The problem here is that the pod has the same IP as the node that it was deployed on. This makes it hard to use, as the pod could be re-deployed on another node. Yeah, I could force it to one node, but if that node failed, I’d lose the Emby server.

 

Houston We Have Lift-Off!

I found that I can set up two interfaces on the pod, by using Multus. The plan is to create a second interface on the pod that is on the local network, so that it can send/receive DLNA multicasts, communicating with the DLNA devices. This requires several steps…

First, we need to install Multus. Fortunately, Multus works well with Calico, which I’m using on my network. Unfortunately, I could not use the “quick start” install methods for Multus on my arm64 Raspberry PI hardware. To get this installed, I first pulled the Multus repo:

git clone https://github.com/k8snetworkplumbingwg/multus-cni.git
cd multus-cni/deployments

I used the multus-daemonset.yml to create a daemonset that will install Multus on each node. However, the two image: lines need to be changed, as “ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot” is not for the arm64 platform. I think they have some multi-platform support, maybe with annotations, but I didn’t know how to set that up. So, I went to the Github Container Registry for Multus, clicked on the “OS/Arch” tab and then selected the image for arm64 and noted the version. In multus-daemonset.yml, I changed the image version:

diff --git a/deployments/multus-daemonset.yml b/deployments/multus-daemonset.yml
index 40fa5193..fa8bde5c 100644
--- a/deployments/multus-daemonset.yml
+++ b/deployments/multus-daemonset.yml
@@ -179,7 +179,7 @@ spec:
       serviceAccountName: multus
       containers:
       - name: kube-multus
-        image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot
+        image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-debug@sha256:351652b583600b0d0d704269882fd2fa53395c5ce4602a76a2799960b2c06dce
         command: ["/thin_entrypoint"]
         args:
         - "--multus-conf-file=auto"
@@ -204,7 +204,7 @@ spec:
           mountPath: /tmp/multus-conf
       initContainers:
       - name: install-multus-binary
-        image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot
+        image: ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-debug@sha256:351652b583600b0d0d704269882fd2fa53395c5ce4602a76a2799960b2c06dce
         command: ["/install_multus"]
         args:
         - "--type"

Now, I can “kubectl apply -f multus-daemonset.yml” to install the daemonset. Once done, I checked that the pods are running on each node:

kubectl get pods --all-namespaces | grep -i multus
kube-system   kube-multus-ds-4btnp   1/1   Running   0   20h
kube-system   kube-multus-ds-6p9vx   1/1   Running   0   20h
kube-system   kube-multus-ds-mzb4b   1/1   Running   0   20h
kube-system   kube-multus-ds-s7d8v   1/1   Running   0   20h
kube-system   kube-multus-ds-twn6k   1/1   Running   0   20h
kube-system   kube-multus-ds-vqxh8   1/1   Running   0   20h
kube-system   kube-multus-ds-wwnbj   1/1   Running   0   20h

On a node, you can check that there is a /etc/cni/net.d/00-multus.conf file as the lexically first file. Now, a network attachment definition can be created (I added it to the k8s-emby.yaml file, after the namespace definition):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
namespace: emby
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "10.11.12.0/24",
"rangeStart": "10.11.12.211",
"rangeEnd": "10.11.12.215",
"routes": [
{ "dst": "10.11.12.0/24" }
],
"gateway": "10.11.12.1"
}
}'

In the metadata, I specified the “emby” namespace, so that this is visible to the emby pod. The config section is a CNI configuration. Of note is that the master attribute names the node’s interface (eth0 on my PIs) that the macvlan interface will be attached to. For IPAM, I used the CIDR of my local network as the subnet, and used a range of IPs that is outside of any DHCP pool, LoadBalancer pool, and existing static IPs. I set the route destination to the local network (not a default route, so that it doesn’t interfere with pod traffic).

The final step is to modify the Emby deployment so that, when the Emby pod is created, it uses the new network attachment definition and creates two interfaces. This is done as an annotation under the deployment’s template metadata (the added annotation lines are shown below) in the k8s-emby.yaml file:

...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emby
  namespace: emby
  labels:
    app: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        app: emby
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf
    spec:
...

Now, when we apply the deployment, the pod will have two interfaces. The main interface, eth0, and the additional net1 interface:

3: eth0@if40: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP qlen 1000
    link/ether be:0f:38:6b:bc:ea brd ff:ff:ff:ff:ff:ff
    inet 10.233.115.96/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bc0f:38ff:fe6b:bcea/64 scope link
       valid_lft forever preferred_lft forever
4: net1@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 22:76:a6:b2:34:9f brd ff:ff:ff:ff:ff:ff
    inet 10.11.12.213/24 brd 10.11.12.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::2076:a6ff:feb2:349f/64 scope link
       valid_lft forever preferred_lft forever

Now, I can access the UI at the service’s public address, and when I click on the “Play On” button, I see all the DLNA devices that receive streamed music. Yay!

 

Remote/Secure Access

Everything is working locally, but I wouldn’t mind being able to play music on my phone when I’m away from home. I started planning this and decided on a few things:

  • Use the domain name that I purchased and create subdomains for each app.
  • Use the Dynamic DNS service that I purchased to map my domain to my home router.
  • Configure the router to map HTTP and HTTPS requests to an Ingress controller, which will route the requests to the apps, based on the subdomain used.
  • Use Let’s Encrypt so that all HTTPS requests have a valid certificate (from a Certificate Authority).

Prep Work

I have the domain registration (e.g. my-domain.com) and I have created subdomains for my apps (e.g. music.my-domain.com). I know my Dynamic DNS service domain name, so I created CNAME DNS records to point the domain and all the subdomains to that DDNS name.

With my router and the Dynamic DNS service, I have configured it so that the DDNS domain name is always pointing to my router’s WAN address (which is a DHCP address from my service provider and can change).
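
You can check that a subdomain chains through the CNAME to your router’s current WAN address (the domain name is the example used above):

dig +short music.my-domain.com
dig +short music.my-domain.com CNAME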

Kubernetes Work

With the external stuff out of the way (mostly), I could focus on connecting up the Kubernetes side. I installed Traefik for an ingress controller. I like this better than NGINX, because it works well with apps that are in namespaces:

helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik

This is running in the default namespace, but can be run in a specific namespace, if desired. I made sure the pod and service were running. On my router, I set port forwarding of HTTP and HTTPS requests to the IP of the Traefik service. That will cause all external requests to use the Traefik ingress for routing. Next, install the cert-manager:

helm repo add jetstack https://charts.jetstack.io --force-update
helm install \
cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.15.0 \
  --set crds.enabled=true

Obviously, you can use the latest version of cert-manager that is compatible with the Kubernetes version you are running. You’ll see pods, services, deployments, and replica sets created (and running) for the cert-manager.
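
A quick way to confirm everything came up:

kubectl get all -n cert-manager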

I created a work area to hold manifests for the resources that will be created:

mkdir -p ~/kubernetes/traefik
cd ~/kubernetes/traefik

For the Let’s Encrypt certificates, we’ll test everything out with staging certificates, and then once that is all working, we can switch to production certificates. This is done, because there is rate-limiting on production certificates and we don’t want to have multiple failures to hit the limit and block us.

Here is the staging issuer (emby-issuer.yaml):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: emby-issuer
  namespace: emby
spec:
  acme:
    email: your-email-address
    # We use the staging server here for testing, to avoid hitting rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store it
      name: emby-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik

Note that this is in the same namespace as the app, the staging Let’s Encrypt server is used, and you provide a contact email address. For the production Issuer, emby-prod-issuer.yaml, we have:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: emby-prod-issuer
  namespace: emby
spec:
  acme:
    email: your-email-address
    # This uses the production server, now that testing is done
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store it
      name: emby-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik

Pretty much the same thing, only using the production Let’s Encrypt server and a different name for the issuer. Do a “kubectl apply -f” for each of these, and then do a “kubectl get issuer -A” to make sure they are ready. You can check “kubectl describe issuer -n emby emby-issuer” and, under the Status section, see that the staging issuer is registered and ready:

    Reason:  ACMEAccountRegistered
    Status:  True
    Type:    Ready

Now, I create an Ingress for the app that will be used with the staging certificate:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emby
  namespace: emby
  annotations:
    cert-manager.io/issuer: "emby-issuer"
spec:
  tls:
    - hosts:
        - music.my-domain.com
      secretName: tls-emby-ingress-http
  rules:
    - host: music.my-domain.com
      http:
        paths:
          - path: /emby
            pathType: Prefix
            backend:
              service:
                name: emby-service
                port:
                  name: http

Of note is that there is an annotation that refers to the cert-manager staging issuer, and the subdomain name that will be used for this Emby app is specified both as the host and in the TLS section. If desired, you can leave out the annotation and the TLS section, and test out accessing the Emby app by using HTTP (e.g. http://music.my-domain.com/emby). That is what I did to make sure the ingress alone was OK.

This ingress will take requests to music.my-domain.com/emby/… and pass them to the service “emby-service” (running in namespace “emby”) using the port defined in the service with the name “http” (e.g. 8096).
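
Before the public DNS and TLS pieces are in place, you can sanity-check this routing from inside the LAN by pinning the host name to the Traefik service IP (the address below is a placeholder for your Traefik LoadBalancer IP):

curl -sI --resolve music.my-domain.com:80:10.11.12.200 http://music.my-domain.com/emby/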

By applying emby-ingress.yaml, you will initiate the process of creating a staging certificate. A certificate will be created, but not ready. This will trigger a certificate request and then an order. The order will create a challenge that will verify the challenge URL is reachable, and then it will obtain the certificate from Let’s Encrypt. Here are the get commands you can use for the resources, and then you can do describe commands for the specific resources:

kubectl get certificate -A
kubectl get certificaterequest -A
kubectl get order -A
kubectl get challenge -A

It will take some time for all this to happen, but you can “describe” the challenge and check the other resources to see when they are valid/ready/approved. Don’t worry if you see a 404 status on the challenge initially. It should clear after 30 seconds or so.
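
One convenient way to watch the whole chain until the challenge disappears and the certificate goes Ready:

watch kubectl get certificate,certificaterequest,order,challenge -n emby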

When the challenge is completed successfully, the challenge resource will be removed and there will be a new secret with the name of your certificate (e.g. tls-emby-ingress-http) in the namespace of the app. This secret would be used for the certificate that users would see when accessing your domain. Granted, it is from the staging server, so there would be a warning about the validity, but now you can repeat the process with the production certificate and then visitors would see a valid certificate.

Here is the production ingress (emby-prod-ingress.yaml) that can be used with the production issuer that was previously created:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emby
  namespace: emby
  annotations:
    cert-manager.io/issuer: "emby-prod-issuer"
spec:
  tls:
    - hosts:
        - music.my-domain.com
      secretName: emby-prod-cert
  rules:
    - host: music.my-domain.com
      http:
        paths:
          - path: /emby
            pathType: Prefix
            backend:
              service:
                name: emby-service
                port:
                  name: http

I used different names for the issuer and secret, so that there was no conflict with the staging ones. You can delete the staging ingress, issuer, and secret, once this is working. Here is output of a successful production certificate:

kubectl get cert -n emby
NAME             READY   SECRET           AGE
emby-prod-cert   True    emby-prod-cert   122m

kubectl get certificaterequest -n emby
NAME               APPROVED   DENIED   READY   ISSUER             REQUESTOR                                         AGE
emby-prod-cert-1   True                True    emby-prod-issuer   system:serviceaccount:cert-manager:cert-manager   122m

kubectl get order -n emby
NAME                          STATE   AGE
emby-prod-cert-1-2302305457   valid   122m

 

Forcing HTTPS

Right now, it is possible to use both http://music.my-domain.com/emby/ and https://music.my-domain.com/emby/. I would like to redirect all HTTP requests to HTTPS. To do that, I’ll use the Traefik redirect middleware by creating this manifest (redirect2https.yaml):

# Redirect to https
apiVersion: v1
kind: Namespace
metadata:
  name: secureapps
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect2https
  namespace: secureapps
spec:
  redirectScheme:
    scheme: https

As you can see, I have the namespace “secureapps”. If desired, you can use the namespace of a single app (e.g. “emby”), if you only want the redirection to apply there. You can alternatively modify the Traefik Helm chart (do a “helm show values traefik/traefik > values.yaml”, set “ports.web.redirectTo: websecure”, and then update the chart) to apply it to all ingresses. I have not tried that method.

Now, we update the ingress manifest to add this annotation (this is the top of the emby-ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emby
  namespace: emby
  annotations:
    cert-manager.io/issuer: "emby-issuer"
    traefik.ingress.kubernetes.io/router.middlewares: secureapps-redirect2https@kubernetescrd
spec:

The new line is the router.middlewares annotation, which has the namespace-middleware pair (secureapps-redirect2https). If you wanted this to apply only to Emby, you could change the namespace of the Middleware to “emby” and use “emby-redirect2https”.

I deleted the ingress I was currently using, deleted the secret for the cert that was generated, and then applied the Middleware manifest and the ingress that was updated. A new cert was created and now, HTTP requests are redirected to HTTPS!

 

Removing The Remote Access Setup

Delete the issuers and ingress that you have (e.g. use kubectl delete -f emby-prod-ingress.yaml). You can then remove Traefik:

helm delete traefik

And the cert-manager:

helm delete -n cert-manager cert-manager
kubectl delete crd virtualservers.k8s.nginx.org
kubectl delete crd virtualserverroutes.k8s.nginx.org
kubectl get crd | grep cert | cut -f1 -d" " | xargs kubectl delete crd

And finally any secrets that were created for certificates:

kubectl delete -n emby secret emby-issuer-account-key emby-prod-cert tls-emby-ingress-http

 

Things To Explore

Traefik also has an IngressRoute mechanism that seems to be quite flexible and a lot of their documentation (like for Let’s Encrypt setup) uses this instead of Ingress. It may be worthwhile using that as, at first blush, it seems like a newer way of doing things.

Consider removing the /emby prefix from the path for accessing remotely. This would only apply, if Emby was the only web service being provided. If you have multiple web services, then you need to keep the prefix to discern which service to use.

Consider using one certificate for all sub-domains.

 

 

 

June 12

Ad-Blocking With PI-Hole

I had PI-Hole running on a standalone Raspberry PI, but wanted to move this to my Kubernetes cluster. Digging around, I found a useful article on how to add PI-Hole to Kubernetes, which not only talked about using PI-Hole, but also about having redundant instances, with info on keeping them in sync. It used MetalLB, ingress, and cert-manager for Let’s Encrypt certificates – something I was interested in.

There was another article, based on using Helm and having some monitoring set up. I may try that someday.

 

A Few Things

First, as expected, this article had an older version of PI-Hole (2022.12.1). I tried the latest version (at this time, 2024.05.0), but the pods were stuck in crash loops. What I found out was that, for liveness/readiness, the YAML specified an HTTP get at the root of the Lighttpd web server. When using the 2023.02.1 pihole image it worked, but with 2023.02.2 it failed.

Trying curl http://127.0.0.1/ inside the pod showed a 403 Forbidden error. If I tried to access http://127.0.0.1/admin, I’d get a 301 Moved Permanently with a ‘/admin/’ path. If I did http://127.0.0.1/admin/, I’d get a 302 Found response with path ‘login.php’. When I did http://127.0.0.1/admin/login.php, I’d get a 200 OK result with content.

So, I changed the liveness and readiness probe configuration to add a path field with ‘/admin/login.php’, and then the pods would come up successfully.
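
You can run the same check the probes will perform by exec-ing into a pod (pod naming per the stateful set shown below):

kubectl exec -it -n pihole pihole-0 -- curl -sI http://127.0.0.1/admin/login.php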

Second, for the PI-Hole admin web pages, I chose to use a network type of LoadBalancer (instead of ClusterIP and then setting up an ingress IP). Accessing locally is fine, as I just use the IP assigned by the load balancer. The article talks about setting up a certificate using Let’s Encrypt, to be able to access remotely.

I already have a domain name, and I’m using Dynamic DNS to redirect that domain to my router’s WAN IP. But, I’m currently port forwarding external HTTP/HTTPS traffic to my standalone Raspberry PI for a music server that uses Let’s Encrypt for certificates.

For now, I think I’ll just access my PI-Hole admin page locally. I will, however, have to figure out how to set up Let’s Encrypt once I move my music server and other web apps to the Kubernetes cluster, so it will be useful to keep this info in mind.

 

Setting Up PI-Hole

I’m doing the same thing as the article, running three replicas of the PI-Hole pods, and I altered the liveness/readiness check. Here is my manifest.yaml in pieces:

apiVersion: v1
kind: Namespace
metadata:
  name: pihole
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pihole-configmap
  namespace: pihole
data:
  TZ: "America/New_York"
  PIHOLE_DNS_: "208.67.220.220;208.67.222.222"


This sets up a namespace for PI-Hole, defines the timezone I'm using, and the upstream DNS servers that I wanted to use (OpenDNS). You can customize these as desired.
---
apiVersion: v1
kind: Secret
metadata:
  name: pihole-password
  namespace: pihole
type: Opaque
data:
  WEBPASSWORD: <PUT_BASE64_PASSWORD_HERE> # Base64 encoded

This is the password that will be used when logging into the PI-Hole admin page. You should encode this using “echo -n ‘MY PASSWORD’ | base64” and place the encoded string in the WEBPASSWORD attribute.
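
For example, assuming a throwaway password of ‘MY PASSWORD’ (the value shown is just its base64 encoding):

echo -n 'MY PASSWORD' | base64            # TVkgUEFTU1dPUkQ=
echo -n 'TVkgUEFTU1dPUkQ=' | base64 -d    # decode, to double-check

Alternatively, kubectl can generate the encoded Secret manifest for you:

kubectl create secret generic pihole-password -n pihole --from-literal=WEBPASSWORD='MY PASSWORD' --dry-run=client -o yaml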

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pihole
  namespace: pihole
spec:
  selector:
    matchLabels:
      app: pihole
  serviceName: pihole
  replicas: 3
  template:
    metadata:
      labels:
        app: pihole
    spec:
      containers:
        - name: pihole
          image: pihole/pihole:2024.05.0
          envFrom:
            - configMapRef:
                name: pihole-configmap
            - secretRef:
                name: pihole-password
          ports:
            - name: svc-80-tcp-web
              containerPort: 80
              protocol: TCP
            - name: svc-53-udp-dns
              containerPort: 53
              protocol: UDP
            - name: svc-53-tcp-dns
              containerPort: 53
              protocol: TCP
          livenessProbe:
            httpGet:
              port: svc-80-tcp-web
              path: /admin/login.php
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              port: svc-80-tcp-web
              path: /admin/login.php
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            failureThreshold: 10
          volumeMounts:
            - name: pihole-etc-pihole
              mountPath: /etc/pihole
            - name: pihole-etc-dnsmasq
              mountPath: /etc/dnsmasq.d
  volumeClaimTemplates:
    - metadata:
        name: pihole-etc-pihole
        namespace: pihole
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 3Gi
    - metadata:
        name: pihole-etc-dnsmasq
        namespace: pihole
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 3Gi

This is the stateful set that will create three replicas of the PI-Hole pods. I’m using the latest version at this time (2024.05.0), have modified the liveness/readiness checks as mentioned above, and am using PVs (Longhorn) for storing configuration.

---
apiVersion: v1
kind: Service
metadata:
  name: pihole
  namespace: pihole
  labels:
    app: pihole
spec:
  clusterIP: None
  selector:
    app: pihole
---
kind: Service
apiVersion: v1
metadata:
  name: pihole-web-svc
  namespace: pihole
spec:
  selector:
    app: pihole
    statefulset.kubernetes.io/pod-name: pihole-0
  type: LoadBalancer
  ports:
    - name: svc-80-tcp-web
      port: 80
      targetPort: 80
      protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: pihole-dns-udp-svc
  namespace: pihole
  annotations:
    metallb.universe.tf/allow-shared-ip: "pihole"
spec:
  selector:
    app: pihole
  type: LoadBalancer
  ports:
    - name: svc-53-udp-dns
      port: 53
      targetPort: 53
      protocol: UDP
---
kind: Service
apiVersion: v1
metadata:
  name: pihole-dns-tcp-svc
  namespace: pihole
  annotations:
    metallb.universe.tf/allow-shared-ip: "pihole"
spec:
  selector:
    app: pihole
  type: LoadBalancer
  ports:
    - name: svc-53-tcp-dns
      port: 53
      targetPort: 53
      protocol: TCP

These are the services for the UI and for DNS. The first (headless) service is for the StatefulSet. Of note, the metallb.universe.tf/allow-shared-ip annotation lets the TCP and UDP DNS services share the same load balancer IP. I used a load balancer for the web UI as well (instead of using ClusterIP and setting up an ingress – maybe that will bite me later).

With this manifest, you can “kubectl apply -f manifest.yaml” and then look for all three of the pods to start up. You should be able to run nslookup/dig commands using the IP of the DNS service as the server to verify that DNS is working, and you can browse to the IP of the pihole-web-svc service with a path of /admin/ (e.g. http://10.11.12.203/admin/). Use the password you defined in the manifest to log in and see the ad blocker in operation.
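
For example, a quick check (the 10.11.12.x addresses are placeholders for whatever MetalLB assigned):

kubectl get pods -n pihole               # expect pihole-0, pihole-1, and pihole-2 Running
kubectl get svc -n pihole                # note the EXTERNAL-IPs MetalLB assigned
dig @10.11.12.202 google.com +short      # placeholder IP of the DNS service
nslookup doubleclick.net 10.11.12.202    # a blocklisted domain should resolve to 0.0.0.0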

Keeping The PI-Hole Pods In Sync

As mentioned in the article, we have three PI-Hole pods (one primary, two secondary), but need to keep their databases in sync. To do this, Orbital Sync is used to back up the primary pod’s database and restore it to the secondary pods’ databases. Here is the orbital-sync.yaml manifest:

apiVersion: v1
kind: ConfigMap
metadata:
  name: orbital-sync-config
  namespace: pihole
data:
  PRIMARY_HOST_BASE_URL: "http://pihole-0.pihole.pihole.svc.cluster.local"
  SECONDARY_HOST_1_BASE_URL: "http://pihole-1.pihole.pihole.svc.cluster.local"
  SECONDARY_HOST_2_BASE_URL: "http://pihole-2.pihole.pihole.svc.cluster.local"
  INTERVAL_MINUTES: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orbital-sync
  namespace: pihole
spec:
  selector:
    matchLabels:
      app: orbital-sync
  template:
    metadata:
      labels:
        app: orbital-sync
    spec:
      containers:
        - name: orbital-sync
          image: mattwebbio/orbital-sync:latest
          envFrom:
            - configMapRef:
                name: orbital-sync-config
          env:
            - name: "PRIMARY_HOST_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: pihole-password
                  key: WEBPASSWORD
            - name: "SECONDARY_HOST_1_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: pihole-password
                  key: WEBPASSWORD
            - name: "SECONDARY_HOST_2_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: pihole-password
                  key: WEBPASSWORD

It runs every minute, and uses the secret that was created with the password to access PI-Hole. You can look at the orbital sync pod log to see that it is backing up and restoring the database among the PI-Holes.
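
For example:

kubectl apply -f orbital-sync.yaml
kubectl logs -n pihole deployment/orbital-sync --tail=20    # look for backup/restore messages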

 

Finishing Touches

Under the UI’s local DNS entries section, I manually entered the hostname (with a .home suffix) and IP address for each of my devices on the local network, so that I can access them by “name.home”.

I did not set up DHCP on PI-Hole, as I used my router’s DHCP configuration.

To use the PI-Hole as the DNS server for all systems in your network, you can specify the IP of the PI-Hole on each host as the only DNS server. If you specify more than one DNS server, then depending on your OS, it may use the other server(s) at times and bypass the ad-blocking.

For me, I have all my hosts using the router as the primary DNS server. The router is configured to use the PI-Hole as the primary server, and then a public server as the secondary server. Normally, requests would always go to the Pi-Hole, unless for some reason it was down. This was advantageous for two reasons. First, when I had my standalone PI-Hole, if it crashed, there still was DNS resolution. Second, it made it easy to switch from the standalone PI-Hole to the Kubernetes one, by just changing the router configuration.

The only odd thing with this setup is that, when I use my laptop away from home, my router’s IP is (obviously) not available. I’ve been getting around this by using the macOS “Location” feature: the “Home” location uses my router’s IP for DNS, and the “Roaming” location uses a public DNS server.

I guess I could set things up so that the DNS ports on my domain name (which points to my router via Dynamic DNS) port forward to the PI-Hole IP, but I didn’t want to expose that to the Internet.

Category: bare-metal, Kubernetes, Raspberry PI | Comments Off on Ad-Blocking With PI-Hole
June 3

Kubespray Add-Ons

In Part IV of the PI cluster series, I mention how to set up Kubespray to create a cluster. You can look there for how to set up your inventory, and for the basic configuration settings for Kubespray. In that series, I mention how to add more features after the cluster is up. Some are pretty simple, and some require manual steps to get everything set up.

However, you can also have Kubespray install some “add-on” components, as part of the cluster bring-up. In many cases, this makes the process more automated, and “easier”, but it does have some limitations.

First, you will be using the version and configuration that is defined in Kubespray’s Ansible templates and roles.  Granted, you can always customize Kubespray, with the caveat of having to keep your changes up to date with upstream.

Second, removing the feature on a running cluster can be more difficult. You’ll have to manually delete all the resources (e.g. daemonsets, deployments, etc.), some of which may be hard to identify (CRDs, RoleBindings, secrets, etc.). Looking in the Kubespray templates may provide some insight into the resources that were created.

You may be able to find manifests for the feature and version in the feature’s repo, pull them, and use “kubectl delete” on the manifests to remove the feature. Just note that there may be some differences between what is in the repo manifests for a version and what is in the manifests that Kubespray used. I haven’t tried it, but if there is a Helm-based version of the feature that matches what Kubespray installed, you might be able to “helm install” over the already-installed feature, and then “helm delete” it?

 

Kube VIP (Virtual IP and Service Load Balancing)

To add kube-vip as a Kubespray add-on, I did these steps before creating the cluster.

First, I modified the inventory, so that etcd would run on each of my control-plane nodes (versus a mix of control-plane and worker nodes).

Second, in inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml, I enabled strict ARP, used IPVS (instead of iptables) for kube-proxy, and excluded my local network from kube-proxy (so that kube-proxy would not clear entries that were created by IPVS):

kube_proxy_strict_arp: true
kube_proxy_mode: ipvs
kube_proxy_exclude_cidrs: ["CIDR_FOR_MY_LOCAL_NETWORK",]

Third, I enabled kube-vip in inventory/mycluster/group_vars/k8s_cluster/addons.yml. I turned on ARP (vs BGP), enabled the VIP for the control plane, and specified the address to use. I also selected load balancing of that VIP. I did not enable load balancing for services, but that is an option too:

kube_vip_enabled: true
kube_vip_arp_enabled: true
kube_vip_controlplane_enabled: true
kube_vip_address: VIP_ON_MY_NETWORK
loadbalancer_apiserver:
address: "{{ kube_vip_address }}"
port: 6443
kube_vip_lb_enable: true

# kube_vip_services_enabled: false
# kube_vip_enableServicesElection: true

I had tried this out, but found that the kube-vip container was showing connection refused and permission problems, so leader election was not working for the virtual IP chosen.

I finally found a bug report on the issue when using Kubernetes 1.29 with kube-vip. Essentially, when the first control plane node is starting up, the admin.conf file used for kubectl commands does not have the permissions needed by kube-vip at that point in the process. The kube-vip team needs to create their own config file for kubectl. In the meantime, the bug report is trying a work-around fix in Kubespray, by switching to the super-admin.conf file, which has the needed permissions at that point in time. However, the patch they have does not work. I did more hacking on it, and have this change, which works:

diff --git a/roles/kubernetes/node/tasks/loadbalancer/kube-vip.yml b/roles/kubernetes/node/tasks/loadbalancer/kube-vip.yml
index f7b04a624..b5acdac8c 100644
--- a/roles/kubernetes/node/tasks/loadbalancer/kube-vip.yml
+++ b/roles/kubernetes/node/tasks/loadbalancer/kube-vip.yml
@@ -6,6 +6,10 @@
     - kube_proxy_mode == 'ipvs' and not kube_proxy_strict_arp
     - kube_vip_arp_enabled
 
+- name: Kube-vip | Check if first control plane
+  set_fact:
+    is_first_control_plane: "{{ inventory_hostname == groups['kube_control_plane'] | first }}"
+
 - name: Kube-vip | Write static pod
   template:
     src: manifests/kube-vip.manifest.j2
diff --git a/roles/kubernetes/node/templates/manifests/kube-vip.manifest.j2 b/roles/kubernetes/node/templates/manifests/kube-vip.manifest.j2
index 11a971e93..7b59bca4c 100644
--- a/roles/kubernetes/node/templates/manifests/kube-vip.manifest.j2
+++ b/roles/kubernetes/node/templates/manifests/kube-vip.manifest.j2
@@ -119,6 +119,6 @@ spec:
   hostNetwork: true
   volumes:
   - hostPath:
-      path: /etc/kubernetes/admin.conf
+      path: /etc/kubernetes/{% if is_first_control_plane %}super-{% endif %}admin.conf
     name: kubeconfig
 status: {}

 

UPDATE: There is a fix that is in progress, which is a streamlined version of my change. Once that is merged, no patch will be needed.

With this change to Kubespray, I did a cluster create:

cd ~/workspace/kubernetes/picluster
poetry shell
cd ../kubespray
ansible-playbook -i ../picluster/inventory/mycluster/hosts.yaml -u ${USER} -b -vvv --private-key=~/.ssh/id_ed25519 cluster.yml

Everything was up and running, but kubectl commands were failing on my Mac, because the ~/.kube/config file uses the FQDN https://lb-apiserver.kubernetes.local:6443 for the server, and there is no DNS info on my Mac for this host name (it does work on the nodes, however). The simple fix was to replace the FQDN with the IP address selected for the VIP.
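
For example, on the Mac (the VIP 10.11.12.5 is a placeholder for whatever address you chose):

sed -i '' 's|https://lb-apiserver.kubernetes.local:6443|https://10.11.12.5:6443|' ~/.kube/config    # macOS sed; drop the '' on Linux
kubectl get nodes    # should now work from the Mac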

Now, all requests to that IP are redirected to the node that is currently running the API server. If the node is not available, IPVS will redirect to another control plane node.

MetalLB Load Balancer

Instead of setting this up after the cluster was created, you can opt to let Kubespray do this as well. In the inventory/mycluster/group_vars/k8s_cluster/addons.yml, I did these changes:

metallb_enabled: true
metallb_speaker_enabled: "{{ metallb_enabled }}"
metallb_namespace: "metallb-system"
metallb_protocol: "layer2"
metallb_config:
  address_pools:
    primary:
      ip_range:
        - FIRST_IP_IN_RANGE-LAST_IP_IN_RANGE
      auto_assign: true
  layer2:
    - primary

Besides enabling the feature, I made sure that it was using layer two (vs layer three), and, under the config, set up an address pool with the range of IPs on my local network that I wanted to use for load balanced IPs. You can specify a CIDR instead, if desired.

Now, when the cluster is created with Kubespray, MetalLB will be set up, and you can create services of type “LoadBalancer” and an IP from the pool will be assigned.
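
For example, a quick test (the names are arbitrary):

kubectl create deployment echo --image=nginx
kubectl expose deployment echo --port=80 --type=LoadBalancer
kubectl get svc echo                   # EXTERNAL-IP should come from the MetalLB pool
kubectl delete svc,deployment echo     # clean up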

As mentioned in the disclaimer above, the version of Kubespray I have installs MetalLB 0.13.9. I could have overridden ‘metallb_version’ to a newer version, like ‘v0.14.5’, but the MetalLB templates in Kubespray use the older v0.11.0 kubebuilder image in several places. To get the same versioning as when installing MetalLB via Helm, I would have to modify the templates to specify v0.14.0. I also saw other configuration differences from the CRDs used in the Helm version, like setting the tls_min_version argument and not setting some priority or priorityClassName configurations.

NGINX Ingress

This one is pretty easy to enable, by changing this setting in inventory/mycluster/group_vars/k8s_cluster/addons.yml:

ingress_nginx_enabled: true

When the cluster comes up, there will be an ingress daemonset, which creates ingress controller pods on each node, and an NGINX ingress service with an IP from the MetalLB address pool.
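
For example, to verify once the cluster is up (the namespace is assumed to be Kubespray's default of ingress-nginx):

kubectl get ds,pods -n ingress-nginx    # one ingress controller pod per node
kubectl get svc -n ingress-nginx        # service EXTERNAL-IP from the MetalLB pool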

There are example YAML files in the MetalLB/NGINX Ingress post, that will allow you to create pods and services, and an ingress resource that allows access via path prefixes.

Category: bare-metal, Kubernetes, Raspberry PI | Comments Off on Kubespray Add-Ons
June 1

High Availability?

OK, so I have a cluster with three control plane nodes and four worker nodes (currently). However, if I shut down the control plane node that is hosting the API server, I lose API access. 🙁

I’ve been digging around, and it looks like kube-vip would be a good solution, as it allows me to create a virtual IP for the API server, and then does load balancing and leader election among the control plane nodes, so that if the node providing the API fails, the VIP can switch to another control plane node. In addition, kube-vip can do load balancing for services (I’m not sure if that makes MetalLB redundant).

Before installing kube-vip, I needed to change the cluster configuration. I changed the inventory, so that etcd is running ONLY on the control-plane nodes (and not a mix of control plane and worker nodes).

Next, I made these changes to inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml:

kube_proxy_mode: ipvs
kube_proxy_strict_arp: true
kube_proxy_exclude_cidrs: ["CIDR_OF_LOCAL_NETWORK",]

This has kube-proxy also using IPVS (versus iptables) and running in strict ARP mode (needed for kube-vip). Lastly, to prevent kube-proxy from clearing IPVS settings made by kube-vip, the local network IPs must be excluded. With those changes, I re-created the cluster and was ready to install kube-vip…
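
A quick way to confirm the settings took effect, once the cluster is up (these live in the standard kube-proxy ConfigMap):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E 'mode:|strictARP:|excludeCIDRs'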

There was a Medium article by Chris Kirby on using a Helm install of kube-vip for HA. It used an older version of kube-vip (0.6.4) and values.yaml settings for K3s. I added the Helm repo for kube-vip, and pulled the values.yaml file to be able to customize it:

mkdir ~/workspace/kubernetes/kube-vip
cd ~/workspace/kubernetes/kube-vip
helm repo add kube-vip https://kube-vip.github.io/helm-charts
helm repo update

wget https://raw.githubusercontent.com/kube-vip/helm-charts/main/charts/kube-vip/values.yaml

Here are the changes I made to the values.yaml, saving it as values-revised.yaml:

6c6
< pullPolicy: IfNotPresent
---
> pullPolicy: Always
8c8
< # tag: "v0.7.0"
---
> tag: "v0.8.0"
11c11
< address: ""
---
> address: "VIP_ON_LOCAL_NETWORK"
20c20
< cp_enable: "false"
---
> cp_enable: "true"
22,23c22,24
< svc_election: "false"
< vip_leaderelection: "false"
---
> svc_election: "true"
> vip_leaderelection: "true"
> vip_leaseduration: "5"
61c62
< name: ""
---
> name: "kube-vip"
86c87,88
< nodeSelector: {}
---
> nodeSelector:
> node-role.kubernetes.io/control-plane: ""
91a94,97
> - effect: NoExecute
> key: node-role.kubernetes.io/control-plane
> operator: Exists
>
93,101c99,104
< # nodeAffinity:
< # requiredDuringSchedulingIgnoredDuringExecution:
< # nodeSelectorTerms:
< # - matchExpressions:
< # - key: node-role.kubernetes.io/master
< # operator: Exists
< # - matchExpressions:
< # - key: node-role.kubernetes.io/control-plane
< # operator: Exists
---
> nodeAffinity:
> requiredDuringSchedulingIgnoredDuringExecution:
> nodeSelectorTerms:
> - matchExpressions:
> - key: node-role.kubernetes.io/control-plane
> operator: Exists

Besides using a newer kube-vip version, this enables load balancing for control plane nodes and services, selects nodes that have the control-plane label (present, with no value, unlike the article), adds a matching toleration, and sets the node affinity.

With this custom values file, I could do the install:

helm install my-kube-vip kube-vip/kube-vip -n kube-system -f values-revised.yaml

With this, all the kube-vip pods were up, and the daemonset showed three desired, current, and ready. However, when I changed the server IP to my VIP in ~/.kube/config and tried kubectl commands, they failed, saying that there was an x509 certificate for each of the control plane nodes and a cluster IP, but not for the VIP I’m using.

This can be fixed by re-generating the certificates on every control plane node:

sudo su
cd
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' --insecure-skip-tls-verify > kubeadm.yaml

mv /etc/kubernetes/pki/apiserver.{crt,key} ~
kubeadm init phase certs apiserver --config kubeadm.yaml

In the output, I saw the IPs of the control plane nodes AND the VIP I defined. Next, the kube-apiserver container needs to be stopped and removed, so that a new one is started.

crictl ps | grep kube-apiserver
crictl stop <ID-of-apiserver>
crictl rm <ID-of-apiserver>

Now, kubectl commands using the VIP will be redirected to the control plane node running the API server, and if that node is unavailable, the requests will be redirected to another control plane node. You can see that by doing arping of the VIP and, when the leadership changes, the MAC displayed will change.
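
For example, from a Linux host on the local network (the VIP and interface are placeholders):

arping -I eth0 10.11.12.5    # note the MAC in the replies
# power off the current leader, run arping again, and the MAC should change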

Kind of involved, but this works!

I did have some problems, when playing with HA for the API. I had rebooted the control plane node that was actively providing the API. Kube-vip did its job, and IPVS redirected API requests to another control plane node that was “elected” as the new leader. All good so far.

However, when that control plane node came back up, it would appear in the “kubectl get node” output, but showed as “NotReady”, and it never seemed to become ready. It appeared that the network was not ready, and the calico-node pod was showing an error. I played around a bit, but couldn’t seem to clear the error.

One thing I did was a Kubespray upgrade-cluster.yml with the --limit argument, specifying the node and one of the other control plane nodes (so that control plane “facts” were gathered). The kube-vip pod for the node was still failing with a connection refused error. On the node, I stopped/removed the kube-apiserver container and then the kube-vip container, and then kube-vip no longer had any errors.

The only thing was that ipvsadm on the node did not show a load balancing entry for the VIP, and the other two control plane nodes only had their own IPs in the load balancing entry for the VIP. I didn’t try rebooting another control plane node.

Category: bare-metal, Kubernetes, Raspberry PI | Comments Off on High Availability?
May 14

OpenVPN

After looking at several posts on OpenVPN, I decided to go with this one, which uses Helm, works with Kubernetes (versus just Docker), supports ARM64 processors, and had some easy configuration built-in. It hasn’t been updated in over a year, so I forked the repo and made some changes (see details below).

Here are the steps to set this up…

Pull Repo

To start, pull my version of the k4kratik k8s-openvpn repository:

cd ~/workspace/kubernetes/
git clone https://github.com/pmichali/k8s-openvpn.git
cd k8s-openvpn

 

Build

When working on a Mac, you can install Docker Desktop to run docker commands from the command line. You can alter the Dockerfile.aarch64 to use a newer Alpine image (and hence a newer OpenVPN image). Build a local copy of the openvpn image:

cd build/
docker build -f Dockerfile.aarch64 -t ${YOUR_DOCKER_ID}/openvpn:latest .

Set up a Docker account at hub.docker.com and create an access token so that you can log in. Push your image up to DockerHub:

docker login
docker push ${YOUR_DOCKER_ID}/openvpn:latest
cd ../deploy/openvpn


Customize

In k8s-openvpn/deploy/openvpn there is a values.yaml file; copy it to ${USER}-values.yaml and customize it for your needs. In my case, I made the following changes:

  • Under ‘image’ ‘repository’, set the username to YOUR_DOCKER_ID, so that it loads your image.
  • Under the ‘service’ section, used a custom ‘externalPort’ number.
  • Under the service section, set a ‘loadBalancerIP’ address that is in my local network.
  • Set ‘DEFAULT_ROUTE_ENABLED: false’ so the pod’s host route is not used; a route is provided later instead.
  • Decided to limit the number of clients by un-commenting ‘max-clients 5’
  • Under ‘serverConf’ section:
    • Added a route to my local network using ‘push “route <NETWORK>/<PREFIX>”‘.
    • Added my local DNS server with ‘push “dhcp-option DNS <IP>”‘.
    • Added OpenDNS as a backup DNS with ‘push “dhcp-option DNS 208.67.222.222″‘.

You can also change server and client configuration settings in deploy/openvpn/templates/config-openvpn.yaml, if desired.

 

Deploy

With the desired changes, use helm to deploy OpenVPN:

helm upgrade --install openvpn . -n k8s-openvpn -f ${USER}-values.yaml --create-namespace

Check that the pods, services, deployment, replicas are all up:

kubectl get all -n k8s-openvpn

This will take quite some time (15+ minutes), as it builds all the certificates and keys for the server. Once running, you can log into the pod and check the server config settings in /etc/openvpn/openvpn.conf.
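
For example (assuming the chart names the deployment ‘openvpn’; verify with the ‘kubectl get all’ output above):

kubectl exec -n k8s-openvpn -it deploy/openvpn -- cat /etc/openvpn/openvpn.conf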

 

Create Users

With the server running, you can create client configuration files:

cd ../../manage
bash create_user.sh NAME [DOMAIN-NAME]

Once the client config is created, the config file can be imported into your OpenVPN client and you can test connecting. I use the OpenVPN client, which is available on several platforms.

There are two options when creating the client config. With just an (arbitrary) name for the device, it will create a config file (NAME.ovpn) where the OpenVPN client will connect to the OpenVPN server on the local network. In my case, that is the IP address that I specified in the customized values.yaml file with the ‘loadBalancerIP’ setting.

For example, if you set ‘loadBalancerIP’ to 10.10.10.200 and ‘externalPort’ to 6666, the client will try to connect to 10.10.10.200:6666. Obviously, you can do that only from your local network. To use the VPN when out at Wi-Fi hot-spots, you can use the next option.

If you also add a domain name argument, then the OpenVPN client will try to connect to a server at that domain. You can purchase a domain name that maps to your home router’s WAN IP address, and use a service like DynDNS to keep the IP updated for the domain (typically you get an IP from your ISP via DHCP, and that can change over time). On your router, you can port forward from the ‘externalPort’ specified in the customized values.yaml to that same port on the OpenVPN server, which is at the IP specified by ‘loadBalancerIP’.

For example, with loadBalancerIP set to 10.10.10.200 and ‘externalPort’ set to 6666, and a domain mydomain.com, the client would try to connect to mydomain.com:6666, which could be done from anywhere. You would need to make sure the dynamic IP for mydomain.com is pointing to your WAN IP address of your router, and do port forwarding for port 6666 to 10.10.10.200 port 6666.
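
For example (a hypothetical device name and domain):

bash create_user.sh my-laptop                  # my-laptop.ovpn connects to 10.10.10.200:6666 (local network only)
bash create_user.sh my-laptop mydomain.com     # my-laptop.ovpn connects to mydomain.com:6666 (from anywhere)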

 

Ciphers/Digests

When I upgraded the Alpine OS for the VPN container, which in turn selects the version of OpenVPN (2.6.10 at the time of this posting), I wanted to make sure that the configuration settings for ciphers/digests were current.

In deploy/openvpn/templates/config-openvpn.yaml there is a section called openvpn.conf, which has the server configuration settings. Here are the pertinent entries in that section:

 auth SHA512
...
tls-version-min 1.2
...
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384:TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-ECDHE-ECDSA-WITH-CHACHA20-POLY1305-SHA256

With the OpenVPN pod running, you can exec into the pod and run these commands to see the ciphers that are available. For TLS ciphers, this command shows the ciphers for TLS 1.3 and newer, and for TLS 1.2 and older:

/usr/sbin/openvpn --show-tls

In my case, as I was supporting TLS 1.2 as a minimum, the existing set of ciphers were in the 1.2 list, so I left it alone. Likewise the following command can show the digests available:

/usr/sbin/openvpn --show-digests

Again, I saw SHA512 in the list, so I left this alone. Lastly, in the values.yaml file where you can customize the ‘cipher’ clause, it now has:

cipher: AES-256-CBC

Previously, it had the value ‘AES-256-GCM’; however, this is not used when using TLS authentication. Also, I changed the protocol from TCP to UDP, which, as I understand it, is more robust.

 

Details of Modifications Made

build/Dockerfile.aarch64

  • Using newer alpine image (based on edge tag 20240329)
  • Updated the added repo to use the newer test repo location – main and community already exist.


deploy/openvpn/templates/config-openvpn.yaml

  • Removed client config settings that were generating warning log messages with opt-verify set.
  • Setting auth to sha512 on client and server.
  • Disabled allowing compression on server and used of compression (security risk).
  • Added settings that were on client to server for mute, user, group, etc.
  • Set opt-verify for testing, but then commented out, as it is deprecated.
  • Specifying TLS min 1.2 on server.

deploy/openvpn/templates/openvpn-deployment.yaml

  • Turned off node affinity for lifecycle=ondemand. Does not exist on my bare metal cluster.
  • Newer busybox version 1.35 for init container.

deploy/openvpn/values.yaml

  • Using my docker hub repo image for openvpn.
  • Altered ports used for loadbalancer service (arbitrary) and fixed IP.
  • Using Longhorn for storage class.
  • Using different client network (arbitrary).
  • Using udp protocol.
  • Changed K8s pod and service subnets to match what I use (arbitrary).
  • Set to redirect all traffic through gateway.
  • Using AES-256-CBC as default cipher.
  • Pushed route for DNS servers I wanted.

manage/create_user.sh

  • Allow to pass domain name vs using published service IP.
  • Fixed namespace.
  • Fixed kubectl exec syntax for newer K8s.

manage/revoke_user.sh

  • Fixed incorrect usage message.
  • Fixed namespace.
  • Fixed kubectl exec syntax for newer K8s.
Category: bare-metal, Kubernetes, Raspberry PI | Comments Off on OpenVPN
May 13

Cluster Upgrade – Challenge

With my cluster running a Kubespray version around 2.23.3 and Kubernetes 1.28.2, I wanted to take a shot at updating my cluster, as there were newer versions available. There were all sorts of problems along the way, so I’ll try to cover what I did, and what (finally) worked.

For reference, my cluster has longhorn storage, prometheus/grafana/loki, metalLB, nginx-ingress, and velero installed, as well.

But, before doing anything, I decided to move things around a bit in my directory structure, so that I didn’t have git repos inside my ~/workspace/picluster git repo. I created a ~/workspace/kubernetes directory and placed several repos as peers in that area:

kubernetes
├── grafana-dashboards-kubernetes
├── ingress
├── kubespray
├── mysql
├── nginx-ingress
├── picluster
└── velero

The rest of the components remained in the picluster area:

kubernetes/picluster
├── inventory
├── longhorn
├── metallb
├── minio
├── minio-k8s
├── monitoring
└── playbooks

With this setup, I proceeded to identify which Kubespray version to upgrade to, and whether or not this was a multi-version upgrade. I found that the latest release tag was 2.24.0, but there were many more commits since then, so I created a tag at my current version (0f243d751), then checked out the desired version (fdf5988ea) and created a tag there.

Next, I wanted to make sure that all the tools I’m using match what Kubespray expects for the commit that I’m using. There is a requirements.txt file that calls out all the versions. I used ‘poetry show’ to see what versions I had, and then used ‘poetry add COMPONENT==VERSION’ to make sure that I had compatible versions. For example:

poetry add ansible==9.5.1

I copied the sample inventory area into my ~/workspace/kubernetes/picluster/inventory area and merged in my existing hosts.yaml, so that I kept any customizations that were originally made in k8s-cluster.yml.

With this, I was ready to go to the kubespray directory and do the upgrade using…

cd ~/workspace/kubernetes/kubespray
ansible-playbook upgrade-cluster.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -u ${USER} -v --private-key=~/.ssh/id_ed25519 -e upgrade_cluster_setup=true

Initially, I saw that the calico-node pods were stuck in a crash loop…

calico-node: error while loading shared libraries: libpcap.so.0.8: cannot open shared object file: No such file or directory

It turns out that the 2.24.0+ release of Kubespray uses calico v3.27.2, which has issues on arm64 processors. The choice was to go back to v3.27.0, which apparently has a memory leak, or forward to v3.27.3, where the problem with the library was fixed. I decided to do the latter, but when I overrode calico_version, the upgrade failed, because there is no checksum for that version.

I found out that in the kubespray area, there is a scripts directory, with a download_hash.sh script, which would read the updated calico_version in ./roles/kubespray-defaults/defaults/main/download.yml and update the roles/kubespray-defaults/defaults/main/checksums.yml file. Well, it wasn’t as easy as that, because I was using a MacBook and the grep command does not have a -P (perl) option, used in the script. So…

I copied the Dockerfile to HashMaker.Dockerfile, and trimmed it to this:

# syntax=docker/dockerfile:1

FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37

ENV LANG=C.UTF-8 \
DEBIAN_FRONTEND=noninteractive \
PYTHONDONTWRITEBYTECODE=1

WORKDIR /kubespray

# hadolint ignore=DL3008
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt-get update -q \
&& apt-get install -yq --no-install-recommends \
curl \
python3 \
python3-pip \
sshpass \
vim \
openssh-client \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/log/*

RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
--mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
pip install --no-compile --no-cache-dir -r requirements.txt \
&& find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

COPY scripts ./scripts

I copied scripts/download_shas.sh to scripts/download_shas_pcm.sh and made these changes (as inside the container there is no git repo):

9c9
< checksums_file="$(git rev-parse --show-toplevel)/roles/kubespray-defaults/defaults/main/checksums.yml"
---
> checksums_file="./roles/kubespray-defaults/defaults/main/checksums.yml"
11c11
< default_file="$(git rev-parse --show-toplevel)/roles/kubespray-defaults/defaults/main/main.yml"
---
> default_file="./roles/kubespray-defaults/defaults/main/main.yml"

With these changes, I did the following to build and run the container, where I could run the scripts/download_shas_pcm.sh script to update the checksums.yml file with the needed checksums…

docker buildx build --platform linux/arm64 -f HashMaker.Dockerfile -t hashmaker:latest .

docker run --rm -it --mount type=bind,source="$(pwd)"/roles,dst=/kubespray/roles --mount type=bind,source="${HOME}"/.ssh/id_ed25519,dst=/root/.ssh/id_ed25519 hashmaker:latest bash
./scripts/download_shas_pcm.sh
exit

(Yeah, I could have invoked the script instead of running bash and then invoking the script inside the container).

With this, one would think that we are ready to do the upgrade. Well, I tried, but I hit some other issues…

  • Some nodes were updated to 1.29.3 kubernetes, but some were still at 1.28.2
  • The prometheus/grafana pods were in a crash loop, complaining that there were multiple default datasources.
  • Longhorn was older 1.5.3, and I figured it would be simple to helm upgrade to 1.6.1 – it wasn’t

Someone on Slack said that I needed to do the Kubespray upgrade with “-e upgrade_cluster_setup=true” added. I did that, but it did not work, and I still had three nodes with 1.29.3 and four with 1.28.2.

I found the problem with the versions. On the four older nodes, at some point kubeadm and/or kubelet had been installed (as Ubuntu packages). As a result, there was the newer /usr/local/bin/kubelet (v1.29.3), and the package-installed /usr/bin/kubelet (v1.28.2). For systemd, in addition to /etc/systemd/system/kubelet.service, which used /usr/local/bin/kubelet in ExecStart, there was a kubelet.service.d directory with a 10-kubeadm.conf file that used /usr/bin/kubelet in ExecStart. This one seemed to take precedence.

To resolve, I removed the Ubuntu kubeadm package, which depended on kubelet, and I removed the kubelet.service.d directory and reloaded systemd. My only guess is that at one point I tried installing kubeadm. Now, upgrades will show all nodes using the newer 1.29.3 kubernetes.
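
For the record, this is roughly the cleanup I ran on each affected node (a sketch, assuming the Ubuntu package names kubeadm and kubelet):

sudo apt-get remove -y kubeadm kubelet                # remove the distro packages
sudo rm -rf /etc/systemd/system/kubelet.service.d     # drop the 10-kubeadm.conf drop-in
sudo systemctl daemon-reload
sudo systemctl restart kubelet
which -a kubelet                                      # should now show only /usr/local/bin/kubelet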

I got into real trouble with the Prometheus/Grafana issue. I tried deleting pods, removing replicasets that were no longer in use, and then tried a helm upgrade of kube-prometheus-stack. That caused even more problems, as the upgrade failed, and I then had a whole bunch of failing pods and replicasets that were not ready. The Prometheus pods were complaining about multiple attachments to the same PV (I was using Longhorn storage). I couldn’t clear the errors or remove the PVCs. I’m not sure if the problem was that I didn’t use all the arguments that I had used when I initially installed Prometheus.

I tried updating Longhorn (pulling the 1.6.1 values.yaml, changing the policy from Delete to Retain and the type from ClusterIP to NodePort, and then doing a helm upgrade with the modified values.yaml), and that was a mess too. Crash loops, and replicasets not working.

I ended up deleting the cluster entirely. I was concerned that maybe there was an issue with upgrading in general, so I installed the older kubespray/kubernetes cluster, without installing any other components (Longhorn, Prometheus), and did an upgrade. Everything worked fine.

I need to retry this, maybe with the upgrade of Prometheus using the same args as install did. I’m also worried about the multiple attachment issue with the PV.

In the meantime, I wanted to trying updating Longhorn…

Originally, Longhorn was at 1.5.3, and 1.6.1 was available. I had tried a helm upgrade (after I had upgraded the cluster), and had all sorts of problems. So, I created a new cluster with the latest Kubernetes, made sure everything was up, and then helm installed 1.5.3, using the modified values.yaml I had with the Retain policy and NodePort:

helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.5.3 --values values-1.5.3.yaml

I then did a helm upgrade to 1.6.1…

helm upgrade longhorn longhorn/longhorn --namespace longhorn-system --version 1.6.1

There were some pods in crash loops, and items not ready. I deleted the older replicasets. It looked like the deployment had an annotation for 1.6.1, but was still calling out a 1.5.3 image. Looking at the Longhorn notes, I saw that I could use kubectl to upgrade, and even though I had used Helm install/upgrade before, I decided to try it.

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.1/deploy/longhorn.yaml

There were a bunch of warnings when running the command, but all pods came up and the deployment showed 1.6.1 for the image.

I’m not sure if there was something wrong with doing the helm upgrade, if it was because I was customizing the values.yaml files, or if it was because I was using NodePort. With the kubectl apply, the type was set to ClusterIP.

I’ve got more research to do here to isolate this issue.

I tested installing a 1.28.2 cluster, and then upgraded to 1.29.3 (doing just control plane nodes and etcd node first, and then all the worker nodes). Every pod was up and running, daemonsets/replicasets/deployments were all working, and things were looking pretty good.

There were some pre-upgrade replicasets still present (with no needed/available instances), so I deleted them. I did a snapshot and backup of a Longhorn volume and that worked as well. I do see two problems so far.

First, under Grafana, the data sources were gone. I could not modify the Loki data source (it is built-in), but I created another one. The original was giving connection-refused errors; I think the IP it uses is the old one. There also was no Prometheus data source. I created one using the cluster IP, and it works as well.

Second, I tried to do a backup of the Kubernetes cluster using Velero, and it failed. I tried viewing the log, but there was none. Checking ‘velero backup-location get’ showed that the backup location was not available. It seems like various components are using older IPs/ports?

PROBLEM FOUND… It appears that when an upgrade occurs and the coredns version has changed, a new deployment, replicaset, service, and pods are created with the new version AND they get a new nameserver IP (10.133.0.10). However, the existing pods (and new ones created) are still referring to the old nameserver IP (default is 10.133.0.3). There is a service for that old nameserver IP, but it is not resolving addresses. If you do an nslookup and specify the new nameserver IP, it will work, but that doesn’t help everything that is running, or new pods created, which are using the old nameserver IP.
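
The mismatch is easy to see (the IPs are from my cluster; yours will differ):

kubectl get svc -n kube-system | grep dns              # both the old and new coredns services are listed
kubectl exec -it <some-pod> -- cat /etc/resolv.conf    # pods still point at the old 10.133.0.3
# from inside a pod: the new IP resolves, the old one does not
nslookup kubernetes.default.svc.cluster.local 10.133.0.10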

WORKAROUND: If an install (cluster.yml) is done again, using the exact same settings, the first DNS service becomes active again. One can then delete the newly created service, and the unused replicasets. I tried repeating the upgrade, but that did not resolve the issue.

There does appear to be a download of the new coredns and restart of the systemd-resolved service. I don’t know if there is some mechanism to switch pods to use the new IP or if somehow the new service should have replaced the original and use the same IP.

After messing with things over a few weeks, I found out quite a bit…

Calico checksums: I see that the newer Kubespray master branch versions now have checksums for calico v3.27.3. As a result, I don’t need to go through the contortions of creating my own branch of Kubespray and building the checksums for calico v3.27.3. I just picked a newer commit of Kubespray (not the current tagged version, as it still did not have the checksums).

Upgrading with CoreDNS changes: I found out that the kernels in newer Ubuntu versions actually have the “dummy” kernel module. I see it in the current 6.5.0-1015-raspi kernel, and I think it was in 1013 and 1014. The implication: in the past, I was unable to enable node-local DNS in Kubespray, because this module was needed. After updating the OS on my nodes to this newer kernel, I could run Kubespray installs and upgrades with the ‘enable_nodelocaldns’ setting, and upgrades now had working DNS, even when the version of coredns changed. There were some replicasets that remained and were not active, but the upgrades are working.

Scheduling Disabled: I was seeing several issues when doing upgrades. In one case, I found a worker node whose status was “Ready”, but with “SchedulingDisabled” indicated. I did a “kubectl uncordon NODENAME” and that enabled scheduling. I’m not sure why it was not completely upgraded.

Upgrading a single node: I found that with Kubespray, you can use the command line argument --limit "NODE1,NODE2,NODE3" on upgrade (and other commands) to limit the affected nodes to one or more specified in the limit clause. However, when I did an upgrade specifying ONLY a worker node, the process failed at this step:

TASK [kubernetes-apps/network_plugin/multus : Multus | Start resources] ********
fatal: [niobe -> {{ groups['kube_control_plane'][0] }}]: FAILED! => {"msg": "Error in jmespath.search in json_query filter plugin:\n'ansible.vars.hostvars.HostVarsVars object' has no attribute 'multus_manifest_2'"}

The problem is that I don’t have Multus enabled! It turns out that there is a bug in Kubespray, such that you need to have a control plane node included in the limit clause, so that it will see that Multus is disabled and will not attempt to start it on the worker node. I just re-ran the upgrade, specifying one control plane node (already upgraded) and the worker node I wanted to update.

Node name changes: OK, this was stupid. I named my nodes after characters from the movie “The Matrix” (Apoc, Cypher, Morpheus, …). Since the original install, I’ve been playing with updating Kubespray versions, updating Kubernetes, installing things like Prometheus and Longhorn, and working through the problem I had with the CoreDNS version changing during upgrades. Recently, I realized that one of my worker nodes was actually named incorrectly. It was “niobi” and not “niobe”. I changed my inventory and renamed the hostname on the node. At one point, I decided to retest upgrades (with node-local DNS enabled). I did this by checking out tags that I had created for my repo and the Kubespray repo, performing a clean install, updating the repos to newer tags or the latest commit, updating the Poetry environment so that the correct tool versions were used with the Kubespray version I was trying, and then doing an upgrade. The upgrade was failing on node “niobe”, and it took me a while to realize that when I did the install, the node was named “niobi”, but when I did the upgrade, it was named “niobe” (with the same IP). The (simple) fix was to correct the hostname in the inventory before doing the initial install.

In the future, I think it is probably best to do the Kubernetes/Kubespray update separately from other components. In addition, I think the update should be done one node at a time, starting with control plane nodes, and then worker nodes. Kubespray does have a limit option to restrict a run to specific nodes. They say to run facts.yml to update the info on all nodes, update the control plane/etcd nodes, and then do the worker nodes:

ansible-playbook playbooks/facts.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -u ${USER} -v --private-key=~/.ssh/id_ed25519

ansible-playbook upgrade-cluster.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -e kube_version=v1.29.3 --limit "kube_control_plane:etcd" -u ${USER} -v --private-key=~/.ssh/id_ed25519

ansible-playbook upgrade-cluster.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -e kube_version=v1.29.3 --limit "morpheus:niobi:switch" -u ${USER} -v --private-key=~/.ssh/id_ed25519

I used this on a retry of the upgrade, and the facts and control plane/etcd steps worked fine, but I hit an error in the downloading step for the worker nodes. Just note that, with the current Kubespray, you probably should include one control plane node when upgrading one or more worker nodes, so that the configuration is handled correctly.

Category: bare-metal, Kubernetes, Raspberry PI | Comments Off on Cluster Upgrade – Challenge
February 25

MySQL With Replicas on Raspberry PI Kubernetes

I know that I’ll need a database for several projects that I want to run on my Raspberry PI based Kubernetes cluster, so I did some digging for blogs and tutorials on how to set this up.

I found some general articles on how to set up MySQL, and even one that talked about setting up multiple pods so that there are replicas for the database. Cool!

However, I had difficulty finding information on doing this with ARM64 based processors. I found this link on how to run a MySQL operator and InnoDB with multiple replicas on ARM64 processors, but it had two problems. First, it used a fork of the upstream repository for the MySQL operator and had not been updated in over a year, so the images (which were in a repo in that account) were older. Second, it made use of a “mysql-router” image, from a repo in the same account, but that image didn’t exist!

So, I spent several days trying to figure out how to get this to work, and then how to use it with the latest images that are available for ARM64 processors. I could not figure out how to build images from a forked repo, as it seems that the build scripts are set up for Oracle’s CI/CD system and there is no documentation on how to build manually. In any case, using information from this forked repo, and after doing a lot of sleuthing, I have it working…

The MySQL Operator repo contains both the operator and the InnoDBCluster components. They are designed to work with AMD64 based processors, and there is currently no ARM64 support configured. When I asked on the MySQL operator Slack channel in February 2024, they indicated that the effort to support ARM64 had stalled, so I decided to figure out how to use this repo, customizing it to provide the needed support.

I used Helm, versus manifests, to set things up. First, I set up a work area and prepared to access my Raspberry PI Kubernetes cluster:

cd ~/workspace/picluster
poetry shell

mkdir mysql
cd mysql

Add the mysql-operator repo:

helm repo add mysql-operator https://mysql.github.io/mysql-operator/
helm repo update

The operator chart can now be installed, but we need to tell it to use an ARM64 image of the Oracle community version of the operator. Here are the available operator versions to choose from. I’ll use the 8.3.0-2.1.2-aarch64 version:

helm install django-mysql-operator mysql-operator/mysql-operator -n mysql-operator --create-namespace --set image.tag="8.3.0-2.1.2-aarch64"

This creates a bunch of resources, most noticeably a deployment, replica set, and pod for the operator, in the mysql-operator namespace. The name, ‘django-mysql-operator’, is arbitrary. Check to make sure everything is running with:

kubectl get all -n mysql-operator
NAME                                  READY   STATUS    RESTARTS   AGE
pod/mysql-operator-6cc67fd566-v64dp   1/1     Running   0          7h21m

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/mysql-operator   ClusterIP   10.233.19.231   <none>        9443/TCP   7h21m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql-operator   1/1     1            1           7h21m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-operator-6cc67fd566   1         1         1       7h21m

Next, we can install the helm chart for the MySQL InnoDBCluster. Again, we need to select from available ARM64 versions for the community operator, the community router (should be able to use the same version), and the MySQL server (pick a tag that supports both AMD64 and ARM64 – I used 8.0). Since there are so many changes, we’ll use a values.yaml file, instead of command line --set arguments.

We can get the current values.yaml file with:

helm show values mysql-operator/mysql-innodbcluster > innodb-values.yaml

In that file, you can see the defaults that would be applied, like the number of replicas, and can do some additional customizations too. In all cases, if you use a values.yaml file, you MUST provide a root password. For our case, we select self-signed certificates, and specify ARM64 images for the container, sidecar, and a bunch of init containers. Here are just the changes needed, using the versions I chose at the time of this writing:

cat innodb-values.yaml
credentials:
  root:
    password: "PASSWORD YOU WANT"
# routerInstances: 1
# serverInstances: 3
tls:
  useSelfSigned: true
podSpec:
  initContainers:
    - name: fixdatadir
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
    - name: initconf
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
    - name: initmysql
      image: mysql/mysql-server:8.0
  containers:
    - name: mysql
      image: mysql/mysql-server:8.0
    - name: sidecar
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
router:
  podSpec:
    containers:
      - name: router
        image: container-registry.oracle.com/mysql/community-router:8.3.0-aarch64

Using this file, we can create the three MySQL server pods with the command:

helm install django-mysql mysql-operator/mysql-innodbcluster -f innodb-values.yaml

It’ll create a deployment, a replica set, a stateful set, services, and three pods, along with three PVs and PVCs, and a new innodbcluster resource and instance. The name provided, ‘django-mysql’, will be the prefix for resources. They will take a while to come up, so have patience. Once the pods and statefulset are up, you’ll see a router pod created and started:

$ kubectl get all
NAME                                       READY   STATUS    RESTARTS       AGE
pod/django-mysql-0                         2/2     Running   0              6h55m
pod/django-mysql-1                         2/2     Running   0              6h55m
pod/django-mysql-2                         2/2     Running   0              6h55m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                    AGE
service/django-mysql             ClusterIP   10.233.59.48   <none>        3306/TCP,33060/TCP,6446/TCP,6448/TCP,6447/TCP,6449/TCP,6450/TCP,8443/TCP   6h55m
service/django-mysql-instances   ClusterIP   None           <none>        3306/TCP,33060/TCP,33061/TCP                                               6h55m

NAME                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/longhorn-iscsi-installation   7         7         7       7            7           <none>          51d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/django-mysql-router   1/1     1            1           6h55m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/django-mysql-router-696545f47b   1         1         1       6h55m

NAME                            READY   AGE
statefulset.apps/django-mysql   3/3     6h55m

When everything is running, you can access instance zero of the MySQL pods with:

kubectl exec -it pod/django-mysql-0 -c mysql -- /bin/bash
bash-4.4$ mysqlsh -u root -p

CREATE DATABASE IF NOT EXISTS todo_db;
USE todo_db;
CREATE TABLE IF NOT EXISTS Todo (task_id int NOT NULL AUTO_INCREMENT, task VARCHAR(255) NOT NULL, status VARCHAR(255), PRIMARY KEY (task_id));
INSERT INTO Todo (task, status) VALUES ('Hello','ongoing');

Enter the password you defined in innodb-values.yaml, and you can now create a database and tables, and populate table entries. If you exec into one of the other MySQL pods, the information will be there as well, but will be read-only.
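
For example, to confirm replication on another instance (it will prompt for the root password):

kubectl exec -it pod/django-mysql-1 -c mysql -- mysql -u root -p -e "SELECT * FROM todo_db.Todo;"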

There are other customizations, like changing the number of replicas, the size of the PVs used, etc.

You can reverse the process, by first deleting the MySQL InnoDBCluster:

helm delete django-mysql

Wait until the pods are gone (it takes a while), and then delete the MySQL operator:

helm delete django-mysql-operator -n mysql-operator

That should get rid of everything, but if not, here are other things that you can delete. Note: My storage class, Longhorn, is set to retain the PVs, so they must be manually deleted (I can’t think of an easier way):

kubectl delete sa default -n mysql-operator
kubectl delete sa mysql-operator-sa -n mysql-operator

kubectl delete pvc datadir-django-mysql-0
kubectl delete pvc datadir-django-mysql-1
kubectl delete pvc datadir-django-mysql-2
kubectl delete pv `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-0")].metadata.name}'`
kubectl delete pv `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-1")].metadata.name}'`
kubectl delete pv `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-2")].metadata.name}'`

I would like to figure out how to create a database and user, as part of the pod creation process, rather than having to exec into the pod and use mysql or mysqlsh apps.

I’d really like to be able to specify a secret for the root password, instead of including it in a values.yaml file.

Category: bare-metal, Kubernetes, Raspberry PI | Comments Off on MySQL With Replicas on Raspberry PI Kubernetes
February 12

S3 Storage In Kubernetes

In Part VII: Cluster Backup, I set up Minio running on my laptop to provide S3 storage that Velero can use to back up the cluster. In this piece, Minio will be set up “in cluster”, using Longhorn. There are a few articles discussing how to do this. I didn’t try one method, but did give another a go (with a bunch of modifications), and am documenting it here.

For starters, I’m using the Helm chart for Minio from Bitnami:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

We’ll grab the configuration settings so that they can be modified:

mkdir -p ~/workspace/picluster/minio-k8s
cd ~/workspace/picluster/minio-k8s
helm show values bitnami/minio > minio.yaml

Create the namespace, and a secret to be used to access Minio:

kubectl create namespace minio
kubectl create secret generic minio-root-user --namespace minio --from-literal=root-password="DESIRED-PASSWORD" --from-literal=root-user="minime"

In minio.yaml, set auth.existingSecret to “minio-root-user” so that the secret will be used for authentication, set defaultBuckets to “kubernetes”, and set service.type to “NodePort”. The Minio deployment can then be created:

helm install minio bitnami/minio --namespace minio --values minio.yaml
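
Alternatively, the same three changes can be passed on the command line instead of editing minio.yaml (a sketch; confirm the key names against the chart's values):

helm install minio bitnami/minio --namespace minio --set auth.existingSecret=minio-root-user --set defaultBuckets=kubernetes --set service.type=NodePort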

The Minio console can be accessed by using a browser, a node’s IP and the NodePort port:

kubectl get svc -n minio
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
minio   NodePort   10.233.60.69   <none>        9000:32602/TCP,9001:31241/TCP   78m

In this case, using one of the nodes (10.11.12.190): http://10.11.12.190:31241. Use the username and password you defined above when creating the secret.

Now, we can install Velero, using the default bucket we had created (one could create another bucket from the Minio UI), credentials file, and cluster IP for the Minio service:

cat minio-credentials
[default]
aws_access_key_id = minime
aws_secret_access_key = DESIRED-PASSWORD

velero install \
     --provider aws \
     --plugins velero/velero-plugin-for-aws:v1.8.2 \
     --bucket kubernetes \
     --secret-file minio-credentials \
     --use-volume-snapshots=false \
     --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://10.233.60.69:9000

The backup location can be checked (and should be available):

velero backup-location get
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        kubernetes      Available   2024-02-12 20:43:23 -0500 EST   ReadWrite     true

Finally, you can test the backup and restore of a single deployment (using the example from Part VII, where we pulled the velero repo, which has an example NGINX app):

kubectl create namespace nginx-example
kubectl create deployment nginx --image=nginx -n nginx-example

velero backup create nginx-backup --selector app=nginx
velero backup describe nginx-backup
velero backup logs nginx-backup

kubectl delete namespace nginx-example

velero restore create --from-backup nginx-backup
velero restore describe nginx-backup-20240212194128

kubectl delete namespace nginx-example
velero backup delete nginx-backup
velero restore delete nginx-backup
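Once the one-off backup works, periodic backups can be set up with Velero’s schedule subcommand, which takes a cron expression. For example, a nightly 3 AM backup of the same app might look like:

velero schedule create nginx-daily --schedule="0 3 * * *" --selector app=nginx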

There is a Minio client (mc), although it seems to be designed for use with a cloud-based back end or a local installation. It has predefined aliases for Minio, and is designed to run and terminate on each command. Unfortunately, we need to set a new alias so that it can be used with later commands. We can hack our way into using it.

First, we need to know the Cluster IP address of the Minio service, so that it can be used later:

kubectl get svc -n minio
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
minio   NodePort   10.233.60.69   <none>        9000:32602/TCP,9001:31241/TCP   78m

We get the user and password, and then run the client so that an alias can be created (using the cluster IP, 10.233.60.69, in this case) and commands invoked.

export ROOT_USER=$(kubectl get secret --namespace minio minio-root-user -o jsonpath="{.data.root-user}" | base64 -d)
export ROOT_PASSWORD=$(kubectl get secret --namespace minio minio-root-user -o jsonpath="{.data.root-password}" | base64 -d)

kubectl run --namespace minio minio-client \
     --tty -i --rm --restart='Never' \
     --env MINIO_SERVER_ROOT_USER=$ROOT_USER \
     --env MINIO_SERVER_ROOT_PASSWORD=$ROOT_PASSWORD \
     --env MINIO_SERVER_HOST=minio \
     --image docker.io/bitnami/minio-client:2024.2.9-debian-11-r0 -- \
    /bin/bash
mc alias set myminio http://10.233.60.69:9000 $MINIO_SERVER_ROOT_USER $MINIO_SERVER_ROOT_PASSWORD 
mc admin info myminio
...
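While in that session, mc can also be used to poke around the bucket that Velero writes to, for example:

mc ls myminio/kubernetes
mc ls --recursive myminio/kubernetes/backups

(Velero stores its backup metadata under a backups/ prefix in the bucket.)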
Category: bare-metal, Kubernetes, Raspberry PI
February 3

More Power! Adding nodes to cluster

I’ll document the process I used to add two more Raspberry Pi 4s to the cluster that I’ve created in this series.

Preparing The PIs

With two new Raspberry PI 4s, PoE+ hats, SSD drives (2 TB this time), and two more UCTRONICS RM1U-3 trays each with an OLED display, power button, SATA Shield card, and USB3 jumper, I set out to assemble the trays and image them with Ubuntu.

Assembling the trays with the Raspberry PIs went well. In turn, I connected a keyboard, HDMI display, Ethernet cable, and power adapter to each (as I don’t have a PoE hub in my study). Once booted, I followed the steps in Part II of the series; however, there were some issues getting the OS installed.

First, the Raspberry PI Imager program has been updated to support PI 5s, so there were multiple menus, tabbed fields, etc. I decided to connect a mouse to the Raspberry PI, rather than navigate a maze of tab, enter, and arrow keys.

Second, when I went to select the Storage Device, the SSD drive was not showing up. I didn’t know if this was an issue with the UCTRONICS SATA Shield, the different brand of drive, the larger capacity, the newer installer, or the Raspberry PI itself. After a bunch of experiments to find the root cause, I found that I needed to image the SSD drive with the Raspberry PI Imager on my Mac (using a SATA-to-USB adapter), and then place it into the UCTRONICS tray along with the Raspberry PI; it would then boot from the SSD drive.

Third, for one of the two Raspberry PIs, this still did not work. I ended up installing the Raspberry PI OS on an SD card, updating the EEPROM and bootloader, and then net booting the Raspberry PI Installer; after that, the Raspberry PI would boot from the SSD drive. It’s probably a good idea to update the EEPROM and bootloader to the latest anyway.

Initial Setup

Like in Part II of the series, I picked IP addresses for the two units, added their MAC addresses to my router so that those IPs were reserved, added the host names to my local DNS server, and created SSH keys for each, using “ssh-copy-id” to copy those keys to all the other nodes and my Mac, and vice versa. Connectivity was all set.
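For each new node, the key setup amounts to something like the following (the pi user and morpheus host are placeholders):

ssh-keygen -t ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub pi@morpheus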

I decided NOT to do the repartitioning mentioned in Part III, and instead left the drive as one large 2 TB (1.8 TB actually) drive. My hope is that with Kubernetes I can monitor problems, so if I see log files getting out of hand, I can deal with it, rather than having fixed partitions for /tmp, /var, /home, etc. I did create a /var/lib/longhorn directory – not sure if Longhorn would create this automatically.

Node Prep

With SSH access to each of the PIs, I could run through the same Ansible scripts that were used to set up all the other nodes, as outlined in Part IV. Before running the scripts, I added the two nodes (morpheus, switch) to the hosts.yaml file in the inventory as worker nodes. There are currently three master nodes and four worker nodes.
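For reference, the hosts.yaml additions look roughly like this (kubespray inventory format; the IP addresses are placeholders, and the existing nodes are omitted):

all:
  hosts:
    morpheus:
      ansible_host: 10.11.12.195
      ip: 10.11.12.195
      access_ip: 10.11.12.195
    switch:
      ansible_host: 10.11.12.196
      ip: 10.11.12.196
      access_ip: 10.11.12.196
  children:
    kube_node:
      hosts:
        morpheus:
        switch: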

When running these ansible scripts, I specified both hosts at once, rather than doing one at a time. For example:

cd ~/workspace/picluster
ansible-playbook -i "morpheus,switch" playbooks/passwordless_sudo.yaml -v --private-key=~/.ssh/id_ed25519 --ask-become-pass
ansible-playbook -i "morpheus,switch" playbooks/ssh.yaml -v --private-key=~/.ssh/id_ed25519
...

Now that the nodes are ready, they can be added to the cluster. For a control plane node, the cluster.yml playbook is used:

cd ~/workspace/picluster/kubespray
ansible-playbook -i ../inventory/mycluster/hosts.yaml -u ${USER} -b -v --private-key=~/.ssh/id_ed25519 cluster.yml

Then, on each node, restart the NGINX proxy pod with:

crictl ps | grep nginx-proxy | awk '{print $1}' | xargs crictl stop

In our case, these will be worker nodes, and would be added with these commands (using --limit so other nodes are not affected):

ansible-playbook -i ../inventory/mycluster/hosts.yaml -u ${USER} -b -v --private-key=~/.ssh/id_ed25519 --limit=morpheus scale.yml
ansible-playbook -i ../inventory/mycluster/hosts.yaml -u ${USER} -b -v --private-key=~/.ssh/id_ed25519 --limit=switch scale.yml

These two nodes were added just fine, with Kubernetes version v1.28.5, just like the control plane node I added before (my older nodes are still at v1.28.2; see the note below on upgrading them).
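Kubespray does ship an upgrade-cluster.yml playbook that should handle upgrading the older nodes in place. I haven’t tried it yet, but I expect it would look something like:

cd ~/workspace/picluster/kubespray
ansible-playbook -i ../inventory/mycluster/hosts.yaml -u ${USER} -b -v --private-key=~/.ssh/id_ed25519 -e kube_version=v1.28.5 upgrade-cluster.yml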

Category: bare-metal, Kubernetes, Raspberry PI