July 10

Django App on Kubernetes

Viewmaster

For all the movies I own (500+), I had a spreadsheet listing them, so that when people visited, they could pick out a movie for us to watch. It was tedious: I’d have to print it or bring up the spreadsheet, and then, if they wanted to see a comedy, for example, sort by the “genre” column.

Wanting a better way to use this list of movies, I decided to make a web site that would show each movie’s title, genre, release date, rating, duration, and format (4K, Blu-ray, DVD). There were buttons to display the movies in multiple orders:

  • Alphabetical (e.g. do I have “The Matrix”?)
  • Genre, then alphabetical (e.g. what comedy movies do I have?)
  • Genre, date, then alphabetical (e.g. what are the newest “SCI-FI” movies?)
  • Date, then alphabetical (e.g. what are the new releases that I have?)
  • Collection, then date (e.g. Die Hard movies in order)
  • Format, then alphabetical (e.g. what 4K movies do I have?)

There is a search box to look for a specific title, an option to see more details on each movie (aspect ratio, audio, cost, and collection keyword), and an option to include Laser Discs. I don’t have an LD player anymore, but I use the covers of the movies as wall hangings and still have about 60 discs.

I created a Django app for the web site, set it up to run in a Docker container, and made a script to import the spreadsheet info I had into the movie database. This ran on a Raspberry Pi 4 and was accessible locally on my network.

Now that I have a Kubernetes cluster, I want to port this web-based Docker app into my cluster.

 

The Plan…

Here are the goals for this effort:

  • Use a deployment with one instance of the app running on a pod.
  • Instead of having a SQLite database in a file on the host, use a database like Postgres.
  • Have the database of movie information in Longhorn storage, so I can back it up.
  • Put confidential info into Secrets. Don’t have anything confidential in the app.
  • (Optionally) Make this web app accessible from outside my home, using HTTPS (make use of the NGINX Virtual Server I’ve already set up for my Emby music server).
  • Use a separate namespace for this app, rather than the “default”, to isolate things.

I found some videos on how to port Django apps to Kubernetes, and each did things slightly differently. So I used one method, sprinkled in some ideas from the others, and added some more things that I wanted. Let’s get started on the journey…

 

Collect Together The Needed Items

First, I cloned the Docker implementation of my app into my work area for Kubernetes. This has the typical Django development tree structure, plus a Dockerfile I used to package things up, and the SQLite3 database file that was used by that implementation (the Dockerfile mapped the ./DBase/movies.db file from the Git repo on the host to a mount point in the container – this way I could back up the database periodically).

You can use whatever Django app you have for the same porting effort, whether it has a Docker setup or not. Here is my viewmaster app as an example Django app:

cd ~/workspace/kubernetes/
git clone https://github.com/pmichali/viewmaster.git
cd viewmaster
mkdir deploy

The master branch has the code right before I started the porting effort. The k8s-port branch has any app code changes, and the manifests and supporting files that I used to port to Kubernetes.
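If you want to start from the finished version, you can check out that branch:

git checkout k8s-port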

 

Prepare Settings

Create an environment file with the values you want for secrets (viewmaster-secrets.env):

cd deploy

SECRET_KEY='a unique string that django will use'
DB_HOST=viewmaster-postgres
POSTGRES_DB=name-of-your-database
POSTGRES_USER=name-for-your-db-user
POSTGRES_PASSWORD='pass-you-want-for-database'
PUBLIC_DOMAIN=movies.my-domain.com

The first is a secret key used for cryptographic signing in Django. The last one is for app use, and the others are for the database (fill in your own values for each). Create the secrets and then remove the file:

kubectl create namespace viewmaster
kubectl create secret generic viewmaster-secrets -n viewmaster --from-env-file=viewmaster-secrets.env
rm viewmaster-secrets.env
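You can confirm that the secret was created (the values are stored base64-encoded) with:

kubectl get secret viewmaster-secrets -n viewmaster -o yaml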

Create a config map, which has settings for both Django and a Postgres database (viewmaster-configmap.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: viewmaster-cm
  namespace: viewmaster
data:
  ALLOWED_HOSTS: "*"
  LOGLEVEL: "info"
  DEBUG: "0"
  PGDATA: "/var/lib/postgresql/data/db-files/"

Of note is PGDATA, which tells Postgres to use a directory below the mount point that we will create, so that Postgres will not complain about a non-empty directory (the volume will have a lost+found directory). Do a “kubectl apply -f viewmaster-configmap.yaml” to create the config map.
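You can double-check the values that pods will see with:

kubectl describe configmap viewmaster-cm -n viewmaster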

 

Deploy The Database

I created a manifest (postgres.yaml) with everything needed to deploy the Postgres database that I want to use:

apiVersion: v1
kind: Service
metadata:
  name: viewmaster-postgres
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  ports:
    - port: 5432
  selector:
    app: viewmaster
    tier: postgres
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: viewmaster-postgres-pvc
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: viewmaster
  labels:
    app: viewmaster-postgres
spec:
  selector:
    matchLabels:
      app: viewmaster
      tier: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: viewmaster
        tier: postgres
    spec:
      volumes:
        - name: viewmaster-data
          persistentVolumeClaim:
            claimName: viewmaster-postgres-pvc
      containers:
        - image: postgres:16.3-alpine
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: viewmaster-data
              mountPath: /var/lib/postgresql/data
          envFrom:
            - secretRef:
                name: viewmaster-secrets
            - configMapRef:
                name: viewmaster-cm

First, we have a headless service (no cluster IP assigned) on port 5432. Second is the 10 GB persistent volume claim using our default Longhorn storage. Finally, we have the deployment with a container using a current version of Postgres, referencing port 5432, and mounting the PVC at the data directory that Postgres uses. The environment settings used by Postgres will come from the secret and config map we created.

Do a “kubectl apply -f postgres.yaml”. There should be a deployment, replica set, service, and pod running for Postgres. In addition, there will be a 10 GB PV created and bound to the claim.
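To watch everything come up, including the PVC binding to a Longhorn-provisioned PV, you can do something like:

kubectl get deploy,rs,svc,pod,pvc -n viewmaster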

 

Modify App To Use Environment Variables

In preparation for running things under Kubernetes, we want to remove the hard-coding of secrets and other confidential information from the Django application, and obtain the values from environment variables that will be passed in. For the Viewmaster app, I moved to the movie_library/movie_library/ area in the repo and edited settings.py to change/add these lines:

import os

SECRET_KEY = os.environ.get('SECRET_KEY', 'changeme')

DEBUG = bool(int(os.environ.get('DEBUG', 0)))

ALLOWED_HOSTS = []
ALLOWED_HOSTS.extend(
    filter(
        None,
        os.environ.get('ALLOWED_HOSTS', '').split(','),
    )
)

MIDDLEWARE = [
    ...
    'whitenoise.middleware.WhiteNoiseMiddleware',
]

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ.get('DB_HOST'),
        'NAME': os.environ.get('POSTGRES_DB'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
    }
}

STATIC_URL = 'static/'
STATIC_ROOT = '/vol/web/static'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

We get the secret key, debug flag, and allowed hosts from environment variables passed to the app at startup. The database engine is set to Postgres, with environment variables used for the host, database name, username, and password (removing what was there for SQLite). I could have used database-agnostic names for these, but since they are shared with the Postgres pod, I used the same names (versus duplicating entries).

Because I switched from Django’s “runserver” to “gunicorn” and I’m not running in debug mode, I had to add the WhiteNoise middleware, and specify STATIC_ROOT and STATICFILES_STORAGE, so that static files could be located.

Since I didn’t want the movie listing to require the path /viewmaster/, I changed the urlpatterns entry in urls.py, in the ./movie_library/movie_library/ area of the repo, to use the root of the HTML tree:

 urlpatterns = [
- path('viewmaster/', include('viewmaster.urls')),
+ path('', include('viewmaster.urls')),

Another cleanup item in the Viewmaster project is an unused sqlalchemy import in ./movie_library/viewmaster/views.py (my bad). When converting over to Kubernetes, we won’t be including that package, so delete the import.

The latest code in the k8s-port branch of the repo has all these changes.

 

Build Image For Kubernetes

The next goal is to create a Docker image for the Django app. I already have a Dockerfile at the top of the repo (~/workspace/kubernetes/viewmaster/), so I’ll just modify it to look like this:

FROM python:3.12.4

# System packages, including the Postgres client
RUN apt-get update -y && apt-get install -y software-properties-common python3-pip postgresql-client

# Fault handler dumps traceback on seg faults
# Unbuffered sends stdout/stderr to log vs buffering
ENV CODEBASE=/code \
    PYTHONENV=/code \
    PYTHONPATH=/code \
    EDITOR=vim \
    PYTHONFAULTHANDLER=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONHASHSEED=random \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    POETRY_VERSION=1.8.2

# Install Poetry for dependency management
RUN pip3 install "poetry==$POETRY_VERSION"

# Copy over all needed files
WORKDIR /code
COPY poetry.lock pyproject.toml runserver.bash /code/
COPY movie_library/ /code/movie_library/

# Install project dependencies, using the pyproject.toml file
RUN poetry config virtualenvs.create false && \
    poetry install

EXPOSE 80

# CMD sleep infinity
CMD ["/code/runserver.bash"]

I included the Postgres client package, in case I wanted to access the database from this pod (it is included in the Postgres pod we already created). I removed the user account setup lines, and added a line to expose port 80. Other things to consider, when doing this, are whether you want to update the Python base image version and the Poetry version.

There are two other related changes. The runserver.bash file, in the same area, was changed to this:

#!/bin/bash
cd /code/movie_library
python manage.py collectstatic --noinput
python manage.py migrate
gunicorn -b :8080 movie_library.wsgi:application

Instead of running the built-in Django server, the script now runs collectstatic, applies migrations, and then starts the gunicorn server for our Python app on port 8080 (instead of 8642).

The pyproject.toml file, which contains the package definitions used, is changed to contain:

[tool.poetry]
name = "viewmaster"
version = "0.1.1"
description = "My movies"
authors = ["YOUR NAME <YOUR_EMAIL_ADDRESS>"]
readme = "README.md"
package-mode = false

[tool.poetry.dependencies]
python = "^3.12"
django = "^4.2.14"
django-auditlog = "^2.3.0"
psycopg = "^3.2.1"
gunicorn = "^22.0.0"
whitenoise = "^6.7.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

I bumped the minor version number. The xlrd, openpyxl, sqlalchemy, and pandas packages are removed, and the psycopg, gunicorn, and whitenoise packages are added. On your host, you can do ‘poetry update’ and, if needed, update versions in the pyproject.toml file for the versions you are using. When the Docker image is created, it will install these packages into the container and set up PATH to reference the environment.

Now, from the top of the repo, we can build the docker image locally with:

docker buildx build . -t YOUR_DOCKER_ID/viewmaster-app:v0.1.1

With that completed, and assuming you have an account set up on Docker Hub, you can push the image up to your account:

docker push YOUR_DOCKER_ID/viewmaster-app:v0.1.1

It’s a good idea to use a different version each time you update your app, so that when you deploy into Kubernetes it will download the updated image (assuming you update the deployment version, of course). Initially, I was using “latest”, but then I had to set the image pull policy for the container to “Always”, instead of “IfNotPresent”.
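For example, after building and pushing a new tag, you can point the deployment at it without editing the manifest (this assumes the deployment and container names that appear in the manifest later in this post):

docker buildx build . -t YOUR_DOCKER_ID/viewmaster-app:v0.1.2
docker push YOUR_DOCKER_ID/viewmaster-app:v0.1.2
kubectl set image -n viewmaster deployment/viewmaster app=YOUR_DOCKER_ID/viewmaster-app:v0.1.2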

 

Deploy The Django App

In the ./deploy/ area, create a manifest (django.yaml), to deploy the Viewmaster app:

apiVersion: v1
kind: Service
metadata:
  name: viewmaster-service
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  ports:
    - port: 8000
      targetPort: 8080
      name: http
  selector:
    app: viewmaster
    tier: app
  type: LoadBalancer

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: viewmaster-app-pvc
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: viewmaster
  namespace: viewmaster
  labels:
    app: viewmaster
spec:
  selector:
    matchLabels:
      app: viewmaster
      tier: app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: viewmaster
        tier: app
    spec:
      volumes:
        - name: viewmaster-app-data
          persistentVolumeClaim:
            claimName: viewmaster-app-pvc
      containers:
        - image: pmichali/viewmaster-app:v0.1.1
          imagePullPolicy: Always # IfNotPresent
          name: app
          ports:
            - containerPort: 8080
              name: app
          volumeMounts:
            - name: viewmaster-app-data
              mountPath: /vol/web
          envFrom:
            - secretRef:
                name: viewmaster-secrets
            - configMapRef:
                name: viewmaster-cm

We create a service listening on port 8000, using a load balancer for a “public” IP. A 10 GB persistent volume will be used for the app. Finally, there is a deployment with the container image that was built, a volume mapping for data, and environment information from the config map and secrets defined earlier.

Note that I’m setting it to pull the image “Always”, because I’m going through iterations. Once done, you can set this to IfNotPresent. Otherwise, you are forced to update the version tag, and build/push with the new tag, for each iteration.

Do a “kubectl apply -f django.yaml” and make sure the pod is running. You can set up the superuser account by exec-ing into the viewmaster pod and running the createsuperuser command. For example:

kubectl exec -it -n viewmaster viewmaster-6c956ddb66-sxq4f -- /bin/bash
cd movie_library
python manage.py createsuperuser

Enter a username, email address, and password. While in the pod, you can access the database with the database shell command:

python manage.py dbshell

From here, you can view all the tables that were created when the viewmaster app was started, by doing “\dt”:

viewmasterdb=# \dt
                 List of relations
 Schema |            Name            | Type  |    Owner
--------+----------------------------+-------+--------------
 public | auditlog_logentry          | table | viewmasterer
 public | auth_group                 | table | viewmasterer
 public | auth_group_permissions     | table | viewmasterer
 public | auth_permission            | table | viewmasterer
 public | auth_user                  | table | viewmasterer
 public | auth_user_groups           | table | viewmasterer
 public | auth_user_user_permissions | table | viewmasterer
 public | django_admin_log           | table | viewmasterer
 public | django_content_type        | table | viewmasterer
 public | django_migrations          | table | viewmasterer
 public | django_session             | table | viewmasterer
 public | viewmaster_movie           | table | viewmasterer
(12 rows)

You can verify that the superuser account is correct with the “select * from auth_user;” command. This shell can be used to import existing movie data…

 

Import Existing Data

Rather than re-enter all the movie information into this new Kubernetes-based implementation, I wanted to export/import what I already have. In the repo I provided, there is a ./DBase/importVM.sql file with the data to import for my app, but I want to detail how this was created, as it wasn’t exactly trivial.

The Docker implementation had a SQLite database in ./DBase/movies.db. The first step was to export the database as a .sql file. I did the following:

cd DBase
sqlite3
.open movies.db
.once export.sql
.dump
.quit
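
As a shortcut, the same dump can be produced non-interactively:

sqlite3 movies.db .dump > export.sql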

From the export.sql file, I want the “viewmaster_movie” table. I created the file (importVM.sql) with the INSERT lines for that table from the export.sql file, all wrapped inside of “BEGIN TRANSACTION;” and “COMMIT;” lines, so that the Postgres database would only be updated if all the lines could be processed:

BEGIN TRANSACTION;
INSERT INTO viewmaster_movie VALUES(1,'12 Monkeys',1995,'SCI-FI','02:10:00.000000','LD','LB','D-SURR','',25,1,0,'R');
...
INSERT INTO viewmaster_movie VALUES(656,'Shawshank Redemption',1994,'DRAMA','02:22:00','4K','1.85:1','DTS-HD','',20.39000000000000056,1,0,'R');

COMMIT;

Unfortunately, there are differences between SQLite and Postgres. If we look at the field layout in the Postgres database, we see (trimmed):

viewmasterdb=# \d viewmaster_movie
        Table "public.viewmaster_movie"
   Column   |          Type          | Nullable |
------------+------------------------+----------+
 id         | bigint                 | not null |
 title      | character varying(60)  | not null |
 release    | integer                | not null |
 category   | character varying(20)  | not null |
 rating     | character varying(5)   | not null |
 duration   | time without time zone | not null |
 format     | character varying(3)   | not null |
 aspect     | character varying(10)  | not null |
 audio      | character varying(10)  | not null |
 collection | character varying(10)  | not null |
 cost       | numeric(6,2)           | not null |
 paid       | boolean                | not null |
 bad        | boolean                | not null |
When I look at the table definition (reformatted for readability) in the export.sql file, I see:

CREATE TABLE IF NOT EXISTS "viewmaster_movie" (
    "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
    "title" varchar(60) NOT NULL,
    "release" integer NULL,
    "category" varchar(20) NOT NULL,
    "duration" varchar(5) NULL,
    "format" varchar(3) NULL,
    "aspect" varchar(10) NULL,
    "audio" varchar(10) NULL,
    "collection" varchar(10) NULL,
    "cost" decimal NULL,
    "paid" bool NULL,
    "bad" bool NULL,
    "rating" varchar(5) NULL
);

As you can see, the rating field is in a different position. This means that it will be in the wrong place in the existing INSERT lines, as the Postgres database expects the rating to be the fifth field, not the last:

INSERT INTO viewmaster_movie VALUES(1,'12 Monkeys',1995,'SCI-FI','02:10:00.000000','LD','LB','D-SURR','',25,1,0,'R');

I decided that the easiest way to deal with this is to add the column ordering to each INSERT line, so they look like this:

INSERT INTO viewmaster_movie ("id", "title", "release", "category", "duration", "format", "aspect", "audio", "collection", "cost", "paid", "bad", "rating")
VALUES(1,'12 Monkeys',1995,'SCI-FI','02:10:00.000000','LD','LB','D-SURR','',25,1,0,'R');

Essentially, we’re telling the insert command the order of the fields, rather than assuming they are in the same order as defined in the database. There can be cases where your new database names fields (or tables) differently, so this explicit specification of fields can help.

Another issue is that the SQLite export represents boolean values as the numbers zero and one, whereas Postgres treats these as integers. I ended up using my editor to wrap the values in single quotes (‘0’ and ‘1’), so that they are evaluated as boolean values. I made use of Emacs macros to do this quoting of the second and third from last values. I read later that one can change 0 to 0::boolean and 1 to 1::boolean.
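Rather than hand-editing or editor macros, a small script could apply both fixes, the column list and the boolean quoting, in one pass. Here is a minimal sketch (hypothetical, not part of the repo), which assumes the dump’s field order shown above and that only the title value can contain commas:

#!/usr/bin/env python3
# Hypothetical helper: rewrite the SQLite dump's INSERT lines for Postgres,
# adding an explicit column list and quoting the two boolean values.
import re

COLUMNS = ('"id", "title", "release", "category", "duration", "format", '
           '"aspect", "audio", "collection", "cost", "paid", "bad", "rating"')

with open('export.sql') as src, open('importVM.sql', 'w') as dst:
    dst.write('BEGIN TRANSACTION;\n')
    for line in src:
        m = re.match(r'INSERT INTO "?viewmaster_movie"? VALUES\((.*)\);',
                     line.strip())
        if not m:
            continue
        # Split from the right: the 11 fields after the title contain no
        # commas, so any commas remaining in parts[0] belong to the title.
        parts = m.group(1).rsplit(',', 11)
        parts[9] = "'%s'" % parts[9]    # paid: 0/1 -> '0'/'1'
        parts[10] = "'%s'" % parts[10]  # bad:  0/1 -> '0'/'1'
        dst.write('INSERT INTO viewmaster_movie (%s)\nVALUES(%s);\n'
                  % (COLUMNS, ','.join(parts)))
    dst.write('COMMIT;\n')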

With the importVM.sql file hopefully ready, I copied it to the viewmaster pod:

kubectl cp importVM.sql viewmaster/viewmaster-6c956ddb66-sxq4f:movie_library/importVM.sql

From the database shell that I have open on the viewmaster pod, I can import the table contents:

viewmasterdb=# \i importVM.sql

There is a good chance that this will fail, so you’ll have to scroll through the output, find any problems, and correct them. In my case, I saw:

  • One entry had a value of ‘2’ for a boolean; I had to change it to ‘1’.
  • A few entries where the “audio” field was longer than the defined 10-character max. Shortened them.
  • Some cases of aspect ratio 16:9, which were treated as a time value with extra characters for seconds/microseconds, exceeding the field width. Changed to “16×9”.
  • Another entry had an aspect ratio of “02:40:01.000000”; again, the value had been treated as a time value. Changed it to “2.40:1”.

Finally, the import was successful, and I could do a “select * from viewmaster_movie;” from the database shell to see the entries. I’ve included the final ./DBase/importVM.sql file in the repo, so that if you are following along, you can just import it.

Now, with some real data and a user account, we can get the IP of the service:

kubectl get svc -n viewmaster
NAME                  TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
viewmaster-service    LoadBalancer   10.233.1.98   10.11.12.207   8000:30761/TCP   168m
viewmaster-postgres   ClusterIP      None          <none>         5432/TCP         17h

With a browser, I can navigate to the app at http://10.11.12.207:8000/ and see all the existing movies.

UPDATE: See “Create Movie Issue” below for another problem that I found after importing and using the system.

 

Secure Remote Access

Just like I did with the Emby music server I set up under Kubernetes, I want to do the same thing for this Django app. There are already some pieces in place: Traefik ingress is running in the cluster to route external requests to the app and redirect HTTP requests to HTTPS, cert-manager is running to create and manage Let’s Encrypt certificates, and the router is directing external HTTP/HTTPS requests to the ingress controller.

Prep Work

Specific to this Django app, there are some things that need to be set up. As was done in the Emby post, I need to create another sub-domain for this app (e.g. movies.my-domain.com), and create a CNAME record that points to the Dynamic DNS service I use, so that HTTP/HTTPS requests to that subdomain will also make it to Kubernetes.

For my Django app, I had already installed the recommended security middleware. However, at a minimum, one also needs to define the “trusted origin” domains, so as not to trigger Cross Site Request Forgery (CSRF) warnings. I had to add the following line to ./movie_library/movie_library/settings.py:

CSRF_TRUSTED_ORIGINS = ['https://' + os.environ.get('PUBLIC_DOMAIN', 'missing-domain-name')]

Now, depending on how you wrote your Django app and what external resources you use, you may need to configure other CSRF and Content Security Policy (CSP) settings. The easiest (?) way to figure out what you need is to exercise your site via HTTPS with Django running in debug mode; it will show any errors and provide a link with more info on the problem and how to fix it. Here is an example from another site I had:


CSP_IMG_SRC = ("'self'",)
CSP_DEFAULT_SRC = ("'self'",)
CSP_STYLE_SRC = ("'self'", 'https://fonts.googleapis.com')
CSP_SCRIPT_SRC = ("'self'",)
CSP_FONT_SRC = ("'self'", 'https://fonts.gstatic.com')
CSP_FRAME_ANCESTORS = ("'none'",)
CSP_FORM_ACTION = ("'self'",)

These indicate the allowed sources for the various resources accessed.

Obviously, you’ll need to do this AFTER you have HTTPS remote access running, and it may take several iterations to resolve all the issues. That is why I set the image pull policy to “Always”, instead of “IfNotPresent”, in the Deployment manifest for my app. This way, I can change the app, re-build, re-push to hub.docker.com, and then delete my viewmaster pod, and it will pull the new image and use it.

Otherwise, you need to update the minor version in the ./pyproject.toml, build/push the app with a new tag, and change the deployment to reference the newer tag.
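With the “Always” policy, an iteration looks something like this (the label selector matches the app pod labels from the deployment manifest):

docker buildx build . -t YOUR_DOCKER_ID/viewmaster-app:v0.1.1
docker push YOUR_DOCKER_ID/viewmaster-app:v0.1.1
kubectl delete pod -n viewmaster -l app=viewmaster,tier=app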

Ready, Set, Go…

Now, I need to perform the steps to create a certificate and to hook up ingress to my app. The explanation is brief, but you can see a more detailed description in the Emby post.

I’ll again use a Let’s Encrypt staging certificate, and once things are working, will use the production certificate. There is a rate limit on production certificates, so if you mess things up and try too many times, you’ll get locked out for a week!

Here is the staging issuer that I created and applied (./deploy/viewmaster-issuer.yaml):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: viewmaster-issuer
  namespace: viewmaster
spec:
  acme:
    email: your-email-address
    # We use the staging server here for testing, to avoid hitting rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store the key
      name: viewmaster-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik

This is in the same namespace as the app, requires an email address, and uses the Let’s Encrypt staging server. With that applied, we can create the ingress for the app (./deploy/viewmaster-ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: viewmaster
  namespace: viewmaster
  annotations:
    cert-manager.io/issuer: "viewmaster-issuer"
    traefik.ingress.kubernetes.io/router.middlewares: secureapps-redirect2https@kubernetescrd
spec:
  tls:
    - hosts:
        - movies.my-domain.com
      secretName: tls-viewmaster-ingress-http
  rules:
    - host: movies.my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: viewmaster-service
                port:
                  name: http

This references the issuer, uses the middleware to force HTTP to HTTPS redirect, has the subdomain name that I’ll use, and gives a name for the secret used to hold the staging certificate. It points to the viewmaster service, at the root path. Once applied, you can look for the tls-viewmaster-ingress-http certificate in the viewmaster namespace to become ready. Look through the info on the Emby page for details on the certificate creation process. It’ll take a minute or so to complete.
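You can watch for the certificate to become ready with:

kubectl get certificate -n viewmaster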

Now you can go to https://movies.my-domain.com/ and see the site. If you use HTTP, it should redirect. Your browser will warn that it is insecure, but you can continue and look at the certificate info to see that it is a Let’s Encrypt staging certificate.

With it working, you can delete the ingress, secret, and issuer (if desired), and then apply the production issuer (./deploy/viewmaster-prod-issuer.yaml):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: viewmaster-prod-issuer
  namespace: viewmaster
spec:
  acme:
    email: your-email-address
    # This uses the production server
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # if not existing, it will register a new account and store the key
      name: viewmaster-issuer-account-key
    solvers:
      - http01:
          # The ingressClass used to create the necessary ingress routes
          ingress:
            class: traefik

I used a different name, so that both issuers can be present at the same time. You provide an email address, and it is using the production Let’s Encrypt URL.

The production ingress (./deploy/viewmaster-prod-ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: viewmaster
  namespace: viewmaster
  annotations:
    cert-manager.io/issuer: "viewmaster-prod-issuer"
    traefik.ingress.kubernetes.io/router.middlewares: secureapps-redirect2https@kubernetescrd
spec:
  tls:
    - hosts:
        - movies.my-domain.com
      secretName: viewmaster-prod-cert
  rules:
    - host: movies.my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: viewmaster-service
                port:
                  name: http

This is the same, only using the viewmaster-prod-issuer and the viewmaster-prod-cert certificate secret. Once applied and the certificate is created, you can access the site with HTTPS, without any insecure warning. cert-manager will renew the certificate automatically, as needed.

With all this done, you can access the site via https://movies.my-domain.com, and if you use HTTP, it will automatically redirect to HTTPS. If you want to access it from within the local network, you can use HTTP with the IP of the viewmaster service and port 8000. I didn’t explore how to access it securely from inside the local network.

 

Create Movie Issue

While playing with this ported app, I tried to add a movie. When I did so (under debug mode), I got an error saying:

duplicate key value violates unique constraint "viewmaster_movie_pkey"
DETAIL:  Key (id)=(1) already exists.

It looks like the database insert is not using the next ID. I did a “kubectl exec” into the viewmaster app pod, moved down to the movie_library/ directory, and did “python manage.py dbshell” to look at the database. First, I checked that there was a primary key for the viewmaster_movie table:

# \d viewmaster_movie;
             Table "public.viewmaster_movie"
   Column   |          Type          | Collation | Nullable |...
------------+------------------------+-----------+----------+...
 id         | bigint                 |           | not null |...
 title      | character varying(60)  |           | not null |
 release    | integer                |           | not null |
 category   | character varying(20)  |           | not null |
 rating     | character varying(5)   |           | not null |
 duration   | time without time zone |           | not null |
 format     | character varying(3)   |           | not null |
 aspect     | character varying(10)  |           | not null |
 audio      | character varying(10)  |           | not null |
 collection | character varying(10)  |           | not null |
 cost       | numeric(6,2)           |           | not null |
 paid       | boolean                |           | not null |
 bad        | boolean                |           | not null |
Indexes:
    "viewmaster_movie_pkey" PRIMARY KEY, btree (id)
That looked good, so I tried to figure out how Postgres picks the next ID to use. I saw that there are “sequences” defined, so I did:

# SELECT relname sequence_name FROM pg_class WHERE relkind = 'S';
           sequence_name
-----------------------------------
 django_migrations_id_seq
 ...
 viewmaster_movie_id_seq

Looking at the sequence for the viewmaster_movie table, I saw that the last_value was “1”, instead of the next value to use:

# select * from viewmaster_movie_id_seq;
 last_value | log_cnt | is_called
------------+---------+-----------
          1 |      32 | t

I determined the maximum value in use, and changed the last value to that:

# select max(id) from viewmaster_movie;
max
-----
656

# select setval('viewmaster_movie_id_seq', 656);
setval
--------
656
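
In hindsight, the two queries can be combined into a single statement that advances the sequence to the current maximum id:

# SELECT setval('viewmaster_movie_id_seq', (SELECT max(id) FROM viewmaster_movie));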

Now, when I create a movie, it works! Whew! I found out later that, with Postgres, you can set the id field type to “SERIAL” instead of “BIGINT”, and that should create the correct sequencing. I haven’t tried it here, but it worked on a database for another Django app I was porting.

 

TODOs…

Future items to consider:

  • Add a non-admin login, and modify the app so that everyone has to log in to see the pages (to limit viewing)?
  • Decide if I want a single cert for all subdomains running under Kubernetes, instead of one per app.
  • App enhancements:
    • See if I can access public information for artwork, and maybe descriptions, for movies. Can we get technical specs too (run time, sound, aspect ratio)?
    • Persist checkbox settings for “Show details” and “Show LDs”.
    • Allow search initiation by pressing Enter, after entering a search phrase.
    • Add an index (alphabet, category, date, collection, disk format) at the top, to allow jumping down to a section.