February 25

MySQL With Replicas on Raspberry PI Kubernetes

I know that I’ll need a database for several projects that I want to run on my Raspberry PI-based Kubernetes cluster, so I did some digging for blogs and tutorials on how to set this up.

I found some general articles on how to set up MySQL, and even one that talked about setting up multiple pods so that there are replicas of the database. Cool!

However, I had difficulty finding information on doing this with ARM64-based processors. I found this link on how to run a MySQL operator and InnoDB cluster with multiple replicas on ARM64 processors, but it had two problems. First, it used a fork of the upstream MySQL operator repository that had not been updated in over a year, so the images (which were hosted in a repo in that account) were older. Second, it made use of a “mysql-router” image, from a repo in the same account, that didn’t exist!

So, I spent several days trying to figure out how to get this to work, and then how to use it with the latest images available for ARM64 processors. I could not figure out how to build images from the forked repo, as it seems the build scripts are set up for Oracle’s CI/CD system and there is no documentation on how to build manually. In any case, using information from the forked repo and after doing a lot of sleuthing, I have it working…

The MySQL Operator repo contains both the operator and the InnoDBCluster components. They are designed to work with AMD64-based processors, and there is currently no ARM64 support configured. When I asked on the MySQL operator Slack channel in February 2024, they indicated that the effort to support ARM64 had stalled, so I decided to figure out how to use this repo, customizing it to provide the needed support.

I used Helm, rather than raw manifests, to set things up. First, I set up an area to work in and prepared to access my Raspberry PI Kubernetes cluster:

cd ~/workspace/picluster
poetry shell

mkdir mysql
cd mysql

Add the mysql-operator repo:

helm repo add mysql-operator https://mysql.github.io/mysql-operator/
helm repo update
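
If you want to see which chart versions the repo provides, helm can list them (note that these are chart versions; the ARM64 image tags themselves still have to be looked up in Oracle’s container registry):

helm search repo mysql-operator --versions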

The operator chart can now be installed, but we need to tell it to use an ARM64 image of the Oracle community version of the operator. Here are the available operator versions to choose from. I’ll use the 8.3.0-2.1.2-aarch64 version:

helm install django-mysql-operator mysql-operator/mysql-operator -n mysql-operator --create-namespace --set image.tag="8.3.0-2.1.2-aarch64"

This creates a bunch of resources, most noticeably a deployment, replica set, and pod for the operator, in the mysql-operator namespace. The name ‘django-mysql-operator’ is arbitrary. Check to make sure everything is running with:

kubectl get all -n mysql-operator
NAME                                  READY   STATUS    RESTARTS   AGE
pod/mysql-operator-6cc67fd566-v64dp   1/1     Running   0          7h21m

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/mysql-operator   ClusterIP   10.233.19.231   <none>        9443/TCP   7h21m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql-operator   1/1     1            1           7h21m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-operator-6cc67fd566   1         1         1       7h21m
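
If the operator pod doesn’t reach Running (for example, if a non-ARM64 image ends up crash-looping on a Pi node), the operator’s logs are the first place I’d look:

kubectl logs -n mysql-operator deployment/mysql-operator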

Next, we can install the Helm chart for the MySQL InnoDBCluster. Again, we need to select from the available ARM64 versions for the community operator, community router (which should be able to use the same version), and MySQL server (pick a tag that supports both AMD64 and ARM64; I used 8.0). Since there are so many changes, we’ll use a values.yaml file instead of command-line --set arguments.

We can get the current values.yaml file with:

helm show values mysql-operator/mysql-innodbcluster > innodb-values.yaml

In that file, you can see the defaults that would be applied, like the number of replicas, and can make some additional customizations too. In all cases, if you use a values.yaml file, you MUST provide a root password. For our case, we choose self-signed certificates and specify ARM64 images for the container, sidecar, and a handful of init containers. Here are just the changes needed, using the versions I chose at the time of this writing:

cat innodb-values.yaml
credentials:
  root:
    password: "PASSWORD YOU WANT"
# routerInstances: 1
# serverInstances: 3
tls:
  useSelfSigned: true
podSpec:
  initContainers:
    - name: fixdatadir
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
    - name: initconf
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
    - name: initmysql
      image: mysql/mysql-server:8.0
  containers:
    - name: mysql
      image: mysql/mysql-server:8.0
    - name: sidecar
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
router:
  podSpec:
    containers:
      - name: router
        image: container-registry.oracle.com/mysql/community-router:8.3.0-aarch64

Using this file, we can create the three-instance MySQL cluster with the command:

helm install django-mysql mysql-operator/mysql-innodbcluster -f innodb-values.yaml

It’ll create a deployment, replica set, stateful set, services, and three pods, along with three PVs and PVCs, and a new InnoDBCluster resource and instance. The name provided, ‘django-mysql’, will be the prefix for the resources. They will take a while to come up, so have patience. Once the pods and stateful set are up, you’ll see a router pod created and started:

$ kubectl get all
NAME                                       READY   STATUS    RESTARTS       AGE
pod/django-mysql-0                         2/2     Running   0              6h55m
pod/django-mysql-1                         2/2     Running   0              6h55m
pod/django-mysql-2                         2/2     Running   0              6h55m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                    AGE
service/django-mysql             ClusterIP   10.233.59.48   <none>        3306/TCP,33060/TCP,6446/TCP,6448/TCP,6447/TCP,6449/TCP,6450/TCP,8443/TCP   6h55m
service/django-mysql-instances   ClusterIP   None           <none>        3306/TCP,33060/TCP,33061/TCP                                               6h55m

NAME                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/longhorn-iscsi-installation   7         7         7       7            7           <none>          51d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/django-mysql-router   1/1     1            1           6h55m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/django-mysql-router-696545f47b   1         1         1       6h55m

NAME                            READY   AGE
statefulset.apps/django-mysql   3/3     6h55m
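
If you want to follow the cluster as it initializes, you can also watch the InnoDBCluster resource itself until its status reports ONLINE (this is from the operator’s documentation, so it should behave the same here):

kubectl get innodbcluster django-mysql --watch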

When everything is running, you can access the first (index zero) MySQL pod with:

kubectl exec -it pod/django-mysql-0 -c mysql -- /bin/bash
bash-4.4$ mysqlsh -u root -p

CREATE DATABASE IF NOT EXISTS todo_db;
USE todo_db;
CREATE TABLE IF NOT EXISTS Todo (task_id int NOT NULL AUTO_INCREMENT, task VARCHAR(255) NOT NULL, status VARCHAR(255), PRIMARY KEY (task_id));
INSERT INTO Todo (task, status) VALUES ('Hello','ongoing');

Enter the password you defined in innodb-values.yaml and you can now create a database and tables, and populate table entries (if mysqlsh starts in JavaScript mode, switch to SQL mode with \sql first). If you exec into one of the other MySQL pods, the information will be there as well, but it will be read-only.
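
To double-check that all three instances joined the group and to see which one is the primary, group replication exposes its membership in performance_schema; from the same session (in SQL mode) you can run:

SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE
  FROM performance_schema.replication_group_members;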

There are other customizations available, like changing the number of replicas, the size of the PVs used, etc.

You can reverse the process, by first deleting the MySQL InnoDBCluster:

helm delete django-mysql

Wait until the pods are gone (it takes a while), and then delete the MySQL operator:

helm delete django-mysql-operator -n mysql-operator

That should get rid of everything, but if not, here are other things that you can delete. Note: my storage class, Longhorn, is set to retain the PVs, so they must be deleted manually (I can’t think of an easier way):

kubectl delete sa default -n mysql-operator
kubectl delete sa mysql-operator-sa -n mysql-operator

kubectl delete pvc datadir-django-mysql-0
kubectl delete pvc datadir-django-mysql-1
kubectl delete pvc datadir-django-mysql-2
kubectl delete pv `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-0")].metadata.name}'`
kubectl delete pv  `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-1")].metadata.name}'`
kubectl delete pv `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-2")].metadata.name}'`
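
The three PV deletions can also be written as a small loop (same commands, just less repetition):

for i in 0 1 2; do
  pv=$(kubectl get pv -o jsonpath="{.items[?(@.spec.claimRef.name==\"datadir-django-mysql-$i\")].metadata.name}")
  kubectl delete pv "$pv"
done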

I would like to figure out how to create a database and user as part of the pod creation process, rather than having to exec into the pod and use the mysql or mysqlsh apps.
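
Until I find a chart-native way to do that, one approach that should work is a one-shot Kubernetes Job that runs the mysql client against the router service once the cluster is ONLINE. This is just a sketch (the Job name, the ‘todo’ user, and the secret reference are mine, not something the chart creates):

apiVersion: batch/v1
kind: Job
metadata:
  name: django-mysql-init
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: init
          image: mysql/mysql-server:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-root-credentials   # hypothetical secret, see below
                  key: rootPassword
          command:
            - sh
            - -c
            - |
              # Create the application database and user through the router service
              mysql -h django-mysql -P 3306 -u root -p"${MYSQL_ROOT_PASSWORD}" <<'EOF'
              CREATE DATABASE IF NOT EXISTS todo_db;
              CREATE USER IF NOT EXISTS 'todo'@'%' IDENTIFIED BY 'CHANGE-ME';
              GRANT ALL PRIVILEGES ON todo_db.* TO 'todo'@'%';
              EOF

Applying that with kubectl apply -f after the cluster is up should create the database and user without needing to exec into a pod.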

I’d really like to be able to specify a secret for the root password, instead of including it in a values.yaml file.
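
In the meantime, a workaround (just shell plumbing, not a feature of the chart) is to keep the password in a Kubernetes secret and pull it out at install time, so it never has to be written into innodb-values.yaml:

kubectl create secret generic mysql-root-credentials --from-literal=rootPassword='PASSWORD YOU WANT'

helm install django-mysql mysql-operator/mysql-innodbcluster \
  -f innodb-values.yaml \
  --set credentials.root.password="$(kubectl get secret mysql-root-credentials -o jsonpath='{.data.rootPassword}' | base64 -d)"

The credentials.root section can then be dropped from innodb-values.yaml, since --set supplies the same value.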

