February 25

MySQL With Replicas on Raspberry PI Kubernetes

I know that I’ll need a database for several projects that I want to run on my Raspberry PI based Kubernetes cluster, so I did some digging for blogs and tutorials on how to set this up.

I found some general articles on how to set up MySQL, and even one that talked about setting up multiple pods so that there are replicas for the database. Cool!

However, I had difficulty finding information on doing this with ARM64 based processors. I found this link on how to run a MySQL operator and InnoDB cluster with multiple replicas on ARM64 processors, but it had two problems. First, it used a fork of the upstream repository for the MySQL operator that had not been updated in over a year, so the images (which were in a repo in that account) were older. Second, it made use of a “mysql-router” image, from a repo in the same account, that didn’t exist!

So, I spent several days trying to figure out how to get this to work, and then how to use it with the latest images that are available for ARM64 processors. I could not figure out how to build images from the forked repo, as it seems the build scripts are set up for Oracle’s CI/CD system and there is no documentation on how to build manually. In any case, using information from the forked repo and after doing a lot of sleuthing, I have it working…

The MySQL Operator repo contains both the operator and the InnoDBCluster components. They are designed to work with AMD64 based processors, and there is currently no ARM64 support configured. When I asked on the MySQL operator Slack channel in February 2024, they indicated that the effort to support ARM64 had stalled, so I decided to figure out how to use this repo, customizing it to provide the needed support.

I used Helm, rather than manifests, to set things up. First, I set up an area to work in and prepared to access my Raspberry PI Kubernetes cluster:

cd ~/workspace/picluster
poetry shell

mkdir mysql
cd mysql

Add the mysql-operator repo:

helm repo add mysql-operator https://mysql.github.io/mysql-operator/
helm repo update

The operator chart can now be installed, but we need to tell it to use an ARM64 image of the Oracle community version of the operator. Here are the available operator versions to choose from. I’ll use the 8.3.0-2.1.2-aarch64 version:

helm install django-mysql-operator mysql-operator/mysql-operator -n mysql-operator --create-namespace --set image.tag="8.3.0-2.1.2-aarch64"

This creates a bunch of resources, most noticeably a deployment, replica set, and pod for the operator, in the mysql-operator namespace. The name ‘django-mysql-operator’ is arbitrary. Check to make sure everything is running with:

kubectl get all -n mysql-operator
NAME                                  READY   STATUS    RESTARTS   AGE
pod/mysql-operator-6cc67fd566-v64dp   1/1     Running   0          7h21m

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/mysql-operator   ClusterIP   10.233.19.231   <none>        9443/TCP   7h21m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql-operator   1/1     1            1           7h21m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-operator-6cc67fd566   1         1         1       7h21m
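
The operator chart also registers the CRDs it needs (the InnoDBCluster type among them); a quick sanity check that they are present:

kubectl get crds | grep -i mysql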

Next, we can install the Helm chart for the MySQL InnoDBCluster. Again, we need to select from the available ARM64 versions for the community operator, the community router (we should be able to use the same version), and the MySQL server (pick a tag that supports both AMD64 and ARM64 – I used 8.0). Since there are so many changes, we’ll use a values.yaml file instead of command line --set arguments.

We can get the current values.yaml file with:

helm show values mysql-operator/mysql-innodbcluster > innodb-values.yaml

In that file, you can see the defaults that would be applied, like the number of replicas, and can make some additional customizations too. In all cases, if you use a values.yaml file, you MUST provide a root password. For our case, we choose to use self-signed certificates, and specify ARM images for the container, the sidecar, and a bunch of init containers. Here are just the changes needed, using the versions I chose at the time of this writing:

cat innodb-values.yaml
credentials:
  root:
    password: "PASSWORD YOU WANT"
# routerInstances: 1
# serverInstances: 3
tls:
  useSelfSigned: true
podSpec:
  initContainers:
    - name: fixdatadir
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
    - name: initconf
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
    - name: initmysql
      image: mysql/mysql-server:8.0
  containers:
    - name: mysql
      image: mysql/mysql-server:8.0
    - name: sidecar
      image: container-registry.oracle.com/mysql/community-operator:8.3.0-2.1.2-aarch64
router:
  podSpec:
    containers:
      - name: router
        image: container-registry.oracle.com/mysql/community-router:8.3.0-aarch64

Using this file, we can create the three MySQL pods with the command:

helm install django-mysql mysql-operator/mysql-innodbcluster -f innodb-values.yaml

It’ll create a deployment, replica set, stateful set, services, and three pods, along with three PVs and PVCs, and a new InnoDBCluster resource and instance. The name provided, ‘django-mysql’, will be the prefix for the resources. They will take a while to come up, so have patience. Once the pods and statefulset are up, you’ll see a router pod created and started:

$ kubectl get all
NAME                                       READY   STATUS    RESTARTS       AGE
pod/django-mysql-0                         2/2     Running   0              6h55m
pod/django-mysql-1                         2/2     Running   0              6h55m
pod/django-mysql-2                         2/2     Running   0              6h55m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                    AGE
service/django-mysql             ClusterIP   10.233.59.48   <none>        3306/TCP,33060/TCP,6446/TCP,6448/TCP,6447/TCP,6449/TCP,6450/TCP,8443/TCP   6h55m
service/django-mysql-instances   ClusterIP   None           <none>        3306/TCP,33060/TCP,33061/TCP                                               6h55m

NAME                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/longhorn-iscsi-installation   7         7         7       7            7           <none>          51d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/django-mysql-router   1/1     1            1           6h55m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/django-mysql-router-696545f47b   1         1         1       6h55m

NAME                            READY   AGE
statefulset.apps/django-mysql   3/3     6h55m
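
You can also watch the InnoDBCluster resource that the chart created; assuming the CRD registered by the operator, these commands show the overall cluster state and events:

kubectl get innodbcluster django-mysql
kubectl describe innodbcluster django-mysql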

When everything is running, you can access instance zero of the MySQL pods with:

kubectl exec -it pod/django-mysql-0 -c mysql -- /bin/bash
bash-4.4$ mysqlsh -u root -p

CREATE DATABASE IF NOT EXISTS todo_db;
USE todo_db;
CREATE TABLE IF NOT EXISTS Todo (task_id int NOT NULL AUTO_INCREMENT, task VARCHAR(255) NOT NULL, status VARCHAR(255), PRIMARY KEY (task_id));
INSERT INTO Todo (task, status) VALUES ('Hello','ongoing');

Enter the password you defined in innodb-values.yaml and you can now create a database and tables, and populate table entries. If you exec into one of the other MySQL pods, the information will be there as well, but will be read-only.
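
To see the replication (and the read-only behavior) for yourself, exec into one of the other instances the same way (django-mysql-1 here, assuming it isn’t the primary) and try a read and a write. The SELECT should return the row, while the INSERT should be rejected because the secondaries run read-only (the exact error text may vary):

kubectl exec -it pod/django-mysql-1 -c mysql -- /bin/bash
bash-4.4$ mysqlsh -u root -p

USE todo_db;
SELECT * FROM Todo;
INSERT INTO Todo (task, status) VALUES ('Nope','blocked');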

There are other customizations available, like changing the number of replicas, the size of the PVs used, etc.

You can reverse the process, by first deleting the MySQL InnoDBCluster:

helm delete django-mysql

Wait until the pods are gone (it takes a while), and then delete the MySQL operator:

helm delete django-mysql-operator -n mysql-operator

That should get rid of everything but, if not, here are other things that you can delete. Note: my storage class, Longhorn, is set to retain the PVs, so they must be manually deleted (I can’t think of an easier way):

kubectl delete sa default -n mysql-operator
kubectl delete sa mysql-operator-sa -n mysql-operator

kubectl delete pvc datadir-django-mysql-0
kubectl delete pvc datadir-django-mysql-1
kubectl delete pvc datadir-django-mysql-2
kubectl delete pv `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-0")].metadata.name}'`
kubectl delete pv  `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-1")].metadata.name}'`
kubectl delete pv  `kubectl get pv -A -o jsonpath='{.items[?(@.spec.claimRef.name=="datadir-django-mysql-2")].metadata.name}'`
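
If you prefer, the same PVC/PV cleanup can be written as a small loop (functionally identical to the commands above):

for i in 0 1 2; do
  pvc="datadir-django-mysql-${i}"
  pv=$(kubectl get pv -o jsonpath="{.items[?(@.spec.claimRef.name==\"${pvc}\")].metadata.name}")
  kubectl delete pvc "${pvc}"
  [ -n "${pv}" ] && kubectl delete pv "${pv}"
done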

I would like to figure out how to create a database and user, as part of the pod creation process, rather than having to exec into the pod and use mysql or mysqlsh apps.

I’d really like to be able to specify a secret for the root password, instead of including it in a values.yaml file.

Category: bare-metal, Kubernetes, Raspberry PI
February 12

S3 Storage In Kubernetes

In Part VII: Cluster Backup, I set up Minio running on my laptop to provide S3 storage that Velero can use to back up the cluster. In this piece, Minio will be set up “in cluster”, using Longhorn. There are a few links discussing how to do this. I didn’t try this method, but did give this one a go (with a bunch of modifications), and am documenting it here.

For starters, I’m using the Helm chart for Minio from Bitnami:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

We’ll grab the configuration settings so that they can be modified:

mkdir -p ~/workspace/picluster/minio-k8s
cd ~/workspace/picluster/minio-k8s
helm show values bitnami/minio > minio.yaml

Create a secret to be used to access Minio:

kubectl create secret generic minio-root-user --namespace minio --from-literal=root-password="DESIRED-PASSWORD" --from-literal=root-user="minime"

In minio.yaml, set auth.existingSecret to “minio-root-user” so that the secret will be used for authentication, set defaultBuckets to “kubernetes”, and set service.type to “NodePort”.
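
The relevant portion of minio.yaml then looks something like this (the key names are as I understand them from the Bitnami chart, so verify against the helm show values output):

auth:
  existingSecret: "minio-root-user"
defaultBuckets: "kubernetes"
service:
  type: NodePort

With those values in place, the Minio deployment can be created: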

helm install minio bitnami/minio --namespace minio --values minio.yaml

The Minio console can be accessed by using a browser, a node’s IP and the NodePort port:

kubectl get svc -n minio
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
minio   NodePort   10.233.60.69   <none>        9000:32602/TCP,9001:31241/TCP   78m

In this case, using one of the nodes (10.11.12.190), the URL is http://10.11.12.190:31241. Use the username and password you defined above, when creating the secret.

Now, we can install Velero, using the default bucket we created (one could create another bucket from the Minio UI), a credentials file, and the cluster IP for the Minio service:

cat minio-credentials
[default]
aws_access_key_id = minime
aws_secret_access_key = DESIRED-PASSWORD

velero install \
     --provider aws \
     --plugins velero/velero-plugin-for-aws:v1.8.2 \
     --bucket kubernetes \
     --secret-file minio-credentials \
     --use-volume-snapshots=false \
     --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://10.233.60.69:9000
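
Before checking the backup location, it’s worth confirming that the Velero deployment itself came up (the installer uses the velero namespace by default):

kubectl get pods -n velero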

The backup location can be checked (and should be available):

velero backup-location get
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        kubernetes      Available   2024-02-12 20:43:23 -0500 EST   ReadWrite     true

Finally, you can test the backup and restore of a single deployment (using the example from Part VII, where we pulled the velero repo, which has an example NGINX app):

kubectl create namespace nginx-example
kubectl create deployment nginx --image=nginx -n nginx-example

velero backup create nginx-backup --selector app=nginx
velero backup describe nginx-backup
velero backup logs nginx-backup

kubectl delete namespace nginx-example

velero restore create --from-backup nginx-backup
velero restore describe nginx-backup-20240212194128

kubectl delete namespace nginx-example
velero backup delete nginx-backup
velero restore delete nginx-backup

There is a Minio client, although it seems to be designed for use with a cloud based back-end or a local installation. It has predefined aliases for Minio, and is designed to run and terminate on each command. Unfortunately, we need to set a new alias so that it can be used with later commands. We can hack a way to use it.

First, we need to know the Cluster IP address of the Minio service, so that it can be used later:

kubectl get svc -n minio
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
minio   NodePort   10.233.60.69   <none>        9000:32602/TCP,9001:31241/TCP   78m

We get the user/password, and then run the client so that an alias (using the cluster IP 10.233.60.69, in this case) can be created and commands invoked:

export ROOT_USER=$(kubectl get secret --namespace minio minio-root-user -o jsonpath="{.data.root-user}" | base64 -d)
export ROOT_PASSWORD=$(kubectl get secret --namespace minio minio-root-user -o jsonpath="{.data.root-password}" | base64 -d)

kubectl run --namespace minio minio-client \
     --tty -i --rm --restart='Never' \
     --env MINIO_SERVER_ROOT_USER=$ROOT_USER \
     --env MINIO_SERVER_ROOT_PASSWORD=$ROOT_PASSWORD \
     --env MINIO_SERVER_HOST=minio \
     --image docker.io/bitnami/minio-client:2024.2.9-debian-11-r0 -- \
    /bin/bash
mc alias set myminio http://10.233.60.69:9000 $MINIO_SERVER_ROOT_USER $MINIO_SERVER_ROOT_PASSWORD 
mc admin info myminio
...
Category: bare-metal, Kubernetes, Raspberry PI
February 3

More Power! Adding nodes to cluster

I’ll document the process I used to add two more Raspberry Pi 4s to the cluster that I’ve created in this series.

Preparing The PIs

With two new Raspberry PI 4s, PoE+ hats, SSD drives (2 TB this time), and two more UCTRONICS RM1U-3 trays each with an OLED display, power button, SATA Shield card, and USB3 jumper, I set out to assemble the trays and image them with Ubuntu.

Assembling the trays with the Raspberry PIs went well. In turn, I connected a keyboard, HDMI display, Ethernet cable, and power adapter (as I don’t have a PoE hub in my study). Once booted, I followed the steps in Part II of the series; however, there were some issues getting the OS installed.

First, the Raspberry PI Imager program has been updated to support PI 5s, so there were multiple menus, tabbed fields, etc. I decided to connect a mouse to the Raspberry PI, rather than navigate a maze of tabs, enters, and arrows.

Second, when I went to select the storage device, the SSD drive was not showing up. I didn’t know if this was an issue with the UCTRONICS SATA Shield, the different brand of drive, the larger capacity, the newer installer, or the Raspberry PI itself. I did a bunch of different things to try to find the root cause, and finally found that to make this work, I needed to image the SSD drive with the Raspberry PI Imager on my Mac, using a SATA to USB adapter, and then place it into the UCTRONICS tray along with the Raspberry PI, which would then boot from the SSD drive.

Third, for one of the two Raspberry PIs, this still did not work. I ended up installing Raspberry PI OS on an SD card, updating the EEPROM and bootloader, and then net booting the Raspberry PI Installer, after which I was able to get the Raspberry PI to boot from the SSD drive. It’s probably a good idea to update the EEPROM and bootloader to the latest anyway.
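
For reference, updating the EEPROM and bootloader from Raspberry PI OS is just the stock tool, followed by a reboot:

sudo rpi-eeprom-update -a
sudo reboot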

Initial Setup

As in Part II of the series, I picked IP addresses for the two units, added their MAC addresses to my router so that those IPs were reserved, added the host names to my local DNS server, and created SSH keys for each, using “ssh-copy-id” to copy those keys to all the other nodes and my Mac, and vice versa. Connectivity was all set.
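
The key setup was nothing fancy, roughly the following from the Mac and from each new PI (the user name and Mac hostname are placeholders for whatever was set up during imaging):

ssh-keygen -t ed25519                 # on each new PI, if a key does not already exist
ssh-copy-id <user>@morpheus           # from the Mac (and the other nodes) to the new PIs
ssh-copy-id <user>@switch
ssh-copy-id <user>@<mac-hostname>     # and from each new PI back to the Mac and other nodes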

I decided NOT to do the repartitioning mentioned in Part III, and instead left the drive as one large 2TB (1.8TB actually) drive. My hope is that with Kubernetes I can monitor problems, so if I see log files getting out of hand I can deal with it, rather than having fixed partitions for /tmp, /var, /home, etc. I did create a /var/lib/longhorn directory – not sure if Longhorn would create this automatically.
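
Creating the Longhorn data directory is a one-liner on each node, in case Longhorn does not create it on its own:

sudo mkdir -p /var/lib/longhorn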

Node Prep

With SSH access to each of the PIs, I could run through the same Ansible scripts that were used to set up all the other nodes, as outlined in Part IV. Before running the scripts, I added the two nodes (morpheus, switch) to the hosts.yaml file in the inventory as worker nodes, along the lines shown below. There are currently three master nodes and four worker nodes.
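
The additions to inventory/mycluster/hosts.yaml look roughly like this (IP addresses are placeholders, and the existing control plane and etcd entries are left unchanged):

all:
  hosts:
    morpheus:
      ansible_host: 10.11.12.198
      ip: 10.11.12.198
      access_ip: 10.11.12.198
    switch:
      ansible_host: 10.11.12.199
      ip: 10.11.12.199
      access_ip: 10.11.12.199
  children:
    kube_node:
      hosts:
        morpheus:
        switch: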

When running these ansible scripts, I specified both hosts at once, rather than doing one at a time. For example:

cd ~/workspace/picluster
ansible-playbook -i "morpheus,switch" playbooks/passwordless_sudo.yaml -v --private-key=~/.ssh/id_ed25519 --ask-become-pass
ansible-playbook -i "morpheus,switch" playbooks/ssh.yaml -v --private-key=~/.ssh/id_ed25519
...

Now that the nodes are ready, they can be added to the cluster. For a control plane node, the cluster.yml playbook is used:

cd ~/workspace/picluster/kubespray
ansible-playbook -i ../inventory/mycluster/hosts.yaml -u ${USER} -b -v --private-key=~/.ssh/id_ed25519 cluster.yml

Then, on each node, restart the NGINX proxy pod with:

crictl ps | grep nginx-proxy | awk '{print $1}' | xargs crictl stop

In our case, these will be worker nodes, and would be added with these commands (using --limit so other nodes are not affected):

ansible-playbook -i ../inventory/mycluster/hosts.yaml -u ${USER} -b -v --private-key=~/.ssh/id_ed25519 --limit=morpheus scale.yml
ansible-playbook -i ../inventory/mycluster/hosts.yaml -u ${USER} -b -v --private-key=~/.ssh/id_ed25519 --limit=switch scale.yml

These two nodes were added just fine, with Kubernetes version v1.28.5, just like the control plane node I added before (my older nodes are still at v1.28.2, but I’m not sure how to update them currently).

Category: bare-metal, Kubernetes, Raspberry PI