May 13

Cluster Upgrade – Challenge

With my cluster running a Kubespray version around 2.23.3 and Kubernetes 1.28.2, I wanted to try upgrading, as newer versions were available. There were all sorts of problems along the way, so I’ll try to cover what I did and what (finally) worked.

For reference, my cluster also has Longhorn storage, Prometheus/Grafana/Loki, MetalLB, nginx-ingress, and Velero installed.

But before doing anything, I decided to move things around a bit in my directory structure, so that I didn’t have git repos nested inside my ~/workspace/picluster git repo. I created a ~/workspace/kubernetes directory and placed several repos as peers in that area:

kubernetes
├── grafana-dashboards-kubernetes
├── ingress
├── kubespray
├── mysql
├── nginx-ingress
├── picluster
└── velero

The rest of the components remained in the picluster area:

kubernetes/picluster
├── inventory
├── longhorn
├── metallb
├── minio
├── minio-k8s
├── monitoring
└── playbooks

With this setup, I proceeded to identify which Kubespray version to upgrade to, and whether this would be a multi-version upgrade. I found that the latest release tag was 2.24.0, but there had been many more commits since then, so I created a tag at my current commit (0f243d751), then checked out the desired commit (fdf5988ea) and tagged that as well.
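
In git terms that was something like this (the tag names are just my own labels):

cd ~/workspace/kubernetes/kubespray
git fetch origin
git tag pre-upgrade 0f243d751    # mark the commit I was running
git checkout fdf5988ea           # move to the newer commit
git tag upgrade-target           # mark it so I can get back to it easily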

Next, I wanted to make sure that the tools I’m using match what Kubespray expects for that commit. There is a requirements.txt file that calls out all the versions. I used ‘poetry show’ to see what versions I had, and then ‘poetry add COMPONENT==VERSION’ to pin each one to a compatible version. For example:

poetry add ansible==9.5.1
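
To double-check a given tool against requirements.txt before pinning it, something like this works (ansible as the example):

grep -i '^ansible' requirements.txt   # version Kubespray expects
poetry show ansible                   # version currently in my environment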

I copied the sample inventory area into my ~/workspace/kubernetes/picluster/inventory area and merged in my existing hosts.yaml, so that I kept any customizations that were originally made (in k8s-cluster.yml, for example).
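
That step was just file shuffling; roughly (assuming the sample inventory layout of the Kubespray commit I was on):

cd ~/workspace/kubernetes/kubespray
cp -rfp inventory/sample ../picluster/inventory/mycluster
# then copy the existing hosts.yaml over the sample one, and re-apply any
# customizations previously made in group_vars (e.g. k8s-cluster.yml)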

With this, I was ready to go to the kubespray directory and do the upgrade using…

cd ~/workspace/kubernetes/kubespray
ansible-playbook upgrade-cluster.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -u ${USER} -v --private-key=~/.ssh/id_ed25519 -e upgrade_cluster_setup=true

Initially, I saw that the calico-node pods were stuck in a crash loop…

calico-node: error while loading shared libraries: libpcap.so.0.8: cannot open shared object file: No such file or directory

It turns out that the 2.24.0+ release of Kubespray uses Calico v3.27.2, which has issues on arm64 processors. The choice was to go back to v3.27.0, which apparently has a memory leak, or forward to v3.27.3, where the problem with the library was fixed. I decided to do the latter, but when I overrode calico_version, the upgrade failed, because Kubespray has no checksum for that version.
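
For reference, the override itself is just a variable; a minimal sketch, assuming the group_vars layout from the sample inventory (it can also be passed with -e on the command line):

# ../picluster/inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml
calico_version: v3.27.3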

I found out that in the kubespray area there is a scripts directory with a download_hash.sh script, which reads the component versions (including the updated calico_version) from the defaults under ./roles/kubespray-defaults/defaults/main/ and updates the roles/kubespray-defaults/defaults/main/checksums.yml file. Well, it wasn’t as easy as that, because I was using a MacBook, and macOS grep does not have the -P (Perl regex) option used in the script. So…

I copied the Dockerfile to HashMaker.Dockerfile, and trimmed it to this:

# syntax=docker/dockerfile:1

FROM ubuntu:22.04@sha256:149d67e29f765f4db62aa52161009e99e389544e25a8f43c8c89d4a445a7ca37

ENV LANG=C.UTF-8 \
    DEBIAN_FRONTEND=noninteractive \
    PYTHONDONTWRITEBYTECODE=1

WORKDIR /kubespray

# hadolint ignore=DL3008
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update -q \
    && apt-get install -yq --no-install-recommends \
       curl \
       python3 \
       python3-pip \
       sshpass \
       vim \
       openssh-client \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /var/log/*

RUN --mount=type=bind,source=requirements.txt,target=requirements.txt \
    --mount=type=cache,sharing=locked,id=pipcache,mode=0777,target=/root/.cache/pip \
    pip install --no-compile --no-cache-dir -r requirements.txt \
    && find /usr -type d -name '*__pycache__' -prune -exec rm -rf {} \;

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

COPY scripts ./scripts

I copied scripts/download_hash.sh to scripts/download_hash_pcm.sh and made these changes (since inside the container there is no git repo):

9c9
< checksums_file="$(git rev-parse --show-toplevel)/roles/kubespray-defaults/defaults/main/checksums.yml"
---
> checksums_file="./roles/kubespray-defaults/defaults/main/checksums.yml"
11c11
< default_file="$(git rev-parse --show-toplevel)/roles/kubespray-defaults/defaults/main/main.yml"
---
> default_file="./roles/kubespray-defaults/defaults/main/main.yml"

With these changes, I did the following to build and run the container, where I could run the scripts/download_hash_pcm.sh script to update the checksums.yml file with the needed checksums…

docker buildx build --platform linux/arm64 -f HashMaker.Dockerfile -t hashmaker:latest .

docker run --rm -it --mount type=bind,source="$(pwd)"/roles,dst=/kubespray/roles --mount type=bind,source="${HOME}"/.ssh/id_ed25519,dst=/root/.ssh/id_ed25519 hashmaker:latest bash
./scripts/download_hash_pcm.sh
exit

(Yeah, I could have invoked the script instead of running bash and then invoking the script inside the container).

With this, one would think we were ready to do the upgrade. Well, I tried, but I hit some other issues…

  • Some nodes were updated to Kubernetes 1.29.3, but some were still at 1.28.2
  • The prometheus/grafana pods were in a crash loop, complaining that there were multiple default datasources
  • Longhorn was at the older 1.5.3, and I figured it would be simple to helm upgrade to 1.6.1 – it wasn’t

Someone on Slack said that I needed to do the Kubespray upgrade with “-e upgrade_cluster_setup=true” added. I did that, but it did not help, and I still had three nodes at 1.29.3 and four at 1.28.2.

I found the problem with the versions. On the four older nodes, at some point kubeadm and/or kubelet had been installed as Ubuntu packages. As a result, there was the newer /usr/local/bin/kubelet (v1.29.3) and the package-installed /usr/bin/kubelet (v1.28.2). On the systemd side, in addition to /etc/systemd/system/kubelet.service, which used /usr/local/bin/kubelet in ExecStart, there was a kubelet.service.d directory with a 10-kubeadm.conf file that used /usr/bin/kubelet in ExecStart. The latter seemed to take precedence.

To resolve this, I removed the Ubuntu kubeadm package (which had pulled in kubelet), removed the kubelet.service.d directory, and reloaded systemd. My only guess is that at one point I had tried installing kubeadm by hand. Now upgrades show all nodes running the newer Kubernetes 1.29.3.
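
The cleanup on each affected node amounted to something like this (assuming the upstream apt packages were what got installed):

sudo apt-get remove --purge kubeadm kubelet    # the apt packages, not the Kubespray-installed binaries
sudo rm -rf /etc/systemd/system/kubelet.service.d
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubelet --version    # should now report the Kubespray-installed /usr/local/bin/kubelet version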

The Prometheus/Grafana issue got me into real trouble. I tried deleting pods, removing replicasets that were no longer in use, and then tried to helm upgrade kube-prometheus-stack. That caused even more problems, as the upgrade failed, and now I had a whole bunch of failing pods and replicasets that were not ready. The Prometheus pods were complaining about multiple attachments to the same PV (I was using Longhorn storage). I couldn’t clear the errors and couldn’t remove the PVCs. I’m not sure if the problem was that I didn’t pass all the arguments I had used when I initially installed Prometheus.
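
One thing worth checking before deleting anything is which node the volume is still attached to, since the multi-attach errors come from a stale attachment; roughly (the monitoring namespace here is an assumption):

kubectl get volumeattachments
kubectl -n monitoring get pvc
kubectl -n monitoring describe pod <prometheus-pod-name>   # the Multi-Attach error shows up in the events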

I tried updating Longhorn (pulling the 1.6.1 values.yaml, changing the reclaim policy from Delete to Retain and the service type from ClusterIP to NodePort, and then running helm upgrade with the modified values.yaml), and that was a mess too: crash loops, and replicasets not working.

I ended up deleting the cluster entirely. I was concerned that maybe there was an issue with upgrading in general, so I installed the older kubespray/kubernetes cluster, without installing any other components (Longhorn, Prometheus), and did an upgrade. Everything worked fine.

I need to retry this, maybe doing the Prometheus upgrade with the same arguments as the install used. I’m also worried about the multiple-attachment issue with the PV.

In the meantime, I wanted to try updating Longhorn…

Originally, Longhorn was at 1.5.3, and 1.6.1 is available. I had tried a helm upgrade (after upgrading the cluster) and had all sorts of problems. So I created a new cluster with the latest Kubernetes, made sure everything was up, and then helm installed 1.5.3, using my modified values.yaml with the Retain policy and NodePort:

helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.5.3 --values values-1.5.3.yaml
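
For reference, the overrides in that values file are only a couple of lines; roughly (key names per the Longhorn chart, worth double-checking against the chart’s own values.yaml):

# values-1.5.3.yaml (partial)
persistence:
  reclaimPolicy: Retain   # chart default is Delete
service:
  ui:
    type: NodePort        # chart default is ClusterIP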

I then did a helm upgrade to 1.6.1…

helm upgrade longhorn longhorn/longhorn --namespace longhorn-system --version 1.6.1

There were some pods in crash loops and items not ready. I deleted the older replicasets. It looked like the deployment had an annotation for 1.6.1, but was still calling out a 1.5.3 image. Looking at the Longhorn notes, I saw that I could use kubectl to upgrade, and even though I had used Helm install/upgrade before, I decided to try it.

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.1/deploy/longhorn.yaml

There were a bunch of warnings when running the command, but all pods came up and the deployment showed 1.6.1 for the image.

I’m not sure if there was something wrong with doing the helm upgrade, if it was because I was customizing the values.yaml files, or if it was because I was using NodePort. With the kubectl apply, the service type was set back to ClusterIP.

I’ve got more research to do here to isolate this issue.

I tested installing a 1.28.2 cluster and then upgrading to 1.29.3 (doing just the control plane nodes and the etcd node first, and then all the worker nodes). Every pod was up and running, daemonsets/replicasets/deployments were all working, and things were looking pretty good.

There were some pre-upgrade replicasets still present (with no needed/available instances), so I deleted them. I did a snapshot and backup of a Longhorn volume, and that worked as well.
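
The stale replicasets are easy to spot, since they show zero desired/current replicas; something like:

kubectl get rs -A --no-headers | awk '$3 == 0 && $4 == 0'   # candidates left over from the upgrade
kubectl -n <namespace> delete rs <old-replicaset-name>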

I do see two problems so far.

First, under Grafana, the data sources were gone. I could not modify the Loki data source (it is built-in), but I created another one. The original was giving connection refused errors; I think the IP it uses is the old one. There also was no Prometheus data source. I created one using the cluster IP, and it works as well.
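
To get the service names and cluster IPs for the new data sources, I just looked at the services (namespace assumed to be monitoring here):

kubectl -n monitoring get svc
# the Prometheus data source URL then looks like http://<prometheus-svc-cluster-ip>:9090
# (or better, the service DNS name, which survives IP changes)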

Second, I tried to do a backup of the Kubernetes cluster using Velero, and it failed. I tried viewing the log, but there was none. Checking ‘velero backup-location get’ showed that the backup location was not available. It seems like various components were still using older IPs/ports?
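
For reference, the checks were roughly along these lines:

velero backup create test-backup --wait
velero backup describe test-backup
velero backup logs test-backup        # this is where there was no log to view
velero backup-location get            # showed the location as unavailable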

PROBLEM FOUND… It appears that when an upgrade occurs and the coredns version has changed, a new deployment, replicaset, service, and pods are created with the new version AND they get a new nameserver IP (10.133.0.10). However, the existing pods (and new ones created) are still referring to the old nameserver IP (default is 10.133.0.3). There is a service for that old nameserver IP, but it is not resolving addresses. If you do nslookup and specify the new nameserver IP, it will work, but that doesn’t help everything that is already running, or new pods, which are still using the old nameserver IP.
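
A quick way to see this from inside the cluster (the busybox image here is just an example):

kubectl run -it --rm dnstest --image=busybox:1.36 --restart=Never -- sh
# inside the pod:
nslookup kubernetes.default               # uses the old nameserver from /etc/resolv.conf - fails
nslookup kubernetes.default 10.133.0.10   # against the new coredns service IP - works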

WORKAROUND: If an install (cluster.yml) is done again, using the exact same settings, the first DNS service becomes active again. One can then delete the newly created service, and the unused replicasets. I tried repeating the upgrade, but that did not resolve the issue.

There does appear to be a download of the new coredns and a restart of the systemd-resolved service. I don’t know if there is some mechanism to switch pods to use the new IP, or if somehow the new service should have replaced the original and kept the same IP.

After messing with things over a few weeks, I found out quite a few things…

Calico checksums: I see that the newer Kubespray master branch commits now have checksums for Calico v3.27.3. As a result, I don’t need to go through the contortions of creating my own branch of Kubespray and building the checksums for Calico v3.27.3 myself. I just picked a newer commit of Kubespray (not the current tagged release, as it still did not have the checksums for Calico v3.27.3).

Upgrading with CoreDNS changes: I found out that with the newer Ubuntu versions, the kernels actually have the “dummy” kernel module. I see it in the current 6.5.0-1015-raspi kernel, and I think it was in 1013 and 1014 as well. The implication is that, in the past, I was unable to enable node-local DNS in Kubespray, because this module was needed. After updating the OS on my nodes to this newer kernel, I could run Kubespray installs and upgrades with the ‘enable_nodelocaldns’ setting, and now upgrades had working DNS, even when the coredns version changed. There were some replicasets that remained and were not active, but the upgrades are working.
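
To verify the module and turn the feature on, it was roughly this (the group_vars path being the usual sample-inventory layout):

# on a node: confirm the dummy module is available for the running kernel
modinfo dummy
sudo modprobe dummy && lsmod | grep dummy

# in ../picluster/inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml:
# enable_nodelocaldns: true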

Scheduling Disabled: I was seeing several issues when doing upgrades. In one case, I found a worker node whose status was “Ready”, but with “SchedulingDisabled” indicated. I did a “kubectl uncordon NODENAME” and that re-enabled scheduling. I’m not sure why it was not completely upgraded.

Upgrading a single node: I found that with Kubespray you can use the command line argument --limit "NODE1,NODE2,NODE3" on upgrades (and other commands) to restrict the operation to the nodes specified in the limit clause. However, when I did an upgrade specifying ONLY a worker node, the process failed at this step:

TASK [kubernetes-apps/network_plugin/multus : Multus | Start resources] ********
fatal: [niobe -> {{ groups['kube_control_plane'][0] }}]: FAILED! => {"msg": "Error in jmespath.search in json_query filter plugin:\n'ansible.vars.hostvars.HostVarsVars object' has no attribute 'multus_manifest_2'"}

The problem is that I don’t have Multus enabled! It turns out that there is a bug in Kubespray, such that you need to include a control plane node in the limit clause, so that it can determine that Multus is disabled and will not attempt to start it on the worker node. I just re-ran the upgrade specifying one control plane node (already upgraded) along with the worker node I wanted to update.
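
So the limit clause ends up looking something like this (“apoc” standing in here for an already-upgraded control plane node):

ansible-playbook upgrade-cluster.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -u ${USER} -v --private-key=~/.ssh/id_ed25519 --limit "apoc:niobe"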

Node name changes: OK, this was stupid. I named my nodes after characters from the movie “The Matrix” (Apoc, Cypher, Morpheus,…). Since the original install, I’ve been playing with updating Kubespray versions, updating Kubernetes, installing things like Prometheus and Longhorn, and working through the problem I had with the CoreDNS version changing during upgrades. Recently, I realized that one of my worker nodes was actually named incorrectly. It was “niobi” and not “niobe”. I changed my inventory and renamed the hostname on the node. At one point, I decided to retest upgrades (with node-local DNS enabled). I did this by checking out tags that I had created for my repo and the Kubespray repo, performing a clean install, updating the repos to newer tags or the latest commit, updating the Poetry environment so that the correct tool versions were used with the Kubespray version I was trying, and then doing an upgrade. The upgrade was failing on node “niobe”, and it took me a while to realize that when I did the install, the node was named “niobi”, but when I did the upgrade, it was named “niobe” (with the same IP). The (simple) fix was to correct the hostname in the inventory before doing the initial install.

In the future, I think it is probably best to do the Kubernetes/Kubespray upgrade separately from the other components. In addition, I think the upgrade should be done a node (or group of nodes) at a time, starting with the control plane/etcd nodes and then the worker nodes. Kubespray’s limit option makes this possible. The docs say to run facts.yml to update the facts on all nodes, upgrade the control plane/etcd nodes, and then do the worker nodes:

ansible-playbook playbooks/facts.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -u ${USER} -v --private-key=~/.ssh/id_ed25519

ansible-playbook upgrade-cluster.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -e kube_version=v1.29.3 --limit "kube_control_plane:etcd" -u ${USER} -v --private-key=~/.ssh/id_ed25519

ansible-playbook upgrade-cluster.yml -b -i ../picluster/inventory/mycluster/hosts.yaml -e kube_version=v1.29.3 --limit "morpheus:niobi:switch" -u ${USER} -v --private-key=~/.ssh/id_ed25519

I used this on a retry of the upgrade, and the facts and control plane/etcd steps worked fine, but I hit an error in the download step for the worker nodes. Just note that, with the current Kubespray, you should probably include one control plane node when upgrading one or more worker nodes, so that the configuration is handled correctly.



Posted May 13, 2024 by pcm in category "bare-metal", "Kubernetes", "Raspberry PI"