December 11

Updating Kubernetes nodes’ OS

With nine nodes in my cluster right now, each running Ubuntu 24.04, I want to ensure that the latest updates are present on the nodes.

I know I can remove the node from the cluster, update the OS, and then re-add the node, but I’m hoping there is an easier way.

I asked ChatGPT, and the two best methods suggested were to create a custom Ansible playbook to do the updates, or to use the Kubernetes Cluster API. The Cluster API would take a lot of effort to set up, so I'm opting for the playbook approach.

The steps suggested are:

  • cordon the node

  • drain the node

  • apply apt updates

  • reboot

  • wait for node to be ready

  • uncordon

ChatGPT provided an example playbook with these steps. For my cluster, however, which uses Longhorn storage, I want to change the node drain policy before the updates are done, so that the drain command doesn't time out waiting on volumes whose only replica is on the node being drained. After the upgrade, the policy can be restored.
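If you want to confirm the setting before or after a run, something like this should show the current value (assuming the usual Longhorn installation, where settings live as a CRD in the longhorn-system namespace):

kubectl -n longhorn-system get setting node-drain-policy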

The revised playbook (rolling_apt_upgrade.yaml) looks like this:

---
- hosts: kube_node
  serial: 1
  become: yes

  pre_tasks:
    - name: "Set Longhorn node-drain-policy BEFORE rolling updates"
      command: >
        kubectl -n longhorn-system patch setting node-drain-policy
        --type=merge -p '{"value":"block-for-eviction-if-contains-last-replica"}'
      delegate_to: "{{ groups['kube_control_plane'][0] }}"
      run_once: true

  tasks:
    - name: Cordon the node
      command: kubectl cordon {{ inventory_hostname }}
      delegate_to: "{{ groups['kube_control_plane'][0] }}"

    - name: Drain the node
      command: >
        kubectl drain {{ inventory_hostname }}
        --ignore-daemonsets
        --delete-emptydir-data
        --grace-period=30
      delegate_to: "{{ groups['kube_control_plane'][0] }}"

    - name: Apply apt upgrades
      apt:
        upgrade: dist
        update_cache: yes

    - name: Reboot the node
      reboot:

    - name: Wait for node to return to Ready
      command: kubectl get node {{ inventory_hostname }} -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
      register: node_ready
      retries: 40
      delay: 10
      until: node_ready.stdout == "True"
      delegate_to: "{{ groups['kube_control_plane'][0] }}"

    - name: Uncordon the node
      command: kubectl uncordon {{ inventory_hostname }}
      delegate_to: "{{ groups['kube_control_plane'][0] }}"

  post_tasks:
    - name: "Restore Longhorn node-drain-policy AFTER rolling updates"
      command: >
        kubectl -n longhorn-system patch setting node-drain-policy
        --type=merge -p '{"value":"block-if-contains-last-replica"}'
      delegate_to: "{{ groups['kube_control_plane'][0] }}"
      run_once: true

From my ~/workspace/picluster area, with the playbook in the sub-dir playbooks, I invoked it with:

ansible-playbook -i inventory/mycluster/hosts.yaml playbooks/rolling_apt_upgrade.yaml

I had issues on one node, where it was not becoming Ready. What I saw was that the node did not know the IP of the API endpoint (lb-apiserver.kubernetes.local), and to resolve this I had to add an entry to /etc/hosts mapping the IP to that name. I suspect the problem was that, on reboot, kubelet is not yet up, so the node cannot get the DNS info for the API. I don't have a separate DNS server.
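As an illustration, the /etc/hosts entry has this form (the address here is a placeholder; use your own API server/load balancer address):

10.0.0.100   lb-apiserver.kubernetes.local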

I added an Ansible playbook to do this in playbooks/update_host_tmpl.yaml, and it can be run with --limit to specify the node, if desired; a sketch of the approach is below. I'm adding this to the node prep steps in Part IV of my series on Raspberry PI clusters.
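As a rough sketch of the idea (this is not the actual contents of update_host_tmpl.yaml; the apiserver_ip variable is a placeholder you would define in your inventory):

---
- hosts: kube_node
  become: yes
  tasks:
    - name: Map the API server name to its address in /etc/hosts
      lineinfile:
        path: /etc/hosts
        line: "{{ apiserver_ip }} lb-apiserver.kubernetes.local"
        state: present

It can then be run against a single node with:

ansible-playbook -i inventory/mycluster/hosts.yaml playbooks/update_host_tmpl.yaml --limit <node-name>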


Category: bare-metal, Kubernetes, Linux, Raspberry PI
February 19

Lazyjack – Provisioning bare-metal for IPv6 Kubernetes

v1.4

I’ve been experimenting with IPv6, Kubernetes, and Istio using Docker-In-Docker. One difficulty I’ve been having is accessing the cluster externally, as the whole cluster is running in docker containers on one VM.

I decided to try to get Kubernetes running on multiple bare-metal nodes. Well, this turned out to be quite challenging, as there are many configuration settings and tweaks needed to make this work.

Not wanting to endure that agony each time I set things up, or to spend hours with others who want to do the same thing, I decided to write a small Go app to automate this setup. Lazyjack is the culmination of that effort.

You can find details on how to set up and use Lazyjack in the GitHub repo, but I'll run through the steps here, using a two-system setup I have in a lab.

 

Step 1: Get Everything Needed

Hardware: I already had two Ubuntu 16.04 systems, each with a pair of interfaces, one for SSH access to the box for provisioning, and one connected to an L2 switch, which would be used for the “management” network for Kubernetes. This second interface was new, and didn’t have any configuration on it.

Both boxes have access to the Internet (IPv4, using NAT in the lab), so that I can access repos and pull down stuff.

Update: If you want to be able to access remote IPv6 sites, without doing NAT64 (and using their IPv4 address), enable IPv6 and forwarding on each node, with an IPv6 address on the main interface. If using SLAAC, ensure accept_ra=2 is set for the main interface, using sysctl.
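For example, on a node whose main interface is eth0 (adjust the interface name to match yours):

sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.eth0.accept_ra=2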

Software: Being development systems, docker 17.03.2-ce and Go 1.9.2 were installed. I think these systems already had openssl installed. Likewise, the Kubernetes packages were installed (sudo apt-get install kubeadm kubelet kubectl) on these systems.

Update: You should install CNI v0.7.1+ on the systems, otherwise, there may be issues with IPv6 support (e.g. ip6tables configuration).
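One way to install the plugins (the asset name below is for the v0.7.1 release; it may differ for other versions):

sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
sudo tar -xzf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin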

Lazyjack: The easiest way is to download the latest release, untar, and place the executable in your system path on each system.  For example, for the first release:

mkdir ~/bare-metal
cd ~/bare-metal
wget https://github.com/pmichali/lazyjack/releases/download/v1.0.0/lazyjack_1.0.0_linux_amd64.tar.gz
tar -xzf lazyjack_1.0.0_linux_amd64.tar.gz
sudo cp lazyjack /usr/local/bin

 

Note: The tar file name may be different, based on the version of lazyjack you use.

Alternately, you can get the repo:

go get github.com/pmichali/lazyjack

build it:

cd ~/go/src/github.com/pmichali/lazyjack
go build cmd/lazyjack.go

 

And then move the executable to your system path on each system. The sample-config.yaml can be used as a template for the configuration.

 

Step 2: Create a Configuration File

Being lazy, on the system I was going to use as the master node, I just took the sample-config.yaml and renamed it config.yaml. That file has the following network definitions already set up:

  • Management network – fd00:20::/64

  • Support network – fd00:10::/64

  • Pod network – fd00:40:0:0:X/80

  • Service network – fd00:30::/110

  • DNS64 network – fd00:64:ff9b::/96

The only thing I needed to do was identify the hostnames I was using, and the interface name for the interface that would be used for the management network. The definitions I used were:

topology:
    bxb-c2-77:
        interface: "enp10s0"
        opmodes: "master dns64 nat64"
        id: 2
    bxb-c2-79:
        interface: "enp10s0"
        opmodes: "minion"
        id: 3
support_net:

 

As you can see, bxb-c2-77 will be the master node, and it will have dns64 and nat64 containers running on it, to support IPv6 on the cluster. The sole minion is bxb-c2-79, but you can certainly list more nodes here. Likewise, you can use a separate node for the dns64 and nat64 services.

Each node has a unique (and arbitrary) ID from 2-65535 (but why use huge numbers?).

Update: You can configure DNS64 to allow use of IPv6 addresses, so that we can directly access external sites that support IPv6:

dns64:
    allow_ipv6_use: true

 

With that, we are ready to get things rolling…

 

Step 3: Initialize For Kubernetes

On the master (bxb-c2-77 in my case), run lazyjack (I’m assuming it is in your path) with the init command (from the area where the config.yaml file is, so that you don’t have to specify the location):

sudo lazyjack init

 

Yes, you need to run all lazyjack commands as root, because privileged access is needed to various resources. If you don’t run as root, you’ll see a permission denied error.

If you are curious as to what it does, you can add the "-v 4" option before the "init" argument.
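For example:

sudo lazyjack -v 4 init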

This command will create the certificates and keys needed for Kubernetes, and will place information into the configuration file (config.yaml), preserving the previous version as a .bak file (multiple runs of this command will overwrite that, BTW). Also, the file will, obviously, be owned by root, but its permissions are changed to 0777, so that you can edit the file later, if needed.

You must copy the configuration file to all other nodes, now that it has the updated information.
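For example, to push it to the minion in my setup (adjust the user and destination directory to wherever you keep the config on each node):

scp config.yaml c2@bxb-c2-79:~/bare-metal/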

 

Step 4: Prepare the Systems

Running lazyjack with the "prepare" command will get a system ready for running Kubernetes. Run this command on each node.

Note: this command will generate a kubeadm.conf file in the work area (default /tmp/lazyjack) of the master node. If desired, you can customize this file to specify different settings for the cluster. For example, you can change the kubernetesVersion line, to pick a different version than the 1.9.0 that was generated.
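On each node, from the directory containing config.yaml:

sudo lazyjack prepare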

 

Step 5: Cluster Bring-up – Master First

On the master, run lazyjack with the "up" command. This will take a few minutes, as it starts up KubeAdm. Once completed, you can set up kubectl by doing:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

On subsequent runs, I usually do a “rm -rf ~/.kube”, prior to these commands.

Now, you can run "kubectl get nodes -o wide" to see that this node is up, and "kubectl get pods --all-namespaces -o wide" to see when Kubernetes is fully up. You'll see something like this:

NAMESPACE   NAME                              READY  STATUS   RESTARTS AGE IP                NODE
kube-system etcd-bxb-c2-77                    1/1    Running  0        2m  fd00:20::2        bxb-c2-77
kube-system kube-apiserver-bxb-c2-77          1/1    Running  0        2m  fd00:20::2        bxb-c2-77
kube-system kube-controller-manager-bxb-c2-77 1/1    Running  0        2m  fd00:20::2        bxb-c2-77
kube-system kube-dns-dcf744547-k56t2          3/3    Running  0        3m  fd00:40::2:0:0:29 bxb-c2-77
kube-system kube-proxy-m9z9m                  1/1    Running  0        3m  fd00:20::2        bxb-c2-77
kube-system kube-scheduler-bxb-c2-77          1/1    Running  0        2m  fd00:20::2        bxb-c2-77

 

You can untaint the master, if you want to be able to create pods on that node.
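For example, using the taint key that kubeadm applied at the time:

kubectl taint nodes bxb-c2-77 node-role.kubernetes.io/master-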

 

Step 6: Cluster Bring-up – Minions

After you are sure that the master is completely up (all pods and services running), go onto each of the minion nodes and run the same "up" command. The command should complete quickly, and you can check the status of the node using the "kubectl get nodes" command on the master. It does take a bit for the minions to become Ready. Likewise, you can use the "kubectl get pods" output to see that a proxy is running for each minion.
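In other words, on each minion:

sudo lazyjack up

and then, back on the master:

kubectl get nodes -o wide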

Note: The reason we don't do all of the steps on one node is that lazyjack will set up static routes to the other nodes, and the interfaces must be set up on those systems first.

 

Step 7: Enjoy!

That's it. You can now play with Kubernetes, creating pods that will have IPv6 addresses and should be able to ping6 other pods on other nodes, as well as have external access to the Internet.
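For a quick sanity check, something like this should work (the image, pod name, and target address are placeholders):

kubectl run test-pod --image=busybox --restart=Never --command -- sleep 3600
kubectl exec test-pod -- ping6 -c 3 <IPv6-address-of-a-pod-on-another-node>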

 

Step 8: Cleanup

You can run the "down" and then "clean" commands on each minion, and then on the master, to clean things up.
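For example, on each node (minions first, master last), from the directory with config.yaml:

sudo lazyjack down
sudo lazyjack clean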

 

Troubleshooting

Problems Bringing Up a Minion

If the "up" command on a minion fails, you can retry it with "-v 4" to see verbose output. Then, you can manually perform some of the steps that are shown. In one case, kubeadm join was failing, and when running it manually, I saw:

c2@bxb-c2-78:~/bare-metal$ sudo kubeadm join --token ...
[preflight] Running pre-flight checks.
 [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
 [ERROR Port-10250]: Port 10250 is in use

 

This occurs when the kubelet service is already running and using that port. You can stop the service and then do the "lazyjack up" command, or just run the "down" and then "up" commands, which should reload the daemon and restart the service.
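For example, assuming kubelet is running as a systemd service (as it is with the apt packages):

sudo systemctl stop kubelet
sudo lazyjack up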


Category: bare-metal, Go, Istio, Kubernetes, Linux