December 25

Part II: Preparing the Raspberry PI

For the very first PI4 that I bought, I got the CanaKit bundle, so it had a plastic enclosure, a power adapter with a switch, and a 128 GB SD card. With this system I connected a mouse, keyboard, and HDMI monitor, used the Raspberry PI Imager app on my MacBook to install Ubuntu Server (22.04) on the SD card, booted up, and made sure everything worked.

Since I’m using these UCTRONICS trays, I followed these steps to partially assemble each unit. I connected the SSD drive’s SATA connector to the SATA shield card, and screwed the SSD drive to the side of the tray. Next, I installed the SD adapter into the SATA shield card. I aligned the Raspberry PI4 onto the posts and screwed it down using the threaded standoffs for the PoE+ hat. I inserted the SD adapter into the SD slot of the PI, and connected the OLED and power switch cables from the SATA shield card to the front panel. Lastly, I connected the USB jumper from the SATA shield card to the Raspberry PI4 for the SSD drive connection.

I reused the power supply from my CanaKit, and connected the display and keyboard to the Raspberry PI. I think I could connect power to either the USB-C connector on the PI or the one on the SATA shield card. An alternative would be to attach the PoE+ hat, but then I’d have to connect ethernet to the PoE+ powered hub, which was down in the rack, where I didn’t have an HDMI monitor. So I skipped that part, as I was doing this in the study. I connected an ethernet cable and powered on the unit. The RPI will display an install screen, where you can press the SHIFT key to trigger a network boot.

This will download the installer image and then restart. You’ll eventually get an installer screen like the one the PI Imager shows on the Mac/PC. It is easiest to use a mouse, but if you don’t have one, you can press the TAB key to advance between fields, and ENTER to select.

You’ll want to select the model of Raspberry PI (4), select the OS (I used Ubuntu 23.10 64 bit – in the past it was 22.04), and then select the storage device.

You should see the SSD drive listed (a Samsung 1TB drive in my case); select it. Click the NEXT button, and choose to edit the configuration. Enter the host name, enable SSH with password authentication (for now), and set a username and simple password (for now). I set the time zone as well. Here is an (older installer) screen shot of some settings.

Lastly, click on SAVE, click on YES to use the changes, and then click on the WRITE button. Again, here is an older screen shot, where there was no PI model button, and the configuration (gear icon) was on the same screen.

While the installer was running, I checked on my router for the hostname, and made a DHCP reservation for the final IP I wanted for that system. I also created a HOSTNAME.home DNS entry for this node.

At one point, it rebooted and eventually displayed that cloud-init was done. I pressed ENTER and was able to log in. I shut down the system, and then started it back up, so that it would pick up the desired IP address that I set up on my router.

To make setup easier, I created an SSH key on this system, so that I could SSH in from my MacBook and do all the rest from there, without needing the display and keyboard connected. I SSHed into the system and created a key with:

ssh-keygen -t ed25519

and then I copied the public key to all the other nodes and systems, so that I can easily get into each of them. I also added this new system to the ~/.ssh/config file that I use on other systems, so that I can SSH in using the host name. I set the login password to what I really wanted. Make sure that you can SSH into each node without using a password.
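For reference, here is a rough sketch of what I mean (the IP, user, and host name are placeholders for your own values; ssh-copy-id ships with OpenSSH):

# Copy this system's public key to another node, so it can log in without a password
ssh-copy-id ubuntu@10.11.12.198

# Entry added to ~/.ssh/config on my other systems (e.g. the MacBook),
# so that "ssh mynode" reaches the new node by name
Host mynode
    HostName 10.11.12.198
    User ubuntu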

Rather than rely on a DHCP reservation, I changed /etc/netplan/50-cloud-init.yaml to assign a static IP, set the router IP, and set DNS servers. For the first DNS server, I used my router’s IP, as I have the router set up to use my Pi-Hole ad blocker for primary DNS and a public DNS as a secondary one. This way, later, when I add Pi-Hole to Kubernetes, I can just change the IP on the router and every node will use that new DNS. Optionally, you can add a search clause. I tried “.home” thinking it would then resolve my hosts by hostname, but that isn’t working out. Here is an example 50-cloud-init.yaml:

network:
    version: 2
    renderer: networkd
    ethernets:
        eth0:
            addresses:
                - 10.11.12.198/24
            routes:
                - to: default
                  via: 10.11.12.1
            nameservers:
                addresses: [10.11.12.1, 208.67.220.220]
                search: [.home]

This was done on each Raspberry PI, followed by “sudo netplan apply” to update the node. Just be sure to do this via the console; otherwise, if done from an SSH session, you’ll lose the connection.

Of course, there are several other things that need to be set up on a Raspberry PI, like:

  • Setting the domain name for the node.
  • Setting up the OLED display and power switch, since I’m using the UCTRONICS trays.
  • Changing kernel settings.

However, since I’ll be using kubespray to provision the cluster, and kubespray uses ansible, I can use ansible playbooks to provision all the nodes in a more automated fashion. This will be detailed in Part IV, but the next step (Part III) is to do some (optional) partitioning of the SSD drive.

On some older units, I went through all sorts of contortions to install an OS on the SD card using the Raspberry PI Imager on my MacBook, boot it, update the rpi-eeprom, and set things up to net boot. There is a bootconfig.txt file that can be used to update the bootloader to enable net boot. I had this entry in there:

BOOT_ORDER=0xf241    # try SD (1), then USB (4), then network (2), retry each (f)

Though I’ve seen 0x41 as well. It was really messy. This new loader is much easier.
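These days, the bootloader configuration can be edited directly from a running Raspberry Pi OS system; a rough sketch (this assumes the rpi-eeprom tools are installed, which they are on Raspberry Pi OS):

# Opens the current bootloader config in an editor; change BOOT_ORDER,
# save and exit, then reboot to apply the pending update
sudo rpi-eeprom-config --edit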

I hit a case with some newer SSD drives (a Crucial 2TB BX500 drive), where the drive did not show up in the installer’s list of storage devices to select. To get around this I had to do the following…

I attached the SSD drive to my Mac, using a SATA III adapter, and used the Raspberry PI imager to image the disk (using the same settings as explained above).

Then, I attached the SSD drive to the Raspberry PI’s USB3 port (with the UCTRONICS tray, I installed the drive in the tray, connected it to the SATA shield card, and used the provided USB3 jumper). I made sure there was no SD card installed, connected an Ethernet cable, and started up the Raspberry PI, making sure it booted from the SSD drive.

I have had cases where I had to boot from the SD card running Raspberry PI OS (imaged on a Mac or via net booting the RPI), and run these commands to update the OS, EEPROM, and bootloader:

sudo apt update
sudo apt full-upgrade
sudo rpi-update
sudo rpi-eeprom-update -d -a

I then rebooted so that the new firmware was activated, then tried booting with the SD card to see if the SSD drive was visible (sudo fdisk -l), and then booted without the SD card, hoping it would boot from the SSD drive.

I also tried net booting to the installer, and instead of choosing an OS, I chose Utility Apps, Bootloader, and picked the option to boot to USB. I wrote that to the SD storage device, it rebooted, and I checked to see if it booted to the SSD. If it did, I would shut down and restart without the SD card.

It was a bit of a mess trying to get it working. This happened recently, when I wanted to set up two more PI 4s. One I had to go through these hoops with, and the other one worked after I imaged the SSD on the Mac. I would think the PI4 bootloader should boot from USB out of the box, but for some reason, I hit one that did not.

Category: bare-metal, Kubernetes, Raspberry PI
December 25

Part I: Raspberry PI Kubernetes Cluster Goals

Currently, I have a few old tower-based Linux servers running services (VPN, file server, Emby music server, a custom app for monitoring my photovoltaic system, etc.). I had started to adapt several of these to run in containers, so that I could move them around if a system failed, especially since the systems were getting quite old.

In addition, I started to buy some Raspberry PIs so that I had newer technology and hosts that used much less power than my old gear, and I could place these containers on the PIs.

Since I worked with Kubernetes development for several years, before retiring, I decided to build a cluster so I had a way to spread the workload, easily move pods around upon failures, monitor and manage the system, and scale it out as I get more Raspberry PIs.

For the initial design, I have five Raspberry PIs right now, though one is currently hosting a bunch of containers. The plan was to put four of them into service right away, and then once I have my containers migrated over to the cluster, I can add the fifth system. I just got a sixth one for Christmas, so I’ll be adding that in soon.

For the hardware, I’m using Raspberry PI 4s with 8 GB RAM (~$85 each), and I have the PoE+ Hats (~$28), so that I can power them off of the PoE based ethernet hub I have (LinkSys LGS116P 8 regular ports, 8 PoE ports ~$120). I purchased a bunch of Samsung 1 TB SSD drives (870 EVO ~$50). Probably should have gotten 2 TB or larger.

I found a really cool product from UCTRONICS (model B0B6TW81P6 ~$290), which consists of a 1U rack mounted enclosure that holds four Raspberry PI4s, each in a removable tray. There are two fans inside as well. Since I needed one or two more PIs, I also found just a face plate (model RM1U ~ $11) and individual tray units (RM1U-3 ~$76 each). Here is a picture of the enclosure on the bottom, and the face plate on the top.

Each of the tray units has a “SATA shield” logic board with a SATA connector for the SSD drive on the bottom, and a USB connector on the top that can be connected to the USB3 port of the PI using the provided jumper.

There is a front panel with an OLED display that can show the IP address, CPU temp, disk usage, and RAM usage (driven by a small python app that can be tweaked). There are SD and SSD activity LEDs on the shield card, and they provide a jumper cable for the SD card of the PI so that it is accessible from the front panel (via the shield card). Lastly, there is a power switch, so that you can do a clean shutdown.

There is room for the PI’s PoE Hat, and there is a fan connector on the shield card, so that you can attach one of the fans in the enclosure to one of the PIs. The face plate is made out for a Model 4B PI. The enclosure is pricey, but a really great way to place these into a rack, have an SSD drive connected, and be able to cleanly shut down the units.

The RM1U is not enclosed, and there are no fans, but I wasn’t concerned, as this unit would be in the basement, which is cool year round. I don’t know whether UCTRONICS will make something for the Raspberry PI 5s or how long they will make these rack mounts and enclosures, but it was a nice way for me to bundle things up.

Each of the Raspberry PIs will have a fixed IP address and a unique name (versus having node1, node2, …). I chose to have my router reserve IP addresses outside of the range used for DHCP. Alternately, you could configure each PI with a static IP address.

Part II will discuss how to prepare the PIs for cluster use.

Category: bare-metal, Kubernetes, Raspberry PI
November 29

Dual-Stack Kubernetes on bare-metal with LazyJack

v1.0

Preliminary support has been added to Lazyjack as of 1.3.5! As of Kubernetes 1.13, the KEP for dual-stack is still under review, and only a few changes have been made to the code, but you can bring up a cluster in dual-stack mode. You will only see one family of IPs for pods displayed via “kubectl get pod”, but if you look on the pods, you will see both IPv4 and IPv6 addresses.

I’ve already updated kubeadm-dind-cluster to support dual-stack for clusters brought up on a single node, using docker-in-docker, but now Lazyjack supports this too, on bare-metal nodes.

The config.yaml file for Lazyjack will have these changes:

  • A second CIDR can be specified for the management and pod networks, by using the “cidr2” field, under the respective sections. You can specify one family under “cidr” and one under “cidr2”.
  • The service network CIDR will specify which family is used for the service network. The KEP only supports a single IP family for service networks at this time.
  • Omit the DNS64 and NAT64 sections, which are not used in dual-stack mode.
  • The ‘dns64’ and ‘nat64’ operational modes are not specified under the opmodes field for any nodes.

Here is an example config that is using IPv6 for the service network:

general:
    mode: dual-stack
    plugin: ptp
    insecure: true
    kubernetes-version: "v1.13.0-alpha.3"
    work-area: "/home/c2/bare-metal/work-area"
topology:
    minion1:
        interface: "enp10s0"
        opmodes: "minion"
        id: 2
    minion2:
        interface: "enp9s0"
        opmodes: "minion"
        id: 3
    my-master:
        interface: "enp10s0"
        opmodes: "master"
        id: 4
mgmt_net:
    cidr: "10.192.0.0/16"
    cidr2: "fd00:20::/64"
pod_net:
    cidr: "10.244.0.0/16"
    cidr2: "fd00:40::/72"
service_net:
    cidr: "fd00:30::/110"

 

 

 

 

 

 

 

Category: bare-metal, Kubernetes
November 8

Dual-stack Kubernetes with kubeadm-dind-cluster

V1.0

Overview

A coworker has pushed out a Kubernetes Enhancement Proposal (KEP) for dual-stack Kubernetes that is currently under review by the community. This capability is currently targeted for the 1.14 release. Related PR: https://github.com/kubernetes/kubernetes/pull/70659

This proposal will provide IPv4 and IPv6 addresses for all containers (pod network) and nodes (management network), allowing communication with other pods and external resources using either protocol. To simplify things, this first release will use a single IP family for services, meaning the service network will be either IPv4 or IPv6.

What’s Up?

We’ve started implementing some changes to support dual-stack (as WIP, in some cases, because the KEP is not approved yet). To support that, I’ve modified the kubeadm-dind-cluster provisioning tool (a.k.a k-d-c) so that we can experiment with bringing up a cluster with dual-stack networking, during development.

The changes include setting up the CNI configuration files for dual-stack, adding static routes for the Bridge or PTP plugin so that pods can communicate with either IP family across nodes, adjusting the KubeAdm configuration file so that the API will use a specific IP family, and not making use of the DNS64/NAT64 capabilities, as both IP families are available on each container.

I’ve verified that we can bring up a cluster in dual-stack mode, with pod to pod (across nodes) and pod to external connectivity using both IPv4 and IPv6. I’ve used IPv4 for the service network, and with PR 70659 (under review as of today), I have verified a cluster with an IPv6 service network.

Granted, there are things that don’t work yet, as much of the KEP needs to be implemented (like service endpoints and pod status API), but it was very satisfying to see a PoC cluster come up.

How To…

To try this out, there are a few preparation steps. First, clone the kubeadm-dind-cluster repo.

cd
git clone https://github.com/kubernetes-sigs/kubeadm-dind-cluster.git dind

 

Next, clone Kubernetes in a subdirectory underneath k-d-c:

cd ~/dind
git clone https://github.com/kubernetes/kubernetes.git

Within the Kubernetes repo, grab my PR that is out for review (or wait until this is merged):

cd kubernetes
git fetch origin pull/70659/head:pr70659
git checkout pr70659

Now, you can bring up a cluster in dual-stack mode, using the desired service network IP family. You can set the dual stack mode:

export IP_MODE=dual-stack

And since we are customizing the Kubernetes code, we need to tell k-d-c to build a new image:

export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local
export BUILD_KUBEADM=y
export BUILD_HYPERKUBE=y

If you are in a lab, and need to use a company DNS server, you can also set REMOTE_DNS64_V4SERVER.
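For example, something like this (the address here is just a placeholder for your own DNS server):

export REMOTE_DNS64_V4SERVER=10.0.0.53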

Now, let’s build a new k-d-c image:

cd ..
build/build-local.sh
cd kubernetes

 

To use an IPv6 service network, you can just bring up the cluster using the default values:

../dind-cluster.sh up

 

To use IPv4, you’ll need to first set SERVICE_CIDR to an IPv4 CIDR, before bringing up the cluster. You can use the same value that k-d-c uses for IPv4 only networks, like:

export SERVICE_CIDR="10.96.0.0/12"

 

Then, just use the same “up” command to bring things up.

In each of these modes, you’ll see either IPv4 or IPv6 addresses, when doing a “kubectl get pods --all-namespaces -o wide” command. The pods will still have both IPv4 and IPv6 addresses, and from the pods, you’ll be able to ping and ping6 to external IPv4 and IPv6 sites, respectively.

 

Futures…

I haven’t played with external access to the cluster, and obviously there is work to do for the APIs and kube-proxy, along with changes to kubeadm (see the KEP for details).

I’m working on updating my Lazyjack tool that helps with provisioning Kubernetes on bare-metal nodes, so that it too can bring up dual-stack clusters. This will provide feature parity with k-d-c, only using separate physical nodes, instead of Kubernetes running on node containers (using Docker-in-docker) on a single host.

 

Category: Kubernetes
October 22

Lazyjack 1.3.2 New Features

Several new capabilities have been added to Lazyjack recently:

  • Supports IPv4 only mode, so clusters can be created with IPv4 addresses.
  • The kubeadm.conf file generated will use templates that are version specific. This allows easier customizing of the configuration. Supports Kubernetes 1.10-1.13, although I am experiencing some issues using the alpha 1.13 setup.
  • Clusters can be configured for insecure mode, where init is not needed and the config YAML file doesn’t have to be copied over to minions (as it is not updated with a token). This makes it easier to start up a cluster, by just running the prepare and up steps (see the quick sketch below).
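As a rough sketch of that workflow (based on the lazyjack commands shown in the KubeAdm post later in this blog, with “insecure: true” set under the general section of config.yaml):

# On each node, with insecure mode enabled in config.yaml:
sudo lazyjack prepare
sudo lazyjack up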

Time permitting, I hope to add dual stack capabilities.

 

Category: bare-metal, Kubernetes
August 15

Lazyjack IPv6 Updates

v1.1

Since its introduction, and as of Lazyjack v1.2.1, several new capabilities have been added (a sample config.yaml fragment illustrating a few of them follows the list):

  • Support for PTP CNI plugin. User can specify “ptp” in config.yaml for “General: Plugin”, instead of the default “bridge” setting.
  • DNS64 configuration is stored in a volume, instead of host local file. This provides a more secure setup for the container.
  • Documentation updated to indicate how to use new capabilities, and how to customize cluster setup.
  • NAT64 dynamic IPv4 pool is configurable. The CIDR specified in “nat64: v4_cidr” of config.yaml can be adjusted to allow different subnets to be used, in case of conflicts.
  • Customizable MTU for pod/management network.  The “pod_net: mtu” setting in config.yaml can be used to set the MTU used.
  • Direct access to IPv6 external hosts without using the DNS64 prefix. Setting `dns64: allow_aaaa_use` in config.yaml to “true” allows IPv6-capable external sites to be accessed directly.
  • Removed hard-coded Kubernetes version in kubeadm.conf template, so that user can specify version to be used.
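Here is a rough example fragment pulling several of these settings together; the field names come from the list above, but the values shown are placeholders that you would adapt to your setup:

general:
    plugin: ptp
    work-area: "/home/me/lazyjack/work-area"
dns64:
    allow_aaaa_use: true
nat64:
    v4_cidr: "172.18.0.0/24"
pod_net:
    mtu: 1500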

Other features, like running kube-proxy in IPVS mode, or selecting CoreDNS as the DNS server instead of kube-dns, can be enabled by altering the kubeadm.conf file that is created by the “prepare” step, and then performing the “up” step. See the README.md file for more info.

Note: For security purposes, it is strongly recommended that you set “general: work-area” to an area with restricted access. The default area, “/tmp”, is world-writable and could be tampered with by users who should not have access.

Category: Kubernetes
July 20

How to recover (I think) from a botched Kubernetes update

v1.0

What Happened?

I was using KubeAdm v1.10 and wanted to give the latest Kubernetes from master a try. I (unfortunately) just updated the binaries for kubeadm, kubectl, and kubelet. I restarted the kubelet daemon (“sudo systemctl restart kubelet”), and then ran “kubeadm init”, hoping to sit back and watch the new cluster come up.

First, I found that the config file needed a newer API version, so I changed that to use “kubeadm.k8s.io/v1alpha2”, instead of “kubeadm.k8s.io/v1alpha1”, and tried “kubeadm init” again.

Well, it failed to come up, and in looking at the issue, I found that the kubelet configuration file, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, was referring to a kubelet config file that did not exist:

Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"

 

I had no clue how to create this file, nor why it wasn’t there.

 

What Should Have Been Done?

It looks like, going from v1.10 to a newer version, the upgrade procedures should be used. One needs to go from one minor release to the next, one at a time (1.10 -> 1.11, 1.11 -> 1.12, …). In this process, one can use “kubeadm config migrate --old-config kubeadm.conf --new-config new-kubeadm.conf” to update the config file. You can then change the API version, and set the Kubernetes version, before using the config file in the update.

I’m not sure what you would do, if you didn’t have a running v1.10 cluster, as this method seems to imply that is needed. Maybe you’d end up in the same state as I was in.

 

What to do, if you didn’t do the upgrade?

From what I can tell, it appears that v1.11+ needs /var/lib/kubelet/config.yaml, which doesn’t exist in v1.10. It will get generated when kubeadm init is invoked, and removed when kubeadm reset is done. But, when I had a previous v1.10 install, it was not getting created during init, and with this file missing, init fails. That file was the key to try to recover from my mess.

On a fresh system install, I brought up Kubernetes v1.11, using KubeAdm and the same config file that I was using on the corrupt system. I took the config.yaml (this one is what I had, YMMV) that was created, and placed it on the system that was corrupted, and then brought up the cluster with “kubeadm init”.

The cluster came up OK with that change. Oddly enough, doing a “kubeadm reset” removed the file, and it was recreated the next time I did “kubeadm init”. I’m not sure why, when I had the v1.10 setup and then switched to v1.11, the file was not created. In any case, I’m happy it is working now.

I did see another problem though, and I’m not sure if it is related. Once I brought up the master node, set the bridge CNI plugin config file, and untainted the node, I created some alpine pods. From the pod, I could not ping other pods on the node. Looking at the iptables rules, I was seeing this policy on the FORWARD chain…

-P FORWARD DROP

instead of…

-P FORWARD ACCEPT

I flushed all the iptables rules, and then brought up the cluster with KubeAdm again, and after untainting and creating pods, everything seemed to work just peachy.
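For reference, resetting things looked roughly like this (use with care, since flushing removes all existing rules in the filter table):

# Restore the FORWARD chain policy and flush the filter table rules
sudo iptables -P FORWARD ACCEPT
sudo iptables -F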

 

 

Category: Kubernetes
July 20

Using kube-router with kubeadm-dind-cluster

v1.0

What is kube-router?

One of the options for networking in Kubernetes is to use kube-router. This plugin uses the Bridge CNI plugin and goBGP to provide networking for the cluster.

Why use it?

With kube-router, the IPVS (IP Virtual Server) kernel module is used instead of iptables rules. This gives much better performance (hash vs. linear lookups) and scales much better. Kube-router also uses goBGP to provide full mesh connectivity using iBGP, instead of requiring static routes, as is the case when using the bridge plugin alone.

What is kubeadm-dind-cluster?

The kubeadm-dind-cluster tool, which I’ve mentioned here before, allows you to create a Kubernetes cluster on a single host (VM or bare-metal), by using Docker-in-Docker (it creates docker containers, which become the nodes where KubeAdm is invoked to bring up a cluster).

This tool is nice, because it saves you from doing all the tedious steps of setting up a cluster using KubeAdm manually. There are instructions in the kubeadm-dind-cluster repo on how to use the tool to bring up a cluster. The tool supports the bridge, calico, flannel, and weave CNI plugins.

Peanut Butter and Chocolate…

I have a PR out in kubernetes-sig/kubeadm-dind-cluster repo to add support for using kube-router, instead of kube-proxy. To use this, you can perform the following steps (assuming you have a kubeadm-dind-cluster repo pulled):

  1. Patch in the PR changes
    1. git fetch origin pull/159/head:pr159
    2. git log --abbrev-commit pr159 --oneline -n 1 | cut -f 1 -d" "
    3. git cherry-pick <# from log output>
  2. build/build-local.sh
  3. export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local
  4. export CNI_PLUGIN=kube-router
  5. Bring up the cluster “./dind-cluster.sh up”

This will skip the normal bridge CNI plugin setup and creation of static routes, run a YAML file to configure the bridge CNI plugin and startup kube-router pods on each node (which will start up BGP), and will then remove the kube-proxy daemonset.

Once the cluster is up, you can “kubectl exec” into one of the kube-router pods to see the BGP and IPVS setup.
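Something along these lines (the pod name and label are placeholders/assumptions, and I’m assuming the kube-router image includes the gobgp and ipvsadm CLIs, as its documentation describes):

# List the kube-router pods (label assumed from the kube-router daemonset)
kubectl -n kube-system get pods -l k8s-app=kube-router

# Inspect BGP peering and the IPVS tables from inside one of them
kubectl -n kube-system exec -it kube-router-xxxxx -- gobgp neighbor
kubectl -n kube-system exec -it kube-router-xxxxx -- ipvsadm -Ln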

After the PR (159) is upstreamed, you’ll only need to set the CNI_PLUGIN to kube-router, and then bring up the cluster.

 

Limitations

Currently, this only works with IPv4. Although ipset and goBGP support IPv6, kube-router is not set up to run in IPv6 mode. There is a PR to add IPv6 support.

Category: Kubernetes
June 6

IPv6 Kubernetes – Improving External Access Performance

v1.2 – June 14th 2018

Summary

With current Kubernetes IPv6 only clusters (v1.9.0+), a brute force approach was taken, to deal with the outside world. Since there are some external sites that are IPv4 only, Kubernetes was set up with a NAT64 and DNS64 server to treat all external destinations as IPv4 only.

Here, we’ll talk about ways to more intelligently handle external sites, using IPv6 access, when possible. The result is an improvement in performance, both in space and time.

 

What We Have Today

Let’s use an example of a pod on a minion node of a three node, bare-metal, IPv6 only Kubernetes cluster, trying to ping google.com.

First, the pod requests a lookup of the destination name, to obtain the IP address. Since not all destinations support IPv6 (e.g. github.com), the DNS64 server in the cluster is configured to always use the A record (IPv4) and ignore any AAAA record (IPv6). The IPv4 address will be embedded into a synthesized IPv6 address, using the configured prefix. In this example, the address 216.58.217.78 is combined with the fd00:10:64:ff9b:: prefix to get fd00:10:64:ff9b::d83a:d94e.

The pod (fd00:40::3:0:0:4e7) will then send a ping request out its interface (to fd00:10:64:ff9b::d83a:d94e), as shown at (A) in the diagram below.

The ping request will cross the local bridge, br0, and the routing table on the node will direct the packet, over the pod network, to the master node. The packet will be sent (B) from the minion node’s eth1 interface (fd00:20::3) to the master node’s pod network interface (eth1). The route on the master node, will direct the packet to the NAT64 server (a container), over the veth interface.

The NAT64 server (C) creates mapping of source IPv6 address (at this point the minion node’s pod network interface fd00:20::3) to a private IPv4 address (172.18.0.53) from a locally maintained pool. It will extract the destination IPv4 address (216.58.217.78) and send the ping to the master node (D), where iptables employs SNAT to map the private IPv4 address to the node’s IPv4 address (e.g. 10.1.1.2).

Finally, the packet is sent out the main interface (E) to the next hop, which would also do SNAT for this local IPv4 address.

The ping response would follow the reverse route through the NAT64 server, to the minion node, and finally to the pod.

 

Improvements For IPv6 External Sites

We can, however, configure the DNS64 to allow AAAA records to be used, for external destinations that support IPv6 addressing.

In this example,  the DNS lookup would return the AAAA record for google.com (2607:f8b0:4004:801::200e) and the pod shown at (A) would send a ping to that address, as shown in the diagram below.

The ping request would traverse the local bridge, br0, and the routing table on the minion node would direct the packet out the main interface (eth0), and using SNAT, would use the IP of the node as the source address (2001:db8::100), as shown at (B). The packet would be sent to the next hop, where SNAT may occur, if the minion node’s IPv6 address is not public.

The ping response would follow the reverse route, into the minion node, and to the pod.

This avoids sending the packets to the master node’s NAT64 server, where translation and mapping is performed, both a time and space savings (no mapping table needed).

 

Bare Metal Implementation Details

The Lazyjack tool has been modified (in v1.1.0+) to allow the user to specify whether or not destinations that support IPv6 addressing can be directly accessed, without using NAT64.

Under the dns64 section in the config.yaml, there is a new entry titled “allow_aaaa_use”, which, if set to “true”, will use the AAAA records from DNS64 and directly access external IPv6 addresses. If omitted, or set to “false”, the existing mechanism is used: only the A DNS record is consulted, and NAT64 is performed on all packets for external destinations.
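In config.yaml, that is a minimal fragment like this (section and field name as described above):

dns64:
    allow_aaaa_use: true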

Before using Lazyjack, the nodes of the cluster must be provisioned for IPv6. On each node, this includes:

  • Enabling IPv6 and IPv6 forwarding on main interface.
  • Giving the main interface (with Internet access) an IPv6 address (we used SLAAC).
  • Having a default IPv6 route that sends traffic out the main interface (done via SLAAC).
  • To preserve the default route, set the accept_ra sysctl to a value of two. For example:
sudo sysctl net.ipv6.conf.eth0.accept_ra=2

 

KubeAdm-dind-cluster (DinD) Implementation Details

As of PR 148 merging, the Kubeadm-dind-cluster tool (note the new repo location) for provisioning clusters has been updated to allow the user to enable the ability to use (IPv6) AAAA records for DNS lookups, so that unaltered IPv6 addresses can be used, rather than forcing the use of (IPv4) A records and requiring DNS64 to be used. This new capability can be enabled by setting the environment variable, DIND_ALLOW_AAAA_USE=true.
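For example, something like this before bringing up the cluster (the up command is the same one used elsewhere in this blog):

export DIND_ALLOW_AAAA_USE=true
./dind-cluster.sh up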

The k-d-c tool will then use a modified DNS64 configuration, and create the needed ip6tables entries on the host to allow forwarding of packets to the kubeadm-dind-net bridge, and perform SNAT for outgoing packets.

You can check the PR, and once merged, use the latest code on the master branch.

 

Category: bare-metal, Kubernetes
March 14

KubeAdm with Local Kubernetes Repo for IPv6

V1.4

Goal

In working with my lazyjack tool to create IPv6 based Kubernetes clusters on bare metal systems, I wanted to run the latest code on master (1.10.0-beta.2 at this time) to run E2E tests and possibly tweak things. So, I needed to be able to run KubeAdm with my own repo, instead of using something prebuilt from upstream.

In addition, I wanted to make sure that I could do some customizing with lazyjack.

 

Preparation

As described in my blog post on lazyjack, I have a three node, bare-metal setup, with a second interface connected to a physical switch for the Kubernetes management/pod network. The lazyjack tool is installed on the nodes and I have a config.yaml with the network topology, including all the IP addresses desired. Development tools (go, git, etc.) are installed on the node used as the master node.

I’m using Ubuntu 16.04 on each of my systems.

Next, I pulled down the latest Kubernetes code:

mkdir -p ~/go/src/k8s.io
cd ~/go/src/k8s.io
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

 

Now, we are ready to set things up to use this repo for the Kubernetes cluster.

 

Steps

Repo Prep

I checked out a branch that had the version I wanted. At the time of this writing, beta 2 of 1.10 was available:

cd ~/go/src/k8s.io/kubernetes
git checkout -b release-1.10 origin/release-1.10
git checkout v1.10.0-beta.2

 

Building/Installing

Next, I built everything and installed the kubectl, kubeadm, and kubelet binaries, so that we’re using the latest for everything, then restarted kubelet to get the new version running:

make clean
make
make release

cd _output/bin/
sudo cp kubeadm kubectl kubelet /usr/bin/
sudo systemctl daemon-reload
sudo systemctl restart kubelet

 

I copied these three binaries over to the two minion systems, placed them in /usr/bin, and restarted kubelet to update them as well.

The release images are in TAR files, which can be loaded into docker:

cd ~/go/src/k8s.io/kubernetes/_output/release-images/amd64
for f in *.tar; do docker load -i $f; done

 

Startup Local Registry

The docker daemon needs to know about an insecure registry that is being created on one of the nodes. Create the following file on each system (as root):

cat > /etc/docker/daemon.json <<EOT
{
    "insecure-registries": ["10.86.7.77:5000"]
}
EOT
systemctl restart docker

 

I guess I could have used the management IP for the master node ([fd00:20::2]), but I decided to use the admin interface IP of the machine on the lab network. In this case, it was 10.86.7.77. You would replace this, with your master node’s IP address.

On the master node, you want to start the local registry:

docker run -d -p 5000:5000 --restart always --name registry registry:2

 

Tagging images

Note: If there is an easier way than this (like some make target), please let me know…

After making the release and doing a docker load for each tar file, there are a bunch of images loaded in docker. We need to tag these images with our master node’s IP and port 5000 (10.86.7.77:5000 in my case). This is a bit complicated, as most of the images built do not have the -amd64 suffix and the tag used has an underscore, which doesn’t play well in the sandbox. Hopefully there is an easier way, but this is what I did…

I first found the list of images by doing “docker images”, to see the tag that was created (e.g. v1.10.0-beta.2.17_3d19fe4010c246-dirty). Here’s the list of images:

k8s.gcr.io/hyperkube-amd64
gcr.io/google_containers/kube-apiserver
k8s.gcr.io/kube-apiserver
gcr.io/google_containers/kube-controller-manager
k8s.gcr.io/kube-controller-manager
gcr.io/google_containers/cloud-controller-manager
k8s.gcr.io/cloud-controller-manager
k8s.gcr.io/kube-aggregator
gcr.io/google_containers/kube-aggregator
gcr.io/google_containers/kube-scheduler
k8s.gcr.io/kube-scheduler
gcr.io/google_containers/kube-proxy
k8s.gcr.io/kube-proxy

With that, I could filter by the tag and build up the commands needed to tag these images for my local repo (10.86.7.77:5000). You can see that I had to strip out the registry name to just have the image name part for the local repo tag:

docker images \
    --format="docker tag {{.Repository}}:{{.Tag}} 10.86.7.77:5000/{{.Repository}}:v1.10.0-beta.2" | \
    grep v1.10.0-beta.2.17_3d19fe4010c246 | \
    sed -e "s?5000/gcr.io/google_containers?5000?" | \
    sed -e "s?5000/k8s.gcr.io?5000?" > x
chmod 777 x

 

You can do the same as above, only replace the tag (e.g. v1.10.0-beta.2.17_3d19fe4010c246-dirty) with what you have for a tag. Also, since we need images with the -amd64 suffix, the same command can be tweaked to create those tags too:

docker images \
    --format="docker tag {{.Repository}}:{{.Tag}} 10.86.7.77:5000/{{.Repository}}-amd64:v1.10.0-beta.2" | \
    grep v1.10.0-beta.2.17_3d19fe4010c246 | \
    sed -e "s?5000/gcr.io/google_containers?5000?" | \
    sed -e "s?5000/k8s.gcr.io?5000?" > y
chmod 777 y

 

Before invoking “y”, you need to remove the “hyperkube-amd64” line (the first one?), as it already has the right suffix. Now, you can invoke these two files and create all the tags needed. I don’t know if you need the tags in file “x” (other than the hyperkube-amd64), so you could try skipping running that file and just doing the first line, along with running file “y”. In my Kubernetes cluster, I think all the images had the -amd64 suffix.
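In other words, something like this (after removing the hyperkube-amd64 line from “y”):

./x
./y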

 

Pushing Images

With everything tagged, you can then push these images to your local repo. I ran this command first, to check that I had the right syntax for my tags:

docker images | grep 10.86.7.77:5000
10.86.7.77:5000/hyperkube-amd64 v1.10.0-beta.2 351430a5275d 3 days ago 633 MB
10.86.7.77:5000/kube-apiserver v1.10.0-beta.2 2e8a0bd89199 3 days ago 224 MB
10.86.7.77:5000/kube-apiserver-amd64 v1.10.0-beta.2 2e8a0bd89199 3 days ago 224 MB
10.86.7.77:5000/kube-controller-manager v1.10.0-beta.2 80ea4fb85ccb 3 days ago 147 MB
10.86.7.77:5000/kube-controller-manager-amd64 v1.10.0-beta.2 80ea4fb85ccb 3 days ago 1
...

 

If that looks good (registry, image name, and tag), create and invoke the following file in order to push them up:

docker images --format="docker push {{.Repository}}:{{.Tag}}" | \
    grep 10.86.7.77:5000 > z
chmod 777 z
./z

 

Don’t Forget These Images…

Once I tried this all out, I hit a problem with missing images. It turns out that KubeAdm is using some older versions of etcd (3.2.16) and kube-dns (1.14.8) images. To be sure I had these in my local registry (because everything will come from there), I needed to handle these specially:

docker pull gcr.io/google_containers/etcd-amd64:3.2.16
docker tag gcr.io/google_containers/etcd-amd64:3.2.16 10.86.7.77:5000/etcd-amd64:3.2.16
docker push 10.86.7.77:5000/etcd-amd64:3.2.16
docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.8 10.86.7.77:5000/k8s-dns-kube-dns-amd64:1.14.8
docker push 10.86.7.77:5000/k8s-dns-kube-dns-amd64:1.14.8
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8 10.86.7.77:5000/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker push 10.86.7.77:5000/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.8 10.86.7.77:5000/k8s-dns-sidecar-amd64:1.14.8
docker push 10.86.7.77:5000/k8s-dns-sidecar-amd64:1.14.8

As you can see, I didn’t have the etcd image, and had to pull it first. You can check with “docker images” to see if you have to pull any images, before tagging and pushing. Note: For some systems, I’ve had to pull from k8s.gcr.io instead of gcr.io/google_containers.

The versions of etcd and kube-dns that KubeAdm uses are hard coded in the code, so I had to find out by trial and error. Looking at logs, I noticed that these pods were not coming up, and saw what image versions were being tried in the pulls (e.g. 10.86.7.77:5000/etcd-amd64:3.2.16). I would then, do a pull of that version from k8s.gcr.io or gcr.io/google_containers and push it up to my local repo.

 

Bringing Up The Cluster

To bring things up quickly, I’m using a current release (v1.0.6) of my lazyjack tool, but you can do the same manually, if you’re a masochist :). I installed it in /usr/local/bin so that it is in my path.

I created a config.yaml for my setup to represent the topology and addresses that I wanted to use (see lazyjack README.md for details on this file).

For this effort, we need to customize the generated kubeadm.conf file. As a convenience, I overrode the default work area to point to an area under the account I was using (though you can use the default /tmp/lazyjack/ area, if desired):

general:
   plugin: bridge
   work-area: "/home/c2/bare-metal/work-area"

 

I ran “sudo lazyjack init” to create the certificates needed and place them into the config.yaml file. I copied this updated YAML file to the minion nodes, which also have lazyjack installed.

Next, I ran “sudo lazyjack prepare” on the master, and each of the minion nodes. There should be bind9 and tayga containers running on the master, for the DNS64 and NAT64 servers, respectively. This step will also create a kubeadm.conf file in the work area.

To bring up the cluster with the local repo, we need to edit that file to change the kubernetesVersion line to the version tag we are using (instead of 1.9.0 that the tool created), and add the imageRepository line pointing to our repo. In this example, I used:

kubernetesVersion: v1.10.0-beta.2
imageRepository: 10.86.7.77:5000

 

Now, on the master, I ran “sudo lazyjack up”. This takes a few minutes, and the output should indicate success and provide the lines to setup kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Run kubectl and make sure that all the pods and services are up and running. Then, you can run the “up” command on the other nodes and check with kubectl that the nodes are “ready” and the additional proxy pods are running.

You should be able to then use your IPv6 based cluster, running code from your local repo!

Issues

Please be sure to use docker version 17.03, as versions 17.06, 17.09, 17.12, 18.03, and 18.04 are showing that, even with the host enabling IPv6, any containers created have IPv6 disabled. The effect seen is that the kube-dns pod is stuck in the “creating” state, and logs show that the CNI plugin is failing to add an IPv6 address to the pod (permission denied error). This is true for any user-created pods that use the pod network and need an IPv6 address from the CNI plugin. This is discussed in this CNI issue and in this docker issue.

 

Category: bare-metal, Kubernetes