Dual-stack Kubernetes with kubeadm-dind-cluster
A coworker has pushed out a Kubernetes Enhancement Proposal (KEP) for dual-stack Kubernetes that is currently under review by the community. This capability is targeted for the 1.14 release.
https://github.com/kubernetes/kubernetes/pull/70659
This proposal will provide IPv4 and IPv6 addresses for all containers (pod network) and nodes (management network), allowing communication with other pods and external resources using either protocol. To simplify things, this first release will use a single IP family for services, meaning the service network will be either IPv4 or IPv6.
We’ve started implementing some changes to support dual-stack (as WIP, in some cases, because the KEP is not approved yet). As part of that, I’ve modified the kubeadm-dind-cluster provisioning tool (a.k.a. k-d-c) so that we can experiment with bringing up a cluster with dual-stack networking during development.
The changes include setting up the CNI configuration files for dual-stack, adding static routes for the Bridge or PTP plugin so that pods can communicate across nodes using either IP family, adjusting the kubeadm configuration file so that the API will use a specific IP family, and skipping the DNS64/NAT64 setup, since both IP families are available on each container.
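To give a feel for the CNI side of this, a bridge-plugin configuration with two host-local IPAM ranges (one per family) looks roughly like the following. The network name, bridge name, and subnets here are illustrative placeholders, not the exact values k-d-c generates:

```json
{
  "cniVersion": "0.3.1",
  "name": "dindnet",
  "type": "bridge",
  "bridge": "dind0",
  "isDefaultGateway": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{ "subnet": "10.244.1.0/24" }],
      [{ "subnet": "fd00:10:244:1::/64" }]
    ]
  }
}
```

With two ranges, the host-local IPAM plugin allocates one address from each, so every pod gets both an IPv4 and an IPv6 address.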
I’ve verified that we can bring up a cluster in dual-stack mode, with pod to pod (across nodes) and pod to external connectivity using both IPv4 and IPv6. I’ve used IPv4 for the service network, and with PR 70659 (under review as of today), I have verified a cluster with an IPv6 service network.
Granted, there are things that don’t work yet, as much of the KEP needs to be implemented (like service endpoints and pod status API), but it was very satisfying to see a PoC cluster come up.
To try this out, there are a few preparation steps. First, clone the kubeadm-dind-cluster repo.
cd
git clone https://github.com/kubernetes-sigs/kubeadm-dind-cluster.git dind
Next, clone Kubernetes in a subdirectory underneath k-d-c:
cd ~/dind
git clone https://github.com/kubernetes/kubernetes.git
Within the Kubernetes repo, grab my PR that is out for review (or wait until this is merged):
cd kubernetes
git fetch origin pull/70659/head:pr70659
git checkout pr70659
Now, you can bring up a cluster in dual-stack mode, using the desired service network IP family. First, set dual-stack mode:
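Assuming the mode is selected with k-d-c's IP_MODE environment variable (the variable it already uses for IPv4/IPv6 selection), with a dual-stack value added by these changes:

```shell
# IP_MODE is k-d-c's IP family selector; "dual-stack" is the value
# assumed to be added by these changes.
export IP_MODE=dual-stack
```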
And since we are customizing the Kubernetes code, we need to tell k-d-c to build a new image:
export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local
export BUILD_KUBEADM=y
export BUILD_HYPERKUBE=y
If you are in a lab, and need to use a company DNS server, you can also set REMOTE_DNS64_V4SERVER.
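For example (the address below is just a placeholder; substitute your lab's DNS server):

```shell
# Placeholder address; use your company's DNS server here.
export REMOTE_DNS64_V4SERVER=8.8.8.8
```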
Now, let’s build a new k-d-c image:
cd ..
build/build-local.sh
cd kubernetes
To use an IPv6 service network, you can just bring up the cluster using the default values:
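Assuming you are still in the kubernetes/ subdirectory, that means invoking k-d-c's entry script one level up:

```shell
# dind-cluster.sh is k-d-c's entry script, located in the repo root,
# one level up from the kubernetes/ subdirectory.
../dind-cluster.sh up
```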
To use IPv4, you’ll need to first set SERVICE_CIDR to an IPv4 CIDR, before bringing up the cluster. You can use the same value that k-d-c uses for IPv4 only networks, like:
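For example, assuming 10.96.0.0/12 (the usual Kubernetes IPv4 service CIDR default; check dind-cluster.sh for the exact value your version uses):

```shell
# Assumed to match k-d-c's IPv4-mode default; verify against dind-cluster.sh.
export SERVICE_CIDR="10.96.0.0/12"
```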
Then, just use the same “up” command to bring things up.
In each of these modes, you’ll see either IPv4 or IPv6 addresses when doing a “kubectl get pods --all-namespaces -o wide” command. The pods will still have both IPv4 and IPv6 addresses, and from the pods, you’ll be able to ping and ping6 to external IPv4 and IPv6 sites, respectively.
I haven’t played with external access to the cluster, and obviously there is work to do for the APIs and kube-proxy, along with changes to kubeadm (see the KEP for details).
I’m working on updating my Lazyjack tool that helps with provisioning Kubernetes on bare-metal nodes, so that it too can bring up dual-stack clusters. This will provide feature parity with k-d-c, only using separate physical nodes, instead of Kubernetes running on node containers (using Docker-in-docker) on a single host.