Using kube-router with kubeadm-dind-cluster
v1.0
What is kube-router?
One of the options for networking in Kubernetes is to use kube-router. This plugin uses the bridge CNI plugin and GoBGP to provide networking for the cluster.
Why use it?
Kube-router uses the IPVS (IP Virtual Server) kernel module for service proxying, instead of iptables rules. This gives much better performance (hash lookups versus serial rule evaluation) and scales much better. Kube-router also uses GoBGP to provide full mesh connectivity via iBGP, instead of requiring the static routes that are needed when using the bridge plugin on its own.
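For example, once kube-router is running you can look at the IPVS state it programs on a node. This is just a sketch; it assumes the ipvsadm utility is available wherever you run it:

    # Each Kubernetes service VIP appears as an IPVS virtual server,
    # with the endpoints listed as real servers, instead of a long
    # chain of iptables rules.
    sudo ipvsadm -L -n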
What is kubeadm-dind-cluster?
The kubeadm-dind-cluster tool, which I've mentioned here before, allows you to create a Kubernetes cluster on a single host (VM or bare-metal) by using Docker-in-Docker: it creates Docker containers that act as nodes, on which kubeadm is invoked to bring up a cluster.
This tool is nice because it saves you from doing all the tedious steps of setting up a cluster with kubeadm manually. There are instructions in the kubeadm-dind-cluster repo on how to use the tool to bring up a cluster. The tool supports the bridge, calico, flannel, and weave CNI plugins.
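As a quick example of normal usage (a sketch based on the repo's CNI_PLUGIN variable and the dind-cluster.sh up/down commands):

    # Pick one of the supported CNI plugins.
    export CNI_PLUGIN=calico
    # Create the node containers and run kubeadm to form the cluster.
    ./dind-cluster.sh up
    # Tear everything down when you are done.
    ./dind-cluster.sh down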
Peanut Butter and Chocolate…
I have a PR out in the kubernetes-sig/kubeadm-dind-cluster repo to add support for using kube-router instead of kube-proxy. To use it, perform the following steps (assuming you have a kubeadm-dind-cluster repo cloned):
- Patch in the PR changes
- git fetch origin pull/159/head:pr159
- git log --abbrev-commit pr159 --oneline -n 1 | cut -f 1 -d" "
- git cherry-pick <# from log output>
- build/build-local.sh
- export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local
- export CNI_PLUGIN=kube-router
- Bring up the cluster: "./dind-cluster.sh up"
This will skip the normal bridge CNI plugin setup and creation of static routes, apply a YAML file that configures the bridge CNI plugin and starts kube-router pods on each node (which bring up BGP), and then remove the kube-proxy daemonset.
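A quick way to confirm the result is to check what is running in kube-system; the exact names and labels depend on the kube-router YAML the PR applies, so treat this as a sketch:

    # kube-proxy should no longer appear in the daemonset list, and a
    # kube-router pod should be running on each node.
    kubectl -n kube-system get daemonsets
    kubectl -n kube-system get pods -o wide | grep kube-router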
Once the cluster is up, you can "kubectl exec" into one of the kube-router pods to see the BGP and IPVS setup.
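For example (a sketch, assuming the kube-router image includes the gobgp and ipvsadm tools and that the pods carry the usual k8s-app=kube-router label):

    # Pick one of the kube-router pods.
    POD=$(kubectl -n kube-system get pods -l k8s-app=kube-router \
          -o jsonpath='{.items[0].metadata.name}')
    # Show the iBGP peering sessions to the other nodes.
    kubectl -n kube-system exec -it $POD -- gobgp neighbor
    # Show the IPVS virtual servers programmed for the services.
    kubectl -n kube-system exec -it $POD -- ipvsadm -L -n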
After PR 159 is merged upstream, you'll only need to set CNI_PLUGIN to kube-router and then bring up the cluster.
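At that point the whole flow collapses to this sketch:

    export CNI_PLUGIN=kube-router
    ./dind-cluster.sh up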
Limitations
Currently, this only works with IPv4. Although ipset and GoBGP support IPv6, kube-router is not set up to run in IPv6 mode. There is a PR out to add IPv6 support.