Kubernetes on a lab system behind a firewall
After numerous tries, I think I finally came across a setup that allows me to run Kubernetes (via KubeAdm), using the Calico plugin, on a bare-metal system that sits behind a firewall and needs a proxy to reach the outside. This blog describes the process I used to get this to work.
Preparation for CentOS
On the bare-metal system (a Cisco UCS) running CentOS 7.3, the needed packages must be installed. First, add the Kubernetes repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
I ran “yum update -y” to update the system. Next, the packages need to be installed. Note: I had set up this system weeks before, so hopefully I’ve captured all the steps (if not, let me know):
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
I do recall at one point hitting a conflict between the Docker install and what was already on the system (maybe from mucking around installing things on this system before). In any case, make sure Docker is installed and working; on my system, “docker version” shows 1.13. You may want to check “docker version” first and, if Docker is already installed, skip reinstalling it.
Preparation for Ubuntu 16.04
For Ubuntu, the Kubernetes repo needs to be added along with keys, and then everything installed.
sudo su
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo deb http://apt.kubernetes.io/ kubernetes-xenial main >> /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Proxy Setup
With everything installed (I hope :)), I next set up the proxy, with http_proxy and https_proxy (lower- and uppercase environment variables) pointing to the proxy server, and no_proxy set to the IPs that should not go through the proxy server. For this system, no_proxy had the host IP, 127.0.0.1, the IPs in the Calico IPv4 pool, and the service IPs. The defaults use large subnets, so I reduced them to keep the no_proxy setting manageable.
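The proxy server part looks something like this (the proxy host and port below are placeholders; substitute your own proxy server):

# Placeholder proxy server: substitute your own
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80
export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$https_proxy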
For the IPv4 pool, I’m using 192.168.0.0/24 (reduced from the default size), and for the service IP subnet, I’m using 10.20.30.0/24 (instead of 10.96.0.0/12). I used these lines in .bashrc to create the no_proxy setting:
printf -v lan '%s,' 10.86.7.206
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.20.30.{1..253}
export no_proxy="cisco.com,${lan%,},${service%,},${pool%,},127.0.0.1"
export NO_PROXY=$no_proxy
Make sure you’ve got these environment variables sourced.
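A quick sanity check after sourcing .bashrc:

source ~/.bashrc
env | grep -i _proxy    # confirm http_proxy, https_proxy, and no_proxy are all set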
Update: Alternative Proxy Setup
You can keep the default 10.96.0.10 IP in 10-kubeadm.conf, and instead use “--service-cidr=10.96.0.0/24” on the kubeadm init line to reduce the size of the subnet.
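With this alternative, the kubeadm init line (shown in full in “Are We There Yet?” below) becomes:

kubeadm init --api-advertise-addresses=10.86.7.206 --service-cidr=10.96.0.0/24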
In the .bashrc file, use this for service pool:
printf -v lan '%s,' 10.87.49.77
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.96.0.{1..253}
export no_proxy="cisco.com,${lan%,},${service%,},${pool%,},127.0.0.1"
export NO_PROXY=$no_proxy
Calico.yaml Configuration
Obtain the latest calico.yaml. I used the one from a tutorial (https://github.com/gunjan5/calico-tutorials/blob/master/kubeadm/calico.yaml, commit a10bfd1d), but you may have success with http://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/; I just haven’t tried it or sorted out the differences.
Two changes are needed in this file: etcd_endpoints needs to specify the host IP, and the ippool CIDR should be changed from /16 to /24, so that we have a manageable number of no_proxy entries.
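As a rough illustration (key names and layout vary between calico.yaml versions, and the pool may be set via the CALICO_IPV4POOL_CIDR environment variable or elsewhere, so treat this as a sketch rather than an exact diff), the two spots end up looking something like:

# In the calico-config ConfigMap: etcd endpoint pointing at the host IP (calico-etcd listens on 6666)
etcd_endpoints: "http://10.86.7.206:6666"

# In the calico-node container env: pool reduced from /16 to /24
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/24"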
Since we are changing the default subnet for services, I changed /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to use 10.20.30.10 for the --cluster-dns argument in the KUBELET_DNS_ARGS environment setting. Be sure to restart systemd (systemctl daemon-reexec) after making this change. Otherwise, when you start up the cluster, the services will show the new 10.20.30.x IP addresses, but the kubelet process will still have the default --cluster-dns value of 10.96.0.10. This threw me for a while, until Ghe Rivero mentioned this on the KubeAdm slack channel (thanks!).
Update: If you stick with 10.96.0.10 for cluster-dns, you don’t need to change 10-kubeadm.conf (skip the previous paragraph).
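If you do make the 10-kubeadm.conf change, the edit is just the --cluster-dns value in the KUBELET_DNS_ARGS line; the rest of the file varies with the kubeadm version and is left untouched. Illustrative excerpt (the --cluster-domain value shown is the usual default):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.20.30.10 --cluster-domain=cluster.local"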
Are We There Yet?
Hopefully, I have everything prepared (I’ll know next time I try to set this up from scratch). If so, here are the steps used to start things up (as the root user!):
kubeadm init --api-advertise-addresses=10.86.7.206 --service-cidr=10.20.30.0/24
Update: If you use the alternative method for the service subnet, you’ll use --service-cidr=10.96.0.0/24, and the IPs will be different in the “kubectl get svc” output below.
This will display the kubeadm join command for adding other nodes to the cluster (I haven’t tried that yet with this setup).
kubectl taint nodes --all dedicated-
kubectl apply -f calico.yaml
kubectl get pods --all-namespaces -o wide
At this point (after some time), you should be able to see that all the pods are up and have the IP address of the host, except for the DNS pod, which will have an IP from the 192.168.0.0/24 pool:
[root@bxb-ds-52 calico]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   calico-etcd-wk533                           1/1       Running   0          7m        10.86.7.206     bxb-ds-52
kube-system   calico-node-qxh84                           2/2       Running   0          7m        10.86.7.206     bxb-ds-52
kube-system   calico-policy-controller-2087702136-n19jf   1/1       Running   0          7m        10.86.7.206     bxb-ds-52
kube-system   dummy-2088944543-3sdlj                      1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   etcd-bxb-ds-52                              1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-apiserver-bxb-ds-52                    1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-controller-manager-bxb-ds-52           1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-discovery-1769846148-lb51s             1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-dns-2924299975-c95bg                   4/4       Running   0          31m       192.168.0.128   bxb-ds-52
kube-system   kube-proxy-n0pld                            1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-scheduler-bxb-ds-52                    1/1       Running   0          31m       10.86.7.206     bxb-ds-52
You can also check that the services are in the service pool defined:
[root@bxb-ds-52 calico]# kubectl get svc --all-namespaces -o wide
NAMESPACE     NAME          CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       kubernetes    10.20.30.1    <none>        443/TCP         32m       <none>
kube-system   calico-etcd   10.20.30.2    <nodes>       6666/TCP        8m        k8s-app=calico-etcd
kube-system   kube-dns      10.20.30.10   <none>        53/UDP,53/TCP   31m       name=kube-dns
Now, you should be able to use kubectl to apply manifests for containers (I did one with NGINX), and verify that the container can ping other containers, the host, and other nodes on the host’s network.
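For example, a minimal check along these lines (not necessarily the exact manifest used) exercises the basics:

# Minimal example: start an NGINX pod and check its networking
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF

kubectl get pods -o wide    # the pod should get an IP from the 192.168.0.0/24 pool
ping -c 3 <pod-ip>          # from the host, substitute the pod IP shown above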
What’s Next
I want to try to…
- Join a second node and see if containers are placed there correctly.
- Retry this process from scratch, to make sure this blog reported all the steps.