February 14

Kubernetes on a lab system behind firewall

After numerous tries, I think I finally found a setup that lets me run Kubernetes (via KubeAdm), using the Calico plugin, on a bare metal system that is behind a firewall and needs a proxy to reach the outside. This blog describes the process I used to get this working.

Preparation for CentOS

On the bare metal system (a Cisco UCS), running CentOS 7.3, the needed packages have to be installed. First, add the Kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

I ran “yum update -y” to update the system. Next, the packages need to be installed. Note: I had set up this system weeks before, so hopefully I’ve captured all the steps (if not, let me know):

setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni

I do recall at one point hitting a conflict between the docker install and what was already on the system (maybe from mucking around installing things on this system before). In any case, make sure docker is installed and working; on my system, “docker version” shows 1.13. You may want to check “docker version” first and, if docker is already installed, skip reinstalling it.
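As a sketch of that check (ensure_installed is a hypothetical helper, not part of any tool):

```shell
# Hypothetical helper: install a package only if its command is missing
ensure_installed() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1 already installed"
    else
        yum install -y "$1"
    fi
}

ensure_installed docker
```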

Preparation for Ubuntu 16.04

For Ubuntu, the Kubernetes repo needs to be added along with keys, and then everything installed.

sudo su
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo deb http://apt.kubernetes.io/ kubernetes-xenial main >> /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
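If you want to double-check the repo entry before touching /etc/apt, one option is to write it to a scratch file first (a sketch; the /tmp path is arbitrary):

```shell
# Write the repo entry to a scratch file and verify it before copying it
# into /etc/apt/sources.list.d/kubernetes.list
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /tmp/kubernetes.list
grep -q "kubernetes-xenial main" /tmp/kubernetes.list && echo "repo entry looks good"
```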

Proxy Setup

With everything installed (I hope :)), I next set up the proxy, with http_proxy and https_proxy (lower- and uppercase environment variables) pointing to the proxy server, and no_proxy set to IPs that should not go through the proxy server. For this system, no_proxy had the host IP, the IPs for the IPv4 pool, and the IPs for the service IPs. The defaults use large subnets, so I reduced these to make the no_proxy setting more manageable.

For the IPv4 pool, I’m using 192.168.0.0/24 (reduced in size from the default /16), and for the service IP subnet, I’m using 10.20.30.0/24 (instead of the default 10.96.0.0/12). I used these lines in .bashrc to create the no_proxy setting:

printf -v lan '%s,' <host-ip>
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.20.30.{1..253}
export no_proxy="cisco.com,${lan%,},${service%,},${pool%,}"
export NO_PROXY=$no_proxy

Make sure you’ve got these environment variables sourced.
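To confirm the expansion does what you expect, you can count the entries (a sketch; 10.0.0.5 stands in for the host IP):

```shell
# Build the list with a hypothetical host IP and count the entries:
# 1 domain + 1 host + 253 service IPs + 253 pool IPs = 508
printf -v lan '%s,' 10.0.0.5
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.20.30.{1..253}
no_proxy="cisco.com,${lan%,},${service%,},${pool%,}"
echo "$no_proxy" | tr ',' '\n' | wc -l   # 508
```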

Update: Alternative Proxy Setup

You can keep the default IP in 10-kubeadm.conf, and instead use “--service-cidr=10.96.0.0/24” on the kubeadm init line, to reduce the size of the subnet.

In the .bashrc file, use this for service pool:

printf -v lan '%s,' <host-ip>
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.96.0.{1..253}
export no_proxy="cisco.com,${lan%,},${service%,},${pool%,}"
export NO_PROXY=$no_proxy

Calico.yaml Configuration

Obtain the latest calico.yaml (I used this one from a tutorial – https://github.com/gunjan5/calico-tutorials/blob/master/kubeadm/calico.yaml – commit a10bfd1d, but you may have success with http://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/, I just haven’t tried it, or sorted out the differences).

Two changes are needed to this file. The etcd_endpoints needs to specify the host IP, and the ippool cidr should be changed from /16 to /24, so that we have a manageable number of no_proxy entries.
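A sketch of those two edits with sed (10.0.0.5 stands in for the host IP; adjust the patterns to match your copy of calico.yaml):

```shell
HOST_IP=10.0.0.5  # substitute your host's IP

# Point etcd_endpoints at the host (calico-etcd listens on port 6666)
sed -i "s|etcd_endpoints:.*|etcd_endpoints: \"http://${HOST_IP}:6666\"|" calico.yaml

# Shrink the ippool from /16 to /24 to keep no_proxy manageable
sed -i 's|192.168.0.0/16|192.168.0.0/24|' calico.yaml
```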

Since we are changing the default subnet for services, I changed /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to use 10.20.30.10 for the --cluster-dns argument of the KUBELET_DNS_ARGS environment setting. Be sure to reload systemd (systemctl daemon-reload) and restart kubelet after making this change. Otherwise, when you start up the cluster, the services will show the new 10.20.30.x IP addresses, but the kubelet process will still have the default --cluster-dns value of 10.96.0.10. This threw me for a while, until Ghe Rivero mentioned it on the KubeAdm Slack channel (thanks!).
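A sketch of that edit (assuming the file contains the stock --cluster-dns=10.96.0.10 setting):

```shell
CONF=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Swap the default cluster DNS IP for one inside the new service subnet
sed -i 's|--cluster-dns=10.96.0.10|--cluster-dns=10.20.30.10|' "$CONF"

# Pick up the changed unit file and restart kubelet
systemctl daemon-reload
systemctl restart kubelet
```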

Update: If you stick with the default 10.96.0.10 for cluster-dns (i.e., you use the alternative 10.96.0.0/24 service subnet), you don’t need to change 10-kubeadm.conf (skip the previous paragraph).

Are We There Yet?

Hopefully, I have everything prepared (I’ll know next time I try to set up from scratch). If so, here are the steps used to start things up (as root user!):

kubeadm init --api-advertise-addresses=<host-ip> --service-cidr=10.20.30.0/24

Update: If you use the alternative method for the service subnet, you’ll use --service-cidr=10.96.0.0/24, and the IPs will be different in the “kubectl get svc” command below.

This will display the kubeadm join command, for other nodes to be added to the cluster (I haven’t tried that yet with this setup).
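Since the join command only appears in the init output, it’s worth capturing it (a sketch; kubeadm-init.log is just an arbitrary filename, and <host-ip> is your host’s IP):

```shell
# Save the init output so the join command can be retrieved later
kubeadm init --api-advertise-addresses=<host-ip> --service-cidr=10.20.30.0/24 | tee kubeadm-init.log

# Later, recover the join line for additional nodes
grep 'kubeadm join' kubeadm-init.log
```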

kubectl taint nodes --all dedicated-
kubectl apply -f calico.yaml
kubectl get pods --all-namespaces -o wide

At this point (after some time), you should be able to see that all the pods are up and have the IP address of the host, except for the DNS pod, which will have an IP from the pool:

[root@bxb-ds-52 calico]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   calico-etcd-wk533                           1/1       Running   0          7m     bxb-ds-52
kube-system   calico-node-qxh84                           2/2       Running   0          7m     bxb-ds-52
kube-system   calico-policy-controller-2087702136-n19jf   1/1       Running   0          7m     bxb-ds-52
kube-system   dummy-2088944543-3sdlj                      1/1       Running   0          31m     bxb-ds-52
kube-system   etcd-bxb-ds-52                              1/1       Running   0          31m     bxb-ds-52
kube-system   kube-apiserver-bxb-ds-52                    1/1       Running   0          31m     bxb-ds-52
kube-system   kube-controller-manager-bxb-ds-52           1/1       Running   0          31m     bxb-ds-52
kube-system   kube-discovery-1769846148-lb51s             1/1       Running   0          31m     bxb-ds-52
kube-system   kube-dns-2924299975-c95bg                   4/4       Running   0          31m   bxb-ds-52
kube-system   kube-proxy-n0pld                            1/1       Running   0          31m     bxb-ds-52
kube-system   kube-scheduler-bxb-ds-52                    1/1       Running   0          31m     bxb-ds-52
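A small helper can automate that check (all_running is hypothetical; it just scans the STATUS column of the listing):

```shell
# Read a "kubectl get pods --all-namespaces" listing on stdin and fail
# if any pod's STATUS column (field 4) is not "Running"
all_running() {
    tail -n +2 | awk '$4 != "Running" { bad = 1 } END { exit bad }'
}

kubectl get pods --all-namespaces | all_running && echo "all pods Running"
```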

You can also check that the services are in the service pool defined:

[root@bxb-ds-52 calico]# kubectl get svc --all-namespaces -o wide
NAMESPACE     NAME          CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       kubernetes                 <none>        443/TCP         32m       <none>
kube-system   calico-etcd                <nodes>       6666/TCP        8m        k8s-app=calico-etcd
kube-system   kube-dns                   <none>        53/UDP,53/TCP   31m       name=kube-dns

Now, you should be able to use kubectl to apply manifests for containers (I did one with NGINX), and verify that the container can ping other containers, the host, and other nodes on the host’s network.
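For reference, a minimal manifest along the lines of what I mean (names are arbitrary; the image is stock nginx):

```shell
# Write a bare-bones NGINX pod manifest and apply it
cat <<EOF > nginx-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx
EOF

kubectl apply -f nginx-test.yaml

# Once it's Running, grab its pool IP and try pinging it from the host
kubectl get pod nginx-test -o wide
```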

What’s Next

I want to try to…

  • Join a second node and see if containers are placed there correctly.
  • Retry this process from scratch, to make sure this blog reported all the steps.



Posted February 14, 2017 by pcm in category "Kubernetes"