July 13

KubeAdm Docker in Docker

In several of my blog posts, I’ve mentioned using KubeAdm to start up a cluster and then do some development work. Some of the Kubernetes instructions mention using local-up-cluster.sh to bring up a single local cluster.

An alternative is to use Docker in Docker (DinD), where a master and two minion nodes are brought up as containers on the host. Inside these “node” containers, the cluster components run as containers themselves. For example, in the kube-master container, the controller manager, API server, scheduler, etc. containers will be running.

DinD supports both local and remote workflows.

 

Using a VM

To run this in a VM (I used Vagrant/VirtualBox on a Mac), you’ll need to set up Ubuntu 16.04 (server, in my case). I tried this with CentOS 7, but DinD failed to come up (see below).
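For reference, here is a minimal sketch of bringing up such a VM with Vagrant (the ubuntu/xenial64 box name is my assumption of a suitable Ubuntu 16.04 server image; any 16.04 box should work):

# Hypothetical example: bring up an Ubuntu 16.04 VM with Vagrant/VirtualBox
vagrant init ubuntu/xenial64
vagrant up
vagrant ssh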

Once you have the OS installed and have logged in, you can start the process. First, make sure that everything is up-to-date, and install the “extras” package:

sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

 

Next, install Docker by first downloading the keys, and adding the repository:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update -y

 

Check that the install will come from the right place, by running this command:

apt-cache policy docker-ce

 

Install, and check that it is running:

sudo apt-get install -y docker-ce
sudo systemctl status docker

 

To allow the normal user to run docker commands without using sudo, do:

sudo usermod -aG docker ${USER}
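Note that the group change only takes effect for a new login session; you can log out and back in, or start a shell with the new group:

# Start a shell with the docker group active (avoids re-login)
newgrp docker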

 

I checked “docker version” (17.06.0-ce), “docker info | grep Storage” (aufs), and “uname -a” (kernel 4.4.0-51). With everything looking OK, I installed DinD:

mkdir ~/dind
cd ~/dind
wget https://cdn.rawgit.com/Mirantis/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.6.sh
chmod +x dind-cluster-v1.6.sh

 

The cluster can now be brought up with:

./dind-cluster-v1.6.sh up

 

Once this finishes, you have a three-node cluster running as containers on the VM. The output mentions a dashboard available via a browser, but since I was running Ubuntu server, I couldn’t check that out (I was unable to forward it to my host, either). You can access the cluster using kubectl with:

export PATH="$HOME/.kubeadm-dind-cluster:$PATH"
kubectl get nodes
kubectl get pods --all-namespaces
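When you’re done, the same script can tear the cluster down (the down and clean subcommands are from my recollection of the project; check the script’s usage output to confirm):

./dind-cluster-v1.6.sh down
./dind-cluster-v1.6.sh clean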

 

Using a Bare Metal System

The process is identical to that described in the VM case. If the bare metal system is behind a firewall and a proxy is required, you’ll run into issues (see below).

 

Problems Seen (and some workarounds, but unresolved)

Running on native MacOS

If you have Docker for Mac installed, you can bring up DinD on native MacOS. However, IPv6 is not yet supported by Docker for Mac, so I didn’t try this method (but others have for IPv4).

After installing Docker for Mac (to get the docker command), you can wget the DinD script or clone the DinD repo (see below). Then follow the same steps to run DinD as with a VM.

 

Systems behind firewalls

First, docker will have problems talking to external servers to do pulls, etc. You can set up docker for a proxy server by creating the file /etc/systemd/system/docker.service.d/http-proxy.conf with these lines:

[Service]
Environment="HTTP_PROXY=http://<your-proxy-server>:80/"
Environment="HTTPS_PROXY=http://<your-proxy-server:80/"
Environment="NO_PROXY=localhost,127.0.0.1,<your-host-ip>,.<your-domain>"

 

Use your proxy’s host and port number for the HTTP_PROXY/HTTPS_PROXY entries, and your host’s IP and your domain (preceded by a dot) for NO_PROXY. You can then reload the daemon and restart docker:

sudo systemctl daemon-reload
sudo systemctl restart docker
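One way to confirm that docker actually picked up the proxy drop-in (my own check, not part of the original steps) is to ask systemd for the service environment:

# Should list the HTTP_PROXY/HTTPS_PROXY/NO_PROXY values from http-proxy.conf
sudo systemctl show --property=Environment docker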

 

I also set these three environment variables in my .bashrc file, so that they are part of my environment settings. For NO_PROXY, I also included 127.0.0.1, 10.192.0.{1..20}, 10.96.0.{1..20} (the service network), and 10.244.0.{1..20} (some IPs on the pod network).
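As a sketch of what those .bashrc lines can look like (proxy.example.com, 10.0.0.5, and .example.com are placeholders; substitute your own proxy, host IP, and domain):

# Placeholder proxy settings for the shell environment
export HTTP_PROXY=http://proxy.example.com:80/
export HTTPS_PROXY=http://proxy.example.com:80/
# Brace ranges don't expand in a plain assignment, so expand them via echo
NO_PROXY=localhost,127.0.0.1,10.0.0.5,.example.com
NO_PROXY=$NO_PROXY,$(echo 10.192.0.{1..20} 10.96.0.{1..20} 10.244.0.{1..20} | tr ' ' ',')
export NO_PROXY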

With those environment variable settings, I modified the dind-cluster-v1.6.sh script to add the proxy environment variables to the docker run command in the dind::run portion of the script:

  # Start the new container.
  docker run \
         -d --privileged \
         -e HTTP_PROXY="${HTTP_PROXY:-}" -e HTTPS_PROXY="${HTTPS_PROXY:-}" \
         --net kubeadm-dind-net \
…

 

This passes the needed proxy information into the kube-master container, so that external sites can be accessed.

Unfortunately, there is still a problem. The kube-master container’s docker is not set up for proxy access, so pulls fail from inside the container. You can look at the docker logs and see the pulls failing.

A workaround (hack) for now is to add the same http-proxy.conf file to the kube-master container, reload the docker daemon there, and restart docker. Eventually, the API server (which was previously exiting) comes up, along with the rest of the cluster.
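Roughly, that hack looks like the following (my commands, not from the project; this assumes the kube-master container runs systemd and that the host’s http-proxy.conf values are also valid inside the container):

# Copy the proxy drop-in into kube-master and restart its (inner) docker
docker exec kube-master mkdir -p /etc/systemd/system/docker.service.d
docker cp /etc/systemd/system/docker.service.d/http-proxy.conf \
    kube-master:/etc/systemd/system/docker.service.d/http-proxy.conf
docker exec kube-master systemctl daemon-reload
docker exec kube-master systemctl restart docker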

I suspect that the same issue will occur for all the (inner) containers, so we need a solution that sets up docker correctly for a proxy environment.

 

Using CentOS 7

I have not been successful with this, trying either a VM or bare metal. As DinD is starting up, I see a docker failure. Inside the kube-master container, docker has exited and displays the message “Error starting daemon: error initializing graphdriver: driver not supported”.

Doing some investigation, I see that on the (outer) host, CentOS is using the “devicemapper” storage driver (versus “aufs” for Ubuntu). As of this writing, this is the only driver supported. Inside the kube-master container, the storage driver is “vfs”, which, via the scripts, is using “overlay2” (the same as what Ubuntu uses). However, the OS is RHEL 4.8.5. It appears that this driver is not supported.

Update: As of commit 477c3e3, this should be working (I haven’t tested yet). They changed the driver from “overlay2” to “overlay”.

 

Building and Running DinD From Sources

Instead of using the prebuilt scripts, you can clone the DinD repo:

cd
git clone https://github.com/Mirantis/kubeadm-dind-cluster.git ~/dind
cd dind

 

The following environment variables should be set (and you’ll need a clone of the Kubernetes repo) to cause things to be built as part of bringing up a cluster:

export BUILD_KUBEADM=y
export BUILD_HYPERKUBE=y
./dind-cluster.sh up

 

You’ll need to do some hacking (as of this writing) to make this work. First, there is an issue with docker 17.06-ce, where the “docker wait” command hangs if the container doesn’t exist. The workaround for now is to fall back to docker 17.03 instead of 17.06. You can follow the instructions on the Docker site, based on your operating system.

For Ubuntu, you can do “sudo apt-get install docker-ce=<version>” (I’m not sure whether that version string will be just 17.03). I didn’t do that, and instead hacked (as a temporary fix) the destroy_container() function in the Kubernetes build/common.sh file.
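A guard along these lines avoids the hang (this is a sketch only, not the actual build/common.sh function; it simply skips “docker wait” when the container doesn’t exist):

# Sketch: tear down a build container without hanging on "docker wait"
destroy_container() {
  local container="$1"
  if docker inspect "${container}" >/dev/null 2>&1; then
    docker kill "${container}" >/dev/null 2>&1 || true
    docker wait "${container}" >/dev/null 2>&1 || true
  fi
  docker rm -f "${container}" >/dev/null 2>&1 || true
}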

Second, the dind-cluster.sh script (and the fixed/dind-cluster-v1.5.sh and fixed/dind-cluster-v1.7.sh scripts called from it) have a line:

go run hack/e2e.go --v --test -check_version_skew=false --test_args='${test_args}'"

 

Apparently, the -check_version_skew argument has been changed to -check-version-skew. You can alter the script(s) to fix this issue.
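One way to make that change across the scripts (assuming you’re at the top of the kubeadm-dind-cluster clone) is a quick sed:

# Rename the e2e flag in the cluster scripts
sed -i 's/-check_version_skew/-check-version-skew/' \
    dind-cluster.sh fixed/dind-cluster-v1.5.sh fixed/dind-cluster-v1.7.sh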


