February 17

Kubernetes/Calico plugin with IPv6 on bare-metal

These notes document a setup for investigating Kubernetes with IPv6 in a lab environment. It builds on the earlier notes for using KubeAdm for Kubernetes with the Calico plugin on a bare-metal system that is behind a firewall in a lab (https://blog.michali.net/2017/02/14/kubernetes-on-a-…-behind-firewall).

These notes should work for Ubuntu 16.04, in addition to CentOS, which is what was used in that blog.

Preparation

In the prior blog, the no_proxy environment variable was set up and the cluster was initialized using an alternate subnet (10.20.30.0/24). Later, I found that it is easier to keep the original subnet and just reduce its size; I used that alternative setup, which was added to that blog as an update.

To switch to IPv6, you'll need the calicoctl command. The easiest way to get it is to install the prebuilt calicoctl binary (as root):

curl -L --silent https://github.com/projectcalico/calico-containers/releases/download/v1.0.0/calicoctl -o /usr/local/bin/calicoctl
chmod +x /usr/local/bin/calicoctl

Alternatively, you can install Go, pull the sources, and build and install calicoctl yourself (see the end of this blog for details).

Starting Up the Cluster

When the cluster is initialized, you can use:

kubeadm init --api-advertise-addresses=10.87.49.77 --service-cidr=10.96.0.0/24


Before applying calico.yaml, there are additional changes needed. As mentioned in the other blog, the etcd_endpoints and ippool settings need to be modified. Beyond that, you need to make sure that you have the CNI code with the fix from commit b8fc5928 (merged 2/16/2017), which fixes issue #273. I did that by changing the CNI image line:

     image: quay.io/calico/cni:latest


This fixes a problem where some kernels were not honoring the FlagUp option when creating the veth interfaces.

From this point on, you can apply calico.yaml and then follow the steps in https://blog.michali.net/2017/02/11/using-kubeadm-and-calico-plugin-for-ipv6-addresses/ under “Reconfiguring for IPv6” to enable IPv6 for future pod creation. Remember to use “kubectl get svc --all-namespaces” to obtain the IP and port for etcd and to set the ETCD_ENDPOINTS environment variable; the calicoctl command will run without it, but it will not be accessing the correct key-store entries.
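If you want to script that lookup, something like this should work (a sketch, assuming the service is named calico-etcd in the kube-system namespace, as it is in the manifests used here):

# Pull the cluster IP and port of the calico-etcd service, then export
# the endpoint that calicoctl will use to reach the key-store.
ETCD_IP=$(kubectl get svc calico-etcd --namespace=kube-system -o jsonpath='{.spec.clusterIP}')
ETCD_PORT=$(kubectl get svc calico-etcd --namespace=kube-system -o jsonpath='{.spec.ports[0].port}')
export ETCD_ENDPOINTS=http://${ETCD_IP}:${ETCD_PORT}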

Notes

In the pod, I see these interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
 link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
 link/ether 6a:76:fc:00:4b:cc brd ff:ff:ff:ff:ff:ff
 inet6 2001:2::6d47:e62d:8139:d1c0/128 scope global
 valid_lft forever preferred_lft forever
 inet6 fe80::6876:fcff:fe00:4bcc/64 scope link
 valid_lft forever preferred_lft forever

There are these routes:

2001:2::6d47:e62d:8139:d1c0 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via fe80::c8e9:11ff:fe2c:c809 dev eth0 metric 1024

On the host, there is this related interface, with its link-local address:

22: cali1500372f1da@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
 link/ether ca:e9:11:2c:c8:09 brd ff:ff:ff:ff:ff:ff link-netnsid 3
 inet6 fe80::c8e9:11ff:fe2c:c809/64 scope link
 valid_lft forever preferred_lft forever

With these related routes:

2001:2::6d47:e62d:8139:d1c0 dev cali1500372f1da metric 1024
blackhole 2001:2::6d47:e62d:8139:d1c0/122 dev lo proto bird metric 1024 error -22

I did see one system where I could not ping between pods, or between a pod and the host, using IPv6 addresses. What I noticed was that, on that system, the cali# interfaces created, although up, did not have a link-local address. The pod had a route via a link-local address, which on another (working) system belonged to the corresponding cali# interface. I need to investigate what is wrong on this system.
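A quick way to spot that condition is to walk the cali# interfaces and print any link-local address each one has (a sketch; the ${i%%@*} strips the @ifN suffix from the names that "ip -o link" reports):

# Show the link-local (scope link) IPv6 address of each cali* interface.
# An interface that prints nothing matches the failure described above.
for i in $(ip -o link show | awk -F': ' '/cali/ {print $2}'); do
    dev=${i%%@*}
    echo "=== ${dev} ==="
    ip -6 addr show dev "${dev}" scope link
done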

Manually Building Calicoctl

If you want to do this the hard way, you can manually build and install the calicoctl tool. First, I installed Go on the system:

curl -O http://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz

tar -xvf go1.7.4.linux-amd64.tar.gz
sudo mv go /usr/local

In ~/.bashrc add:

export PATH=/usr/local/go/bin:$PATH

To use it, set GOPATH to the top of a work area for source code and add its bin directory to the path in your .bashrc file (re-source the file so that your environment is up to date):

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

Now, calicoctl can be installed (detailed instructions https://github.com/projectcalico/calicoctl). Here is a summary of the steps:

mkdir -p ~/go/src/github.com/projectcalico
git clone https://github.com/projectcalico/calicoctl.git $GOPATH/src/github.com/projectcalico/calicoctl

Install glide:

mkdir $GOPATH/bin
curl https://glide.sh/get | sh
cd ~/go/src/github.com/projectcalico/calicoctl
glide install --strip-vendor
make binary
cd $GOPATH
go build src/github.com/projectcalico/calicoctl/calicoctl/calicoctl.go
mv calicoctl bin/
sudo cp bin/calicoctl /usr/local/bin
sudo chmod 755 /usr/local/bin/calicoctl


February 14

Installing Go

To install Go 1.7.4 on Linux, I did these steps…

curl -O http://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
tar -xvf go1.7.4.linux-amd64.tar.gz
sudo mv go /usr/local

In ~/.bashrc add:

export PATH=/usr/local/go/bin:$PATH
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

Re-source the .bashrc to obtain the settings. Packages can be installed like this…

go get -u github.com/tools/godep
go get -u github.com/jteeuwen/go-bindata/go-bindata
go get -u github.com/nsf/gocode
go get golang.org/x/tools/cmd/guru
go get github.com/rogpeppe/godef
February 14

Kubernetes on a lab system behind firewall

After numerous tries, I think I finally came across a setup that allows me to run Kubernetes (via KubeAdm) with the Calico plugin on a bare-metal system that is behind a firewall and needs a proxy to access the outside world. This blog describes the process I used to get this working.

Preparation for CentOS

On the bare-metal system (a Cisco UCS) running CentOS 7.3, the needed packages must be installed. The first step is to add the Kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

I ran “yum update -y” to update the system. Next, the packages need to be installed. Note: I had set up this system weeks before, so hopefully I’ve captured all the steps (if not, let me know):

setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni

I do recall hitting a conflict at one point with the docker install and what was already on the system (maybe from mucking around installing things on this system before). In any case, make sure docker is installed and working; on my system, “docker version” shows 1.13. You may want to check “docker version” first and, if docker is already installed, skip reinstalling it.
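One way to do that check on CentOS (a sketch):

# Install docker only if it is not already present, then make sure the
# daemon is enabled and running before checking the version.
rpm -q docker || yum install -y docker
systemctl enable docker
systemctl start docker
docker version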

Preparation for Ubuntu 16.04

For Ubuntu, the Kubernetes repo needs to be added along with keys, and then everything installed.

sudo su
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo deb http://apt.kubernetes.io/ kubernetes-xenial main >> /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Proxy Setup

With everything installed (I hope :)), I next set up the proxy, with http_proxy and https_proxy (lower- and uppercase environment variables) pointing to the proxy server, and no_proxy set to the IPs that should not go through the proxy server. For this system, no_proxy had the host IP, 127.0.0.1, the IPs for the IPv4 pool, and the IPs for the services. The defaults use large subnets, so I reduced them to help make the no_proxy setting more manageable.

For the IPv4 pool, I’m using 192.168.0.0/24 (reduced size from default), and for the service IP subnet, I’m using 10.20.30.0/24 (instead of 10.96.0.0/12). I used these lines in .bashrc to create the no_proxy setting:

printf -v lan '%s,' 10.86.7.206
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.20.30.{1..253}
export no_proxy="cisco.com,${lan%,},${service%,},${pool%,},127.0.0.1";
export NO_PROXY=$no_proxy

Make sure you’ve got these environment variables sourced.
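For example:

source ~/.bashrc
env | grep -i proxy    # should show http_proxy, https_proxy, and the long no_proxy list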

Update: Alternative Proxy Setup

You can keep the default 10.96.0.10 IP in 10-kubeadm.conf and instead use “--service-cidr=10.96.0.0/24” on the kubeadm init line to reduce the size of the subnet.

In the .bashrc file, use this for service pool:

printf -v lan '%s,' 10.87.49.77
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.96.0.{1..253}
export no_proxy="cisco.com,${lan%,},${service%,},${pool%,},127.0.0.1";
export NO_PROXY=$no_proxy

Calico.yaml Configuration

Obtain the latest calico.yaml. I used the one from a tutorial (https://github.com/gunjan5/calico-tutorials/blob/master/kubeadm/calico.yaml, commit a10bfd1d), but you may have success with http://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/; I just haven't tried it or sorted out the differences.

Two changes are needed in this file: etcd_endpoints needs to specify the host IP, and the ippool cidr should be changed from /16 to /24, so that we have a manageable number of no_proxy entries.
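As a rough sketch, the two fragments end up looking something like this (the exact layout depends on the manifest version, and 10.86.7.206 is this lab host's IP):

# In the ConfigMap section of calico.yaml: point etcd at the host IP.
etcd_endpoints: "http://10.86.7.206:6666"

# In the ippool definition: shrink the pool from /16 to /24.
cidr: 192.168.0.0/24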

Since we are changing the default subnet for services, I changed /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to use 10.20.30.10 for the --cluster-dns argument in the KUBELET_DNS_ARGS environment setting. Be sure to restart the systemd service (systemctl daemon-reexec) after making this change. Otherwise, when you start up the cluster, the services will show the new 10.20.30.x IP addresses, but the kubelet process will still have the default --cluster-dns value of 10.96.0.10. This threw me for a while, until Ghe Rivero mentioned it on the KubeAdm slack channel (thanks!).
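A sketch of that edit and restart, assuming the file still has the default value:

# Swap the default cluster-dns IP for one in our service subnet, then
# re-exec systemd and restart the kubelet to pick up the change.
sed -i 's/--cluster-dns=10.96.0.10/--cluster-dns=10.20.30.10/' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reexec
systemctl restart kubelet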

Update: If you stick with 10.96.0.10 for cluster-dns, you don't need to change 10-kubeadm.conf (skip the edit described above).

Are We There Yet?

Hopefully, I have everything prepared (I’ll know next time I try to set up from scratch). If so, here are the steps used to start things up (as root user!):

kubeadm init --api-advertise-addresses=10.86.7.206 --service-cidr=10.20.30.0/24

Update: If you use the alternative method for the service subnet, you'll use --service-cidr=10.96.0.0/24 instead, and the IPs will be different in the “kubectl get svc” output below.

This will display the kubeadm join command used to add other nodes to the cluster (I haven't tried that yet with this setup).

kubectl taint nodes --all dedicated-
kubectl apply -f calico.yaml
kubectl get pods --all-namespaces -o wide

At this point (after some time), you should be able to see that all the pods are up and have the IP address of the host, except for the DNS pod, which will have an IP from the 192.168.0.0/24 pool:

[root@bxb-ds-52 calico]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   calico-etcd-wk533                           1/1       Running   0          7m        10.86.7.206     bxb-ds-52
kube-system   calico-node-qxh84                           2/2       Running   0          7m        10.86.7.206     bxb-ds-52
kube-system   calico-policy-controller-2087702136-n19jf   1/1       Running   0          7m        10.86.7.206     bxb-ds-52
kube-system   dummy-2088944543-3sdlj                      1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   etcd-bxb-ds-52                              1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-apiserver-bxb-ds-52                    1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-controller-manager-bxb-ds-52           1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-discovery-1769846148-lb51s             1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-dns-2924299975-c95bg                   4/4       Running   0          31m       192.168.0.128   bxb-ds-52
kube-system   kube-proxy-n0pld                            1/1       Running   0          31m       10.86.7.206     bxb-ds-52
kube-system   kube-scheduler-bxb-ds-52                    1/1       Running   0          31m       10.86.7.206     bxb-ds-52

You can also check that the services are in the service pool defined:

[root@bxb-ds-52 calico]# kubectl get svc --all-namespaces -o wide

NAMESPACE     NAME          CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       kubernetes    10.20.30.1    <none>        443/TCP         32m       <none>
kube-system   calico-etcd   10.20.30.2    <nodes>       6666/TCP        8m        k8s-app=calico-etcd
kube-system   kube-dns      10.20.30.10   <none>        53/UDP,53/TCP   31m       name=kube-dns

Now, you should be able to use kubectl to apply manifests for containers (I did one with NGINX), and verify that the container can ping other containers, the host, and other nodes on the host’s network.
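For example, from the host you can ping the DNS pod's IP from the pool (substitute the IP shown in your own “get pods” output):

ping -c 3 192.168.0.128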

What’s Next

I want to try to…

  • Join a second node and see if containers are placed there correctly.
  • Retry this process from scratch, to make sure this blog reported all the steps.


February 13

Update on KubeAdm with Calico

After playing with this a bit, I made a few tweaks, based on some discussions with Calico folks. First, the host IPs being used for the nodes are 10.96.0.101 and 10.96.0.102, which are within the same subnet that Kubernetes uses for the service IPs (10.96.0.0/12). To get around this, I modified the Vagrantfile to use different IPs for the host nodes it creates.

An alternative is to use the “--service-cidr” option on “kubeadm init” to pick a different range for the service subnet, and to modify /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to set the DNS IP to be within that range (restarting systemd to apply the change). If you use a manifest from master, you may need additional settings (setting clusterIP; I haven't tried that). This is a more manual method, though.

For my tests, I changed the Vagrantfile as follows:

primary_ip = "10.20.30."

...

      ip = "#{primary_ip}#{i * 10}"

The calico.yaml file needs to be modified too, to use 10.20.30.10 as the etcd_endpoints IP instead of 10.96.0.101. From this point, you can do a “vagrant up” and then follow the rest of the steps to create the cluster, switch to IPv6, and create containers.
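One way to make that substitution (a sketch; the -i.bak keeps a backup copy of the original):

sed -i.bak 's/10.96.0.101/10.20.30.10/' calico.yaml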

Note: these same changes apply to the CentOS Vagrantfile in the blog.

February 11

Vagrantfile for KubeAdm/Calico using CentOS

This is an alternate Vagrantfile that uses CentOS 7, instead of Ubuntu 16.04, for creating a two-node KubeAdm cluster for Kubernetes with the Calico plugin. See https://blog.michali.net/2017/02/11/using-kubeadm-and-calico-plugin-for-ipv6-addresses/ for info on how it is used. Besides the different image, the provisioning has some minor changes. I'll eventually put the file contents into a github repo.

February 11

Using KubeAdm and Calico plugin for IPv6 addresses

In an attempt to bring up a container with an IPv6 address, I've hit on a method that works (it is a bit manual), and I thought I'd document the process used.

Environment

  • Using Virtualbox 5.0.32 on a MacBook Pro
  • Vagrant version 1.8.7 (it says the latest is 1.9.1, but I didn't update)
  • 16 GB RAM (though the two VMs created are 2GB each, so should work for many cases)

I'm assuming that you have Virtualbox and Vagrant installed and have a basic understanding of how they both work. If not, Google is your friend… :^)

Preparations

I started with the Vagrantfile from Gunjan Patel, who was instrumental in helping me get this working (huge props!):

mkdir -p ~/workspace/k8s
cd !$
git clone https://github.com/gunjan5/calico-tutorials.git
cd calico-tutorials/kubeadm

With this repo (commit a10bfd1d), I made this small change to the end of the Vagrantfile, just because I’m lazy and didn’t want to manually “vagrant scp” the file over to the guest VM:

+ vm_name = "%s-01" % instance_name_prefix
+ config.vm.define vm_name do |host|
+   host.vm.provision "file", source: "calico.yaml", destination: "calico.yaml"
+ end

From this point, I did a “vagrant up” and sat back waiting for it to create, boot, and provision the two nodes.

Bringing Up The Cluster

Once startup is done, I did the following on node-01 (using “vagrant ssh node-01”):

sudo su
kubeadm init --api-advertise-addresses=10.96.0.101

This starts up KubeAdm using the eth1 interface, which is 10.96.0.101 on this node and .102 on the other node. The eth0 interface, which is set up for NAT, has the same IP on both nodes, so we can't use that one.

The cluster will be created and you'll get a message with a unique join line like this:

You can now join any number of machines by running the following on each node:

kubeadm join --token=8d4ac7.bba57d4de9d378ec 10.96.0.101

Note that join line, as you’ll need it for the other node. Next, taint the node so that it can be a worker, and apply the calico.yaml.

kubectl taint nodes --all dedicated-
kubectl apply -f calico.yaml
kubectl get pods --all-namespaces -o wide

Reconfiguring For IPv6

Once all the pods are up (you can check with “kubectl get pods --all-namespaces -o wide”), you want to create the IPv6 pool. First, though, set up the endpoint so that calicoctl can access the etcd database. You can find the IP address (which can be different on each run) and port (which should be 6666) for etcd with this command:

kubectl get svc --all-namespaces

NAMESPACE     NAME          CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
default       kubernetes    10.96.0.1      <none>        443/TCP          3m
kube-system   calico-etcd   10.108.3.145   <nodes>       6666:32379/TCP   54s
kube-system   kube-dns      10.96.0.10     <none>        53/UDP,53/TCP    3m

With this information, export the etcd endpoint:

export ETCD_ENDPOINTS=http://10.108.3.145:6666

Now, try “calicoctl get ippools” to see the existing IPv4 pool, which should be 192.168.0.0/16. Create an IPv6 pool (I'll use 2001:2::/64 for mine) by using this script, adjusting for the prefix you want:

cat > pool.yaml <<EOT
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: 2001:2::/64
EOT
calicoctl create -f pool.yaml


You can run the “get ippools” command again, to verify that the new pool is there:

root@node-01:/home/vagrant# calicoctl get ippools
CIDR
192.168.0.0/16
2001:2::/64

The final step is to change the CNI config to use IPv6 instead of IPv4. To do this, edit /etc/cni/net.d/10-calico.conf and add the two assign_* lines at the location shown here:

    "ipam": {
        "assign_ipv4": "false",
        "assign_ipv6": "true",
        "type": "calico-ipam"
    },

Trying It Out

Now, you can create a container with “kubectl apply -f foo.yaml”, and it’ll have an IPv6 address. Here is an example with Nginx:

cat > nginx.yaml <<EOT
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOT
kubectl apply -f nginx.yaml

You can verify that it has the right IP address by using “kubectl get pods --all-namespaces -o wide”.
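To go a step further, you can confirm connectivity by pinging a pod's IPv6 address from the node it is running on (the address below is an example; use one from your own “get pods” output):

ping6 -c 3 2001:2::6d47:e62d:8139:d1c0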

Using the Second Node

The first thing to do is have the second node join the cluster. SSH into “node-02” and paste in the join command you saved above.

vagrant ssh node-02
sudo su
kubeadm join --token=8d4ac7.bba57d4de9d378ec 10.96.0.101

You can check that the node has been added by doing “kubectl get nodes” on node-01. At this point, containers created on the second node will be using IPv4 addresses. To use IPv6 addresses, you need to make the same modification to /etc/cni/net.d/10-calico.conf as was done on node-01. Once that is done, containers created on node-02 will have IPv6 addresses.

What’s Next?

I want to try setting up 10-calico.conf and creating the IPv6 pool right from the start, to see if it will work.
