March 7

Kubernetes with Contiv plugin on bare-metal

Preparations

I used three Cisco UCS systems as the basis for the Kubernetes cluster and followed the preparation and proxy steps in the blog https://blog.michali.net/2017/02/14/kubernetes-on-a-lab-system-behind-firewall/ to get the systems ready for use.

With that setup, each system had a pair of VNIC interfaces (a0, a1), joined into a bonded interface (a), with an associated bridge (br_api) on the UCS. Two physical interfaces go to a pair of Top of Rack (ToR) switches set up as a port-channel, for connectivity between systems. It could have been done with a single interface, but that’s what I already had in the lab.

For Contiv, we want a second interface to use for the tenant network, so I modified the configuration of each of the three systems to add another pair of interfaces (t0, t1) and a master interface (t) to bond them together. In the CIMC console for the UCS systems, I added another pair of VNICs, t0 and t1, selected trunk mode, and made sure the MAC addresses matched the HWADDR in the /etc/sysconfig/network-scripts/ifcfg-t* files in CentOS. Again, a single interface could be used instead of a bonded pair like I had.
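
For reference, here is a rough sketch of what the tenant-interface ifcfg files can look like on CentOS. The HWADDR values are placeholders (they must match the VNIC MACs from CIMC), and the bonding mode is an assumption that should match whatever the ToR port-channel expects:

# /etc/sysconfig/network-scripts/ifcfg-t0  (ifcfg-t1 is the same, apart from DEVICE and HWADDR)
DEVICE=t0
HWADDR=00:25:b5:xx:xx:xx
ONBOOT=yes
BOOTPROTO=none
MASTER=t
SLAVE=yes

# /etc/sysconfig/network-scripts/ifcfg-t  (bond master; no IP address is assigned, Contiv uses this interface)
DEVICE=t
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none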

Since these are lab systems behind a firewall, I modified the no_proxy entries in .bashrc on each node to use:

printf -v lan '%s,' "10.87.49.77,10.87.49.78,10.87.49.79"
printf -v service '%s,' 10.254.0.{2..253}
export no_proxy="cisco.com,${lan%,},${service%,},127.0.0.1";


Effectively, this covers all of the node IPs (10.87.49.x) and the service subnet IPs (10.254.0.0/24 – note this is smaller than the default /16 subnet). In addition, on each system, I made sure there was an /etc/hosts entry for each of the three nodes I’m using.
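
For example (the hostnames here are illustrative; use the actual names of your nodes):

# /etc/hosts entries added on each node
10.87.49.77   devstack-77
10.87.49.78   devstack-78
10.87.49.79   devstack-79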

Besides installing Kubernetes, I also installed “net-tools” on each node.
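
For reference, a sketch of the installs on each CentOS node, assuming the Kubernetes yum repository has already been set up per the kubeadm getting-started guide:

yum install -y docker kubelet kubeadm kubectl kubernetes-cni net-tools
systemctl enable docker kubelet
systemctl start docker kubelet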


Kubernetes Startup

kubeadm is used to start up the cluster with the IP of the management interface on this master node, forcing Kubernetes v1.4.7, and using the default service network but with a smaller /24 range (so that fewer no_proxy entries are needed):

kubeadm init --api-advertise-addresses=10.87.49.77 --use-kubernetes-version v1.4.7 --service-cidr 10.254.0.0/24
kubectl taint nodes --all dedicated-
kubectl get pods --all-namespaces -o wide


Save the join command output, so that the worker nodes can be joined later.

All of the services should be up, except for DNS, which will be removed, since this first trial will use L2. We’ve removed the taint on this master node, so it too can be a worker.


Contiv Preparation

We’ll pull down the version of Contiv that we want to work with and run the install.sh script:

export VERSION=1.0.0-beta.3
curl -L -O https://github.com/contiv/install/releases/download/$VERSION/contiv-$VERSION.tgz
tar xf contiv-$VERSION.tgz
cd contiv-$VERSION

./install/k8s/install.sh -n 10.87.49.77 -v t


This will use the 10.87.49.77 node (the one I’m on, which will be the netmaster), and will use interface t (the tenant interface that I created above) as the tenant interface. The script installs netctl in /usr/bin, so that it can be used for network management, and it builds a .contiv-yaml file in the directory and applies it to the cluster.

Note that there are now Contiv pods running, and the DNS pod is gone.


Trying It Out

On each of the worker nodes, run the join command. Verify on the master that the nodes are ready (kubectl get nodes) and that Contiv netplugin and proxy pods for each of the workers are running (kubectl get pods --all-namespaces). On the master, there should be kubernetes and kube-dns services running (kubectl get svc --all-namespaces).
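
The join command saved from kubeadm init looks roughly like this (the token is a placeholder), and these are the verification commands on the master:

kubeadm join --token=<token-from-init-output> 10.87.49.77

kubectl get nodes
kubectl get pods --all-namespaces
kubectl get svc --all-namespaces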

Using netctl, create a default network using VXLAN. First, an environment variable must be set so that netctl can communicate with the master:

export NETMASTER=http://10.87.49.77:9999
netctl net create -t default --subnet=20.1.1.0/24 default-net
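
To confirm the network was created, list the Contiv networks:

netctl network ls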


Next, create a manifest for some pods and apply it. I used nginx with four replicas, and verified that the pods were all running, dispersed over the three nodes, and all had IP addresses. I could ping from pod to pod, but not from node to pod (expected, as that is not supported at this time).
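
As a sketch of what that manifest can look like (the name and labels here are just illustrative), a Deployment with four nginx replicas can be created and checked like this:

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: demo-labels
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl get pods -o wide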

If desired, you can create a network using VLANs and then add a label “io.contiv.network: network-name” to the manifest to create pods on that network. For example, I created a network with VLAN 3280 (which was an allowed VLAN on the ToR port-channel):

netctl network create --encap=vlan --pkt-tag=3280 --subnet=10.100.100.215-10.100.100.220/27 --gateway=10.100.100.193 vlan3280


Then, in the manifest, I added:

metadata:
  ...
  labels:
    app: demo-labels
    io.contiv.network: vlan3280


Once the manifest is applied, the pods should come up and have IP addresses. You can docker exec into the pods and ping from pod to pod. As with VXLAN, I could not ping from node to pod.
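
For example, on one of the nodes (the container ID and the target pod IP, taken from kubectl get pods -o wide, will differ in your setup):

docker ps | grep nginx
docker exec -it <container-id> ping -c 3 <other-pod-ip>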

Note: I did have a case where pods on one of the nodes were not getting an IP address and were showing this error, when doing a “kubectl describe pod”:

  6m        1s      105 {kubelet devstack-77}           Warning     FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "nginx-vlan-2501561640-f7vi1_default" with SetupNetworkError: "Failed to setup network for pod \"nginx-vlan-2501561640-f7vi1_default(68cd1fb3-0376-11e7-9c6d-003a7d69f73c)\" using network plugins \"cni\": Contiv:Post http://localhost/ContivCNI.AddPod: dial unix /run/contiv/contiv-cni.sock: connect: no such file or directory; Skipping pod"


It looks like there were OVS bridges hanging around from failed attempts. The Contiv folks mentioned this pull request for the issue – https://github.com/contiv/install/pull/62/files#diff-c07ea516fee8c7edc505b662327701f4. Until this change is available, the contiv.yaml file can be modified to add the -x option. Just go to ~/contiv/contiv-$VERSION/install/k8s/contiv.yaml and add the -x option to the netplugin args:

        - name: contiv-netplugin
          image: contiv/netplugin:__CONTIV_VERSION__
          args:
            - -pkubernetes
            - -x
          env:


Once this file is modified, you can go through the Contiv Preparation steps above and run the install.sh script with this change in place.


Update: I was using a service CIDR of 10.96.0.0/24, but the Contiv folks indicated that I should be using 10.254.0.0/24 (I guess this is useful for Kubernetes services using service type ClusterIP). I updated this page, but haven’t retested yet.

