Istio, Kubernetes with Load Balancer, on Bare Metal…Oh My!
V1.0 01/23/2018
I found a load balancer that works on bare metal and decided to do a quick write-up of my findings. This blog assumes that you have a basic understanding of how to bring up Kubernetes and Istio, so I won't go into the nitty-gritty details of those steps.
Preparations
The following lists the versions used (others may work; this is just what I used) and the basic infrastructure.
For hardware, I used two Cisco UCS blades as the hosts for my cluster, with one acting as master and one acting as a minion. On each system, the following was installed/set up:
- Ubuntu 16.04 64 bit server OS.
- Go version 1.9.2.
- KubeAdm, kubelet, and kubectl v1.9.2.
- Docker version 17.03.2-ce.
- Account set up on hub.docker.com for docker registry.
- Using Istio master branch, cloned on January 22nd 2018 (commit 23306b5)
- Hosts on a lab network with external access.
- Four available IPs for the external IP pool.
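For reference, here is a rough sketch of how the Kubernetes tooling can be installed on each Ubuntu 16.04 host, following the standard kubeadm install docs (the pinned package versions match the list above; this is an illustration, not the exact commands I ran):

# Add the Kubernetes apt repository and install the pinned versions.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet=1.9.2-00 kubeadm=1.9.2-00 kubectl=1.9.2-00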
Step 1: Bring Up KubeAdm
For Kubernetes, I used the reference bridge plugin, which needs a CNI config file and a static route on each host. On the minion, I did this:
cat >/etc/cni/net.d/cni2.conf<<EOT
{
    "cniVersion": "0.3.0",
    "name": "dindnet",
    "type": "bridge",
    "bridge": "dind0",
    "isDefaultGateway": true,
    "ipMasq": false,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
            [
                {
                    "subnet": "10.193.0.0/16",
                    "gateway": "10.193.0.1"
                }
            ]
        ]
    }
}
EOT
sudo ip route add 10.192.0.0/16 via <ip-of-master>
On the master, I did:
cat >/etc/cni/net.d/cni2.conf<<EOT
{
    "cniVersion": "0.3.0",
    "name": "dindnet",
    "type": "bridge",
    "bridge": "dind0",
    "isDefaultGateway": true,
    "ipMasq": false,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
            [
                {
                    "subnet": "10.192.0.0/16",
                    "gateway": "10.192.0.1"
                }
            ]
        ]
    }
}
EOT
sudo ip route add 10.193.0.0/16 via <ip-of-minion>
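As a quick sanity check (my addition, not part of the original setup), you can confirm the static routes are in place on each host:

# On the master: route to the minion's pod subnet
ip route | grep 10.193.0.0
# On the minion: route to the master's pod subnet
ip route | grep 10.192.0.0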
On the master, I created this kubeadm.conf file, which has the configuration lines needed for Istio (and specifies the IP of the master as the advertise address):
cat >kubeadm.conf<<EOT
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.9.0
api:
  advertiseAddress: "<ip-of-master>"
networking:
  serviceSubnet: "10.96.0.0/12"
tokenTTL: 0s
apiServerExtraArgs:
  insecure-bind-address: "0.0.0.0"
  insecure-port: "8080"
  runtime-config: "admissionregistration.k8s.io/v1alpha1"
  feature-gates: AllAlpha=true
EOT
With all the pieces in place, the master node was brought up with:
sudo kubeadm init --config kubeadm.conf
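The tail end of the init output is a join command for the minion; it looks roughly like this (the token and hash are placeholders that come from your own init output):

sudo kubeadm join --token <token> <ip-of-master>:6443 --discovery-token-ca-cert-hash sha256:<hash>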
The minion was then joined using that command (run with sudo). Back on the master, I did the obligatory commands to access the cluster with kubectl, and made sure everything was up OK:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
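Verifying the cluster is as simple as listing the nodes and the pods; both nodes should be Ready and all kube-system pods Running:

kubectl get nodes
kubectl get pods --all-namespaces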
Step 2: Start Up Load Balancer
I cloned the repo for MetalLB and then applied the metallb.yaml file, but you can do what the install page shows:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.3.1/manifests/metallb.yaml
I decided to use the ARP method, instead of BGP, as the setup is super easy. Using the example they provided, I created this config file:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: arp
      arp-network: <start-ip-of-subnet>/26
      cidr:
      - <start-ip-of-pool>/30
If my systems were on a /24 subnet, I wouldn't have needed the arp-network line. Under "cidr", the start address of the pool and its prefix are specified; the /30 prefix used here gives the four external IPs mentioned in the preparations.
I applied the yaml file using kubectl and made sure that the metallb controller and speaker pods were running. You can check the log of the speaker to ensure that things started up OK, and later to see if IPs are being assigned:
kubectl logs -l app=speaker -n metallb-system
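A quick way to confirm the controller and speaker pods are up, in addition to tailing the speaker log:

kubectl get pods -n metallb-system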
Step 3: Start Up Istio
I followed the instructions on the Istio Dev Guide page to build and start up Istio.
The repo was pulled and a branch created based on the latest from the master branch. I built the code, pushed the images to my Docker registry, ran updateVersion.sh, and then started Istio with:
kubectl apply -f install/kubernetes/istio.yaml
kubectl apply -f install/kubernetes/istio-initializer.yaml
I verified that everything was running, that the istio-ingress service was using the LoadBalancer type, and that it had the first IP address from the pool defined for MetalLB as its external IP. The MetalLB speaker log will show that an IP was assigned.
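These checks amount to something like the following (assuming istio.yaml installs into the istio-system namespace, which is where the istio-ingress service lives); the EXTERNAL-IP column for istio-ingress should show the first address of the MetalLB pool rather than <pending>:

kubectl get pods -n istio-system
kubectl get svc istio-ingress -n istio-system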
Step 4: BookInfo
We would be remiss if we didn't start up the BookInfo application and then try to access the product page using the external address:
kubectl apply -f samples/bookinfo/kube/bookinfo.yaml
Once the app was running, I opened a browser on my laptop and went to http://<external-ip>/productpage/ to view the app!
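The same check can be done from the command line; a 200 status code means the ingress is routing to the product page (the curl flags just print the HTTP status):

curl -o /dev/null -s -w "%{http_code}\n" http://<external-ip>/productpage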
Ramblings
The MetalLB setup was painless and worked well. I haven't tried a /31 for the pool, but it does work with a /29 and I suspect larger sizes would too. I also didn't try using BGP instead of ARP.