Part IX: Load Balancer and Ingress
Load Balancer
Ref: https://metallb.universe.tf/
In lieu of having a physical load balancer, this cluster will use MetalLB as a load balancer. In my network, I have a block of IP addresses reserved for DHCP, and I picked a range of those IPs to use as load balancer IPs in the cluster.
The first thing to do is to get the latest release of MetalLB:
cd ~/workspace/picluster
poetry shell
mkdir -p ~/workspace/picluster/metallb
cd ~/workspace/picluster/metallb
MetalLB_RTAG=$(curl -s https://api.github.com/repos/metallb/metallb/releases/latest | grep tag_name | cut -d '"' -f 4 | sed 's/v//')
echo $MetalLB_RTAG
0.13.12
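If you want to sanity-check the tag-parsing pipeline without hitting the GitHub API, you can run it against a canned line from the API response (the tag value here is just an example):

```shell
# Simulated line from the GitHub releases API JSON response.
SAMPLE='  "tag_name": "v0.13.12",'
# Same pipeline as above: field 4 (split on double quotes) is the tag,
# and sed strips the leading "v".
VERSION=$(echo "$SAMPLE" | grep tag_name | cut -d '"' -f 4 | sed 's/v//')
echo "$VERSION"   # prints 0.13.12
```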
Obtain the version, install it, and wait for everything to come up:
wget https://raw.githubusercontent.com/metallb/metallb/v${MetalLB_RTAG}/config/manifests/metallb-native.yaml -O metallb-native-${MetalLB_RTAG}.yaml
kubectl apply -f metallb-native-${MetalLB_RTAG}.yaml
kubectl get pods -n metallb-system --watch
kubectl get all -n metallb-system
Everything should be running, but it needs to be configured for this cluster. Specifically, we need to set up and advertise the address pool(s), which can be a CIDR, an address range, and IPv4 and/or IPv6 addresses. For our case, I’m reserving 10.11.12.201 – 10.11.12.210 for load balancer IPs and using L2 advertisement (ipaddress_pools.yaml):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production
  namespace: metallb-system
spec:
  addresses:
  - 10.11.12.201-10.11.12.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advert
  namespace: metallb-system
Apply the configuration, and then examine it:
kubectl apply -f ipaddress_pools.yaml
ipaddresspool.metallb.io/production created
l2advertisement.metallb.io/l2-advert created

kubectl get ipaddresspools.metallb.io -n metallb-system
NAME         AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
production   true          false             ["10.11.12.201-10.11.12.210"]

kubectl get l2advertisements.metallb.io -n metallb-system
NAME        IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
l2-advert

kubectl describe ipaddresspools.metallb.io production -n metallb-system
Name:         production
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>
API Version:  metallb.io/v1beta1
Kind:         IPAddressPool
Metadata:
  Creation Timestamp:  2024-01-17T19:05:29Z
  Generation:          1
  Resource Version:    3648847
  UID:                 38491c8a-fdc1-47eb-9299-0f6626845e82
Spec:
  Addresses:
    10.11.12.201-10.11.12.210
  Auto Assign:      true
  Avoid Buggy I Ps:  false
Events:             <none>
Note: if you don’t want IP addresses auto-assigned, you can add the clause “autoAssign: false” to the “spec:” section of the IPAddressPool.
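For example, a pool that never hands out addresses automatically would look like this (a sketch based on the pool defined above; the pool name here is made up, and services would then have to request an IP from it explicitly):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: static-only          # hypothetical pool name
  namespace: metallb-system
spec:
  addresses:
  - 10.11.12.201-10.11.12.210
  autoAssign: false          # IPs from this pool are only assigned on request
```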
To use the load balancer, change the service type under the “spec:” section from ClusterIP or NodePort to LoadBalancer. For example, to change Grafana from NodePort to LoadBalancer, one would edit the configuration with:
kubectl edit -n monitoring svc/prometheusstack-grafana
This is located at the bottom of the file:
...
spec:
  clusterIP: 10.233.22.171
  clusterIPs:
  - 10.233.22.171
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http-web
    nodePort: 32589
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: prometheusstack
    app.kubernetes.io/name: grafana
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
When you show the service, you’ll see the load balancer IP that was assigned:
kubectl get svc -n monitoring prometheusstack-grafana
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
prometheusstack-grafana   LoadBalancer   10.233.22.171   10.11.12.201   80:32589/TCP   5d23h
Here is a sample deployment (web-app-demo.yaml) to try. It has the LoadBalancer type specified:
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Apply the configuration and check the IP address:
kubectl apply -f web-app-demo.yaml
kubectl get svc -n web
From the command line, you can run “curl http://IP_ADDRESS” to make sure it works. If you want a specific IP address, you can change the above web-app-demo.yaml to add the following line after the type (note the same indentation level):
  type: LoadBalancer
  loadBalancerIP: 10.11.12.205
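Note that the “loadBalancerIP” field is deprecated in recent Kubernetes releases; MetalLB also accepts an annotation on the Service instead. A sketch of the same request via annotation (using the same address; verify the annotation name against the MetalLB docs for your version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
  annotations:
    metallb.universe.tf/loadBalancerIPs: 10.11.12.205  # requested pool address
spec:
  type: LoadBalancer
```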
Uninstalling MetalLB
Before removing MetalLB, you should change any services that are using it back to NodePort or ClusterIP as the type. Then, delete the configuration:
kubectl delete -f metallb-native-${MetalLB_RTAG}.yaml
NGINX Ingress
Ref: https://docs.nginx.com/nginx-ingress-controller/technical-specifications/
Ref: https://kubernetes.github.io/ingress-nginx/deploy/
With the load balancer set up and running, we’ll create an Ingress controller using NGINX. You can view the compatibility chart in the technical specifications reference above to select the NGINX version desired. For our purposes, we’ll use a Helm chart install, so that we have sources and can delete/update CRDs. I’m currently running Kubernetes 1.28, so either version 1.0.2 or 1.1.2 of the Helm chart will work. Let’s pull the charts for 1.1.2:
cd ~/workspace/picluster/
helm pull oci://ghcr.io/nginxinc/charts/nginx-ingress --untar --version 1.1.2
cd nginx-ingress
Install NGINX Ingress with:
helm install my-nginx --create-namespace -n nginx-ingress .
NAME: my-nginx
LAST DEPLOYED: Fri Jan 19 11:14:16 2024
NAMESPACE: nginx-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.
If you want to customize settings, you can add the “--values values.yaml” argument, after first extracting the chart’s options into a file and modifying them. Since we pulled the chart locally, its values can be dumped from the nginx-ingress directory with:

helm show values . > values.yaml
The NGINX service will have an external IP address, as the type is LoadBalancer, and MetalLB will assign an address from the pool (note: you can specify an IP to use in values.yaml).
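For example, pinning the controller’s external address in values.yaml would look something like this (the key name is my reading of the nginxinc chart; verify it against the output of “helm show values”, and the address is an assumed free IP from the MetalLB pool):

```yaml
controller:
  service:
    loadBalancerIP: 10.11.12.202   # assumed unused address from the MetalLB pool
```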
To test this out, we’ll create a web-based app:
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
We can then create an ingress entry for a local (dummy) domain and forward port 8080 to the default port (80) for the app:
kubectl create ingress demo-localhost --class=nginx --rule="demo.localdev.me/*=demo:80"
kubectl port-forward --namespace=nginx-ingress service/my-nginx-nginx-ingress-controller 8080:80 &
To test this out, you can try accessing the URL:
curl http://demo.localdev.me:8080
Handling connection for 8080
<html><body><h1>It works!</h1></body></html>
If you have a publicly visible domain, you can forward that to the app. I have not tried it, but it looks like the command would be:
kubectl create ingress demo --class=nginx --rule YOUR.DOMAIN.COM/=demo:80
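The equivalent declarative manifest would look something like this (an untested sketch; the host is a placeholder, and “nginx” assumes the IngressClass created by the chart):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx        # class registered by the nginx-ingress chart
  rules:
  - host: YOUR.DOMAIN.COM        # placeholder domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo           # the service exposed above
            port:
              number: 80
```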
Here is an example of path-based routing of requests. First, create two pods and services that will handle the requests:
In apple.yaml:
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
  - name: apple-app
    image: hashicorp/http-echo
    args:
    - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
  - port: 5678 # Default port for image
In banana.yaml:
kind: Pod
apiVersion: v1
metadata:
  name: banana-app
  labels:
    app: banana
spec:
  containers:
  - name: banana-app
    image: hashicorp/http-echo
    args:
    - "-text=banana"
---
kind: Service
apiVersion: v1
metadata:
  name: banana-service
spec:
  selector:
    app: banana
  ports:
  - port: 5678 # Default port for image
Then, create an ingress-demo.yaml that will redirect requests:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /apple
        pathType: Prefix
        backend:
          service:
            name: apple-service
            port:
              number: 5678
      - path: /banana
        pathType: Prefix
        backend:
          service:
            name: banana-service
            port:
              number: 5678
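If NGINX is not the only (or default) ingress controller in the cluster, this Ingress may also need an explicit class in its spec (assuming the chart registered an IngressClass named “nginx”):

```yaml
spec:
  ingressClassName: nginx
```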
Apply the three YAML files. To test, you can access the Ingress service IP (10.11.12.201 in this example) or any node IP with the path prefix:
curl http://10.11.12.201/apple
apple
curl http://10.11.12.201/banana
banana
curl http://10.11.12.201/unknown
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
Requests with a matching path prefix are routed to the corresponding service/pod; anything else gets NGINX’s 404 page.
To remove NGINX Ingress, you can use “helm delete”.
Upgrading NGINX Ingress (not tried)
You must manually update the CRDs before upgrading NGINX. Pull the new release and then apply the updated CRDs:
cd ~/workspace/picluster
helm pull oci://ghcr.io/nginxinc/charts/nginx-ingress --untar --version VERSION_DESIRED
cd nginx-ingress
kubectl apply -f crds/
You may see this warning, but it can be ignored:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
You should check the release notes for any other specific actions needed for a new release. You can then upgrade NGINX:
helm upgrade my-nginx .
FYI: At the bottom of the NGINX install page, there are notes on how to upgrade without downtime.
Uninstalling NGINX Ingress
To uninstall, remove the CRDs and then uninstall with Helm, using the release name specified when the chart was installed:
kubectl delete -f ~/workspace/picluster/nginx-ingress/crds/
helm uninstall my-nginx -n nginx-ingress