January 16

Part VII: Cluster Backup

We will use Velero to perform full or partial backup and restore of the cluster, and Minio to provide a local S3 storage area on another computer on the network (I just used my Mac laptop, but you could use a server or multiple servers instead), so that we can do scheduled backups with Velero.

On the Mac, you can use brew to install Minio and the Minio client (mc):

brew upgrade
brew install minio/stable/minio
brew install minio/stable/mc

Create an area for data storage and an area for Minio:

cd ~/workspace/picluster
poetry shell
mkdir -p ~/workspace/minio-data
mkdir -p ~/workspace/picluster/minio

Create the file ~/workspace/picluster/minio/minio.cfg with the desired user/password (the default is minioadmin/minioadmin) and the host name (or IP) where the server will run (in this example, I use my laptop's name):

# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD sets the root account for the MinIO server.
# This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment.
# Omit to use the default values 'minioadmin:minioadmin'.
# MinIO recommends setting non-default values as a best practice, regardless of environment

MINIO_ROOT_USER=minime
MINIO_ROOT_PASSWORD=PASSWORD_YOU_WANT_HERE

# MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.

MINIO_VOLUMES="~/workspace/minio-data"

# MINIO_SERVER_URL sets the hostname of the local machine for use with the MinIO Server
# MinIO assumes your network control plane can correctly resolve this hostname to the local machine

# Uncomment the following line and replace the value with the correct hostname for the local machine and port for the MinIO server (9000 by default).

MINIO_SERVER_URL="http://trinity.home:9000"

Note: there is no way to change the user/password from the console later.

Create a minio-credentials file with the same user name and password as in the minio.cfg file:

[default]
aws_access_key_id = minime
aws_secret_access_key = SAME_PASSWORD_AS_ABOVE

I did “chmod 700” for both minio.cfg and minio-credentials.
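In other words (paths assume the layout created above):

chmod 700 ~/workspace/picluster/minio/minio.cfg
chmod 700 ~/workspace/picluster/minio/minio-credentials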

Next, create a script (minio-server-start) to start up Minio with the desired settings:

#!/usr/bin/env bash
# Run from ~/workspace/picluster/minio, so the relative config path resolves
export MINIO_CONFIG_ENV_FILE=./minio.cfg
minio server --console-address :9090 &
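Make the script executable and launch it (assuming it lives in ~/workspace/picluster/minio):

cd ~/workspace/picluster/minio
chmod +x minio-server-start
./minio-server-start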

When you run this script, the output will include a warning that the local host has all the data, so a failure will cause loss of data (duh). It will show the URLs for the API (port 9000) and the console (port 9090), along with the username and password for access. Near the bottom, it shows an alias command that you should copy and paste; it names the server and provides the credentials info. It looks like:

mc alias set 'myminio' 'http://trinity.home:9000' 'minime' 'THE_PASSWORD_FROM_CONFIG'

Then do the following to make sure that the server is running the latest code:

mc admin update myminio

In your browser, go to the console URL and log in with the username/password. Under the Administrator -> Buckets menu on the left panel, create a bucket called “kubernetes”. I haven’t tried them, but you can also turn on versioning, object locking, and quota.
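Alternatively, the bucket can be created from the command line with the mc client (a quick sketch, using the “myminio” alias set earlier):

mc mb myminio/kubernetes
mc ls myminio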

Ref: https://velero.io/docs/main/contributions/minio/

For the Mac, use brew to install Velero, and either note the version from the install output or check with the list command (in my case, 1.12.3):

brew install velero
brew list velero

You can check compatibility between the Velero version you have and the Kubernetes version running (and adjust the version used by brew, if needed); the compatibility matrix is in the project's README.md. (Optionally) Pull the Velero sources from git, so that we can use the examples and have the documentation:

cd ~/workspace/picluster
git clone https://github.com/vmware-tanzu/velero.git
cd velero

The README.md indicates that Velero 1.12.x works with Kubernetes 1.27.3 and 1.13.x with Kubernetes 1.28.3. We have Kubernetes 1.28, but there is no brew version for Velero 1.13 right now, so we’ll hope Velero 1.12.3 works.
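You can confirm the versions in play with the following (the server portion of the velero output will only appear once Velero is installed in the cluster, a later step):

velero version
kubectl version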

We need the Velero plugin for AWS. The available plugins are listed in the vmware-tanzu/velero-plugin-for-aws repo on GitHub. For Velero 1.12.x, we need AWS plugin 1.8.x. The plugin tags show that v1.8.2 is the latest.

Next, start up Velero, specifying the plugin version to use, the bucket name you created in Minio (“kubernetes”), the credentials file, Minio as the S3 storage, and the Minio API URL (your host name with port 9000):

velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.8.2 \
    --bucket kubernetes \
    --secret-file ~/workspace/picluster/minio/minio-credentials \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://trinity.home:9000

It will display that Velero is installed and that you can use “kubectl logs deployment/velero -n velero” to see the status.
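As a quick sanity check that the deployment came up (output is illustrative):

kubectl get deployment/velero -n velero
kubectl get pods -n velero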

Check that the backup location is available with:

velero backup-location get
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        kubernetes      Available   2024-01-16 13:15:52 -0500 EST   ReadWrite     true

If you have the Velero git repo pulled, as mentioned above, you can start an example:

cd ~/workspace/picluster/velero
kubectl apply -f examples/nginx-app/base.yaml

kubectl get all -n nginx-example
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-75b696dc55-25d6r   1/1     Running   0          66s
pod/nginx-deployment-75b696dc55-7h5zx   1/1     Running   0          66s

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/my-nginx   LoadBalancer   10.233.46.15   <pending>     80:30270/TCP   66s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   2/2     2            2           66s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-75b696dc55   2         2         2       66s

You should see the deployment running (note: there is no external IP, as I don’t have a load balancer running right now). If you want to back up just this application, you can do:

velero backup create nginx-backup --selector app=nginx
velero backup get
NAME           STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
nginx-backup   Completed   0        0          2024-01-16 13:22:53 -0500 EST   29d       default            app=nginx

velero backup describe nginx-backup
velero backup logs nginx-backup
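You can also list the objects that the backup created, using the mc client (a sketch, using the “myminio” alias):

mc ls --recursive myminio/kubernetes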

If you look at the “kubernetes” bucket from the Minio console, you’ll see the backup files there. Now, we can delete the application and then restore it…

kubectl delete namespace nginx-example
kubectl get all -n nginx-example
No resources found in nginx-example namespace.

velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20240116132642" submitted successfully.
Run `velero restore describe nginx-backup-20240116132642` or `velero restore logs nginx-backup-20240116132642` for more details.
kubectl get all -n nginx-example
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-75b696dc55-25d6r   1/1     Running   0          3s
pod/nginx-deployment-75b696dc55-7h5zx   1/1     Running   0          3s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/my-nginx   LoadBalancer   10.233.16.165   <pending>     80:31834/TCP   3s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   2/2     2            2           2s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-75b696dc55   2         2         2       3s

You can back up the full cluster with “velero backup create FULL_BACKUP-2024-01-17”, using the name to identify the backup. You can use “velero backup get” to see a list of backups, and “velero restore get” to see a list of restores. You can even schedule backups with a command, like the following:

velero schedule create homek8s --schedule="@every 6h"

I haven’t tried this, because I’m using my laptop, which is not always on.
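If you do set up a schedule, it can be listed and removed later; a retention period can also be set on the create command with the --ttl flag (the 72-hour value here is just an example, not something I tested):

velero schedule create homek8s --schedule="@every 6h" --ttl 72h0m0s
velero schedule get
velero schedule delete homek8s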

To clean up, we’ll first delete the NGINX app that was installed, along with the corresponding Velero backup and restore records.

kubectl delete -f examples/nginx-app/base.yaml
velero backup delete nginx-backup
velero restore delete nginx-backup

You can check in Minio to make sure the backups are gone. Next, Velero can be removed from the cluster, along with the CRDs used:

kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero

Lastly, you can delete the “kubernetes” bucket from the Minio console, and kill the Minio process.
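Since the start script launched Minio in the background, one way to find and stop it (a sketch):

pgrep -f "minio server"    # find the process ID
pkill -f "minio server"    # stop the server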

I did have a problem at one point (it may have been due to an older Minio version), where I deleted a backup from Velero and it later re-appeared when doing “velero backup get”. I watched operations with “mc admin trace myminio” and saw that requests to remove the backup were coming into Minio, but the objects were NOT being removed. Velero would later sync with Minio, see the backup, and show it still there on the next “velero backup get” command.

I found that the following would remove everything under the bucket (note the “myminio” alias prefix):

mc rm --recursive --force --dangerous myminio/kubernetes

There is also an “mc rb” command to remove the bucket itself (I have not tried it), or an individual backup can be removed with “mc rm --recursive --force” and the path from the alias down, like “myminio/kubernetes/backups/nginx-backup” or “myminio/kubernetes/restores/nginx-backup-20231205150432”. I think I tried doing it from the Minio UI, but the files were not removed. My guess is that the UI does an API request to remove the file, just like what Velero does, whereas the command line actually removes the objects.
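For reference, the bucket-removal variant would look like this (untested on my setup; --force allows removing a non-empty bucket):

mc rb --force myminio/kubernetes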

