In my earlier post, Enabling and Using the Velero Service on a vSphere 8 with Tanzu Supervisor Cluster, I walked through the process of getting the Velero service running in a vSphere 8 with Tanzu Supervisor cluster. That process largely leveraged the Velero operator to ensure that all components were created properly, and the Velero Data Manager to perform stateful backups and restores. These same components can be used with Velero in a TKG cluster managed by the same Supervisor cluster.
You can refer to another earlier post, Creating a Tanzu Kubernetes cluster in vSphere 8 with Tanzu, for the specifics of my TKG cluster.
Switch to the TKG cluster context:
kubectl config use-context tkg2-cluster-1
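If you don't recall the exact context name, kubectl can list the contexts available to you (tkg2-cluster-1 is my TKG cluster; yours will likely differ):
kubectl config get-contexts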
Create a new namespace in the TKG cluster named velero.
kubectl create ns velero
You need to create a ConfigMap definition that tells the Velero vSphere plugin that it's running in a TKG (guest) cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: velero-vsphere-plugin-config
data:
  cluster_flavor: GUEST
Save the definition as tkgs-velero-vsphere-plugin-config.yaml and apply it:
kubectl -n velero apply -f tkgs-velero-vsphere-plugin-config.yaml
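If you want to double-check what the plugin will see, you can read the ConfigMap back:
kubectl -n velero get configmap velero-vsphere-plugin-config -o yaml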
If you haven't already done so, download and extract the velero CLI from https://github.com/vmware-tanzu/velero/releases/download/v1.9.2/velero-v1.9.2-linux-amd64.tar.gz.
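A typical download-and-extract sequence looks like the following (putting the binary in /usr/local/bin is just a common choice; adjust to taste):
wget https://github.com/vmware-tanzu/velero/releases/download/v1.9.2/velero-v1.9.2-linux-amd64.tar.gz
tar -xzf velero-v1.9.2-linux-amd64.tar.gz
sudo mv velero-v1.9.2-linux-amd64/velero /usr/local/bin/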
You can use the velero command to install the Velero components to the velero namespace in the TKG cluster. This is essentially the same process as was used in the Supervisor cluster.
velero install \
  --provider aws \
  --bucket velero \
  --secret-file /home/ubuntu/Velero/s3-credentials \
  --features=EnableVSphereItemActionPlugin \
  --cacert /usr/local/share/ca-certificates/controlcenter.crt \
  --plugins velero/velero-plugin-for-aws:v1.6.1,vsphereveleroplugin/velero-plugin-for-vsphere:v1.4.2 \
  --snapshot-location-config region=minio \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.110.60:9000
CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: attempting to create resource client
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource client
CustomResourceDefinition/resticrepositories.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: attempting to create resource client
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: attempting to create resource client
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero: attempting to create resource
Namespace/velero: attempting to create resource client
Namespace/velero: already exists, proceeding
Namespace/velero: created
ClusterRoleBinding/velero: attempting to create resource
ClusterRoleBinding/velero: attempting to create resource client
ClusterRoleBinding/velero: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: attempting to create resource client
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: attempting to create resource client
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: attempting to create resource client
BackupStorageLocation/default: created
VolumeSnapshotLocation/default: attempting to create resource
VolumeSnapshotLocation/default: attempting to create resource client
VolumeSnapshotLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: attempting to create resource client
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
You can make sure that the operator is doing its job by looking at the logs for the velero-vsphere-operator container while in the Supervisor cluster context:
kubectl -n svc-velero-vsphere-domain-c1006 logs velero-vsphere-operator-7f5bf5d8f6-9qqn2
You can also watch the events in the velero namespace of the TKG cluster:
kubectl -n velero get events
Unlike deploying Velero with velero-vsphere in the Supervisor cluster, the velero deployment in the TKG cluster has no issues with taints/tolerations, and everything comes up as expected.
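A quick check that everything did come up (pod names will differ in your environment):
kubectl -n velero get deployment,pods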
To test the backup, you can deploy the nginx example (with-pv.yaml) supplied with the Velero download.
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-example
  labels:
    app: nginx
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-logs
  namespace: nginx-example
  labels:
    app: nginx
spec:
  # Optional:
  storageClassName: k8s-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        pre.hook.backup.velero.io/container: fsfreeze
        pre.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]'
        post.hook.backup.velero.io/container: fsfreeze
        post.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]'
    spec:
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-logs
      containers:
        - image: nginx:1.17.6
          name: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/var/log/nginx"
              name: nginx-logs
              readOnly: false
        - image: ubuntu:bionic
          name: fsfreeze
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: "/var/log/nginx"
              name: nginx-logs
              readOnly: false
          command:
            - "/bin/bash"
            - "-c"
            - "sleep infinity"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: my-nginx
  namespace: nginx-example
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
When this was deployed to the Supervisor cluster, the namespace stanza had to be removed since namespaces cannot be created via kubectl there. It can be left in place in the TKG cluster, and the nginx deployment will end up in a new nginx-example namespace.
kubectl apply -f with-pv.yaml
namespace/nginx-example created
persistentvolumeclaim/nginx-logs created
deployment.apps/nginx-deployment created
service/my-nginx created
kubectl -n nginx-example get po,pvc,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-6779884c68-htqj6 2/2 Running 0 5m10s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nginx-logs Bound pvc-94cc3fe6-a8a1-451d-a3b7-bd10c24cb517 50Mi RWO k8s-policy 4m56s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx LoadBalancer 10.98.168.115 10.40.14.70 80:30200/TCP 5m20s
This looks nearly identical to what was seen when this was deployed to the Supervisor cluster, and the same object types were created in vSphere and NSX.
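Before taking a backup, it can be useful to put some data on the persistent volume; hitting the service a few times makes nginx write access logs to /var/log/nginx, which is backed by the PVC (substitute the EXTERNAL-IP from your own output):
curl http://10.40.14.70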
Backing this up will be a similar process to what was done in the Supervisor cluster.
velero backup create nginx-tkgs --include-namespaces nginx-example
Backup request "nginx-tkgs" submitted successfully.
Run `velero backup describe nginx-tkgs` or `velero backup logs nginx-tkgs` for more details.
I did not specify the resource types since this is a standard namespace and Velero will have no issues backing up all resource types (unlike in a Supervisor namespace).
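If you want more than the one-line summary, the describe command with the --details flag shows the backed-up resources and volume snapshot information:
velero backup describe nginx-tkgs --details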
You should see the same tasks in vCenter that were observed during the last backup.

And you can see that there is a new backup listed:
velero backup get
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
nginx Completed 0 0 2023-04-10 09:12:47 -0700 PDT 27d default <none>
nginx-tkgs Completed 0 0 2023-04-12 10:55:30 -0700 PDT 29d default <none>
And in MinIO, there are new folders created corresponding to this new backup.
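If you have the MinIO client (mc) handy, you can also list the backup's objects from the command line. Assuming an mc alias named minio pointing at the MinIO endpoint, Velero stores each backup under backups/<backup-name>/ in the bucket:
mc ls minio/velero/backups/nginx-tkgs/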


The restore process is the same as noted in the Delete a Backed-Up Stateful Workload and Restore it with Velero section of my earlier post, Enabling and Using the Velero Service on a vSphere 8 with Tanzu Supervisor Cluster.
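If you just need the command, restoring this backup (after deleting the nginx-example namespace to simulate a loss) comes down to:
velero restore create --from-backup nginx-tkgs
You can track its progress with velero restore get or velero restore describe.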