In my previous post, Registering a vSphere 8 with Tanzu Supervisor Cluster with Tanzu Mission Control, I detailed the steps needed to register a vSphere 8 with Tanzu supervisor cluster with Tanzu Mission Control (TMC). Once this step is done, you can use TMC to manage existing Tanzu Kubernetes clusters (TKCs) created under the supervisor cluster.
My last post ended with the following screenshot:

You can see from this view in TMC that there is an existing TKC named tkg2-cluster-1 and that its Managed status is No. This simply means that TMC currently has no insight into this cluster and can perform no operations against it.
With the tkg2-cluster-1 cluster selected, click the Manage 1 Cluster button.

In my previous post, I set the default cluster group for managed clusters to clittle. That setting only applies to newly created clusters, so I chose it manually (it's a mandatory setting) while managing this cluster. You can also specify a local registry for the needed Tanzu Kubernetes Release (TKR) images if you're in an air-gapped environment and your supervisor cluster does not have internet access, and you can specify a proxy configuration just for this cluster.
Click the Manage button.

Note that the Managed status for the cluster is now Yes.
Switch your kubectl context to tkg2-cluster-1.
kubectl config use-context tkg2-cluster-1
Switched to context "tkg2-cluster-1".
You should see a recently created namespace named vmware-system-tmc.
kubectl get ns vmware-system-tmc
NAME STATUS AGE
vmware-system-tmc Active 79s
If you check the contents of this namespace, you'll see that many resources are in the process of being created.
kubectl -n vmware-system-tmc get all
NAME READY STATUS RESTARTS AGE
pod/agent-updater-9765678d5-z5hk5 1/1 Running 0 62s
pod/agentupdater-workload-28140036-cv8r4 0/1 Completed 0 20s
pod/cluster-auth-pinniped-6d7cc67bbc-8z5mv 0/1 Init:0/1 0 11s
pod/cluster-auth-pinniped-6d7cc67bbc-vn9rf 0/1 Init:0/1 0 11s
pod/cluster-health-extension-7bc7bdc5c4-6bnq9 0/1 ContainerCreating 0 3s
pod/extension-manager-8bcf6f777-k4rpd 1/1 Running 0 59s
pod/extension-updater-6d4df9bcd5-mcvpc 1/1 Running 0 61s
pod/policy-insight-extension-manager-58dd6c5667-v5fdc 1/1 Running 0 18s
pod/policy-sync-extension-f59d494cb-jjbzd 0/1 ContainerCreating 0 13s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cluster-auth-pinniped-api ClusterIP 10.102.87.1 <none> 443/TCP 10s
service/cluster-auth-pinniped-proxy ClusterIP 10.98.115.40 <none> 443/TCP 10s
service/extension-manager-service ClusterIP 10.105.21.251 <none> 443/TCP 59s
service/extension-updater ClusterIP 10.106.223.213 <none> 9988/TCP 62s
service/inspection-extension ClusterIP 10.108.185.125 <none> 443/TCP 1s
service/policy-insight-extension-service ClusterIP 10.99.223.14 <none> 443/TCP 18s
service/policy-sync-extension ClusterIP 10.107.105.139 <none> 443/TCP 13s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/agent-updater 1/1 1 1 62s
deployment.apps/cluster-auth-pinniped 0/2 2 0 11s
deployment.apps/cluster-health-extension 0/1 1 0 3s
deployment.apps/extension-manager 1/1 1 1 59s
deployment.apps/extension-updater 1/1 1 1 61s
deployment.apps/inspection-extension 0/1 0 0 0s
deployment.apps/policy-insight-extension-manager 1/1 1 0 18s
deployment.apps/policy-sync-extension 0/1 1 0 13s
NAME DESIRED CURRENT READY AGE
replicaset.apps/agent-updater-9765678d5 1 1 1 62s
replicaset.apps/cluster-auth-pinniped-6d7cc67bbc 2 2 0 11s
replicaset.apps/cluster-health-extension-7bc7bdc5c4 1 1 0 3s
replicaset.apps/extension-manager-8bcf6f777 1 1 1 59s
replicaset.apps/extension-updater-6d4df9bcd5 1 1 1 61s
replicaset.apps/inspection-extension-7bc4b5655f 1 1 0 0s
replicaset.apps/policy-insight-extension-manager-58dd6c5667 1 1 1 18s
replicaset.apps/policy-sync-extension-f59d494cb 1 1 0 13s
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/agentupdater-workload */1 * * * * False 1 20s 62s
NAME COMPLETIONS DURATION AGE
job.batch/agentupdater-workload-28140036 0/1 20s 20s
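Rather than re-running the command above until everything settles, you can have kubectl block until the agent pods come up. This is just a sketch; the timeout value is arbitrary, and the phase filter excludes the completed agentupdater Job pods, which will never report Ready:

```shell
# Wait up to 5 minutes for all Running pods in the TMC namespace to become Ready
# (Completed job pods are excluded via the phase field selector)
kubectl -n vmware-system-tmc wait pod --all \
  --for=condition=Ready \
  --field-selector=status.phase=Running \
  --timeout=300s
```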
In the TMC UI, navigate to Clusters and click on the cluster name. You will see some details are starting to come in for the cluster while the various agents and extensions are being deployed.

Eventually, the cluster Overview page should look similar to the following.

All resources in the vmware-system-tmc namespace should be in a running or completed state.
kubectl -n vmware-system-tmc get all
NAME READY STATUS RESTARTS AGE
pod/agent-updater-9765678d5-z5hk5 1/1 Running 0 3m38s
pod/agentupdater-workload-28140038-mxmrd 0/1 Completed 0 56s
pod/cluster-auth-pinniped-6d7cc67bbc-8z5mv 1/1 Running 0 2m47s
pod/cluster-auth-pinniped-6d7cc67bbc-vn9rf 1/1 Running 0 2m47s
pod/cluster-auth-pinniped-kube-cert-agent-7fbf54f49f-9pwf2 1/1 Running 0 119s
pod/cluster-health-extension-7bc7bdc5c4-6bnq9 1/1 Running 0 2m39s
pod/cluster-secret-6c4dbd9896-6tmr8 1/1 Running 0 2m30s
pod/extension-manager-8bcf6f777-k4rpd 1/1 Running 0 3m35s
pod/extension-updater-6d4df9bcd5-mcvpc 1/1 Running 0 3m37s
pod/gatekeeper-operator-manager-85f9bc7cc5-zsz5l 1/1 Running 0 2m35s
pod/inspection-extension-7bc4b5655f-5ztp5 1/1 Running 0 2m36s
pod/intent-agent-6c45b94768-d7cxh 1/1 Running 0 2m32s
pod/logs-collector-cluster-auth-pinniped-20230703163654-4rfbg 0/1 Completed 0 2m2s
pod/logs-collector-cluster-health-extension-20230703163654-5bkzr 0/1 Completed 0 2m2s
pod/logs-collector-cluster-secret-20230703163654-jj8cb 0/1 Completed 0 2m2s
pod/logs-collector-extension-manager-20230703163654-m2mv9 0/1 Completed 0 2m2s
pod/logs-collector-gatekeeper-operator-20230703163653-94pmp 0/1 Completed 0 2m2s
pod/logs-collector-inspection-20230703163654-86x77 0/1 Completed 0 2m2s
pod/logs-collector-package-deployment-20230703163654-9kxp4 0/1 Completed 0 2m2s
pod/logs-collector-policy-insight-extension-20230703163654-rblm7 0/1 Completed 0 2m2s
pod/logs-collector-policy-sync-extension-20230703163654-xl2vd 0/1 Completed 0 2m2s
pod/logs-collector-tmc-observer-20230703163654-wvkp8 0/1 Completed 0 2m2s
pod/package-deployment-84d64984b5-9vc88 1/1 Running 0 2m31s
pod/policy-insight-extension-manager-58dd6c5667-v5fdc 1/1 Running 0 2m54s
pod/policy-sync-extension-f59d494cb-jjbzd 1/1 Running 0 2m49s
pod/sync-agent-6886f657bd-8p55b 1/1 Running 0 2m33s
pod/tmc-observer-5c947d89c4-gddm7 1/1 Running 0 2m28s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cluster-auth-pinniped-api ClusterIP 10.102.87.1 <none> 443/TCP 2m47s
service/cluster-auth-pinniped-proxy ClusterIP 10.98.115.40 <none> 443/TCP 2m47s
service/extension-manager-service ClusterIP 10.105.21.251 <none> 443/TCP 3m36s
service/extension-updater ClusterIP 10.106.223.213 <none> 9988/TCP 3m39s
service/gatekeeper-operator-service ClusterIP 10.98.5.6 <none> 443/TCP 2m36s
service/inspection-extension ClusterIP 10.108.185.125 <none> 443/TCP 2m38s
service/policy-insight-extension-service ClusterIP 10.99.223.14 <none> 443/TCP 2m55s
service/policy-sync-extension ClusterIP 10.107.105.139 <none> 443/TCP 2m50s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/agent-updater 1/1 1 1 3m39s
deployment.apps/cluster-auth-pinniped 2/2 2 2 2m48s
deployment.apps/cluster-auth-pinniped-kube-cert-agent 1/1 1 1 2m
deployment.apps/cluster-health-extension 1/1 1 1 2m40s
deployment.apps/cluster-secret 1/1 1 1 2m31s
deployment.apps/extension-manager 1/1 1 1 3m36s
deployment.apps/extension-updater 1/1 1 1 3m38s
deployment.apps/gatekeeper-operator-manager 1/1 1 1 2m36s
deployment.apps/inspection-extension 1/1 1 1 2m37s
deployment.apps/intent-agent 1/1 1 1 2m33s
deployment.apps/package-deployment 1/1 1 1 2m32s
deployment.apps/policy-insight-extension-manager 1/1 1 1 2m55s
deployment.apps/policy-sync-extension 1/1 1 1 2m50s
deployment.apps/sync-agent 1/1 1 1 2m34s
deployment.apps/tmc-observer 1/1 1 1 2m30s
NAME DESIRED CURRENT READY AGE
replicaset.apps/agent-updater-9765678d5 1 1 1 3m39s
replicaset.apps/cluster-auth-pinniped-6d7cc67bbc 2 2 2 2m48s
replicaset.apps/cluster-auth-pinniped-kube-cert-agent-7fbf54f49f 1 1 1 2m
replicaset.apps/cluster-health-extension-7bc7bdc5c4 1 1 1 2m40s
replicaset.apps/cluster-secret-6c4dbd9896 1 1 1 2m31s
replicaset.apps/extension-manager-8bcf6f777 1 1 1 3m36s
replicaset.apps/extension-updater-6d4df9bcd5 1 1 1 3m38s
replicaset.apps/gatekeeper-operator-manager-85f9bc7cc5 1 1 1 2m36s
replicaset.apps/inspection-extension-7bc4b5655f 1 1 1 2m37s
replicaset.apps/intent-agent-6c45b94768 1 1 1 2m33s
replicaset.apps/package-deployment-84d64984b5 1 1 1 2m32s
replicaset.apps/policy-insight-extension-manager-58dd6c5667 1 1 1 2m55s
replicaset.apps/policy-sync-extension-f59d494cb 1 1 1 2m50s
replicaset.apps/sync-agent-6886f657bd 1 1 1 2m34s
replicaset.apps/tmc-observer-5c947d89c4 1 1 1 2m29s
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/agentupdater-workload */1 * * * * False 0 57s 3m39s
NAME COMPLETIONS DURATION AGE
job.batch/agentupdater-workload-28140038 1/1 15s 57s
job.batch/logs-collector-cluster-auth-pinniped-20230703163654 1/1 36s 2m3s
job.batch/logs-collector-cluster-health-extension-20230703163654 1/1 39s 2m3s
job.batch/logs-collector-cluster-secret-20230703163654 1/1 37s 2m3s
job.batch/logs-collector-extension-manager-20230703163654 1/1 39s 2m3s
job.batch/logs-collector-gatekeeper-operator-20230703163653 1/1 52s 2m3s
job.batch/logs-collector-inspection-20230703163654 1/1 38s 2m3s
job.batch/logs-collector-package-deployment-20230703163654 1/1 31s 2m3s
job.batch/logs-collector-policy-insight-extension-20230703163654 1/1 36s 2m3s
job.batch/logs-collector-policy-sync-extension-20230703163654 1/1 97s 2m3s
job.batch/logs-collector-tmc-observer-20230703163654 1/1 31s 2m3s
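Instead of eyeballing the full listing, a quick way to confirm nothing is stuck is to filter on pod phase. Running and Succeeded are the only phases you expect at this point, so empty output means the namespace is healthy:

```shell
# List any pod that is neither Running nor Completed; no output means all is well
kubectl -n vmware-system-tmc get pods \
  --field-selector=status.phase!=Running,status.phase!=Succeeded
```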
You can use the various tabs at the top of the cluster page to see more details on the cluster’s resources.




You can also drill down further into individual components to see more detail. For example, the following screenshot shows the specifics about the control plane node in this cluster.

And you can drill further down into any child resources. The following shows the details for one of the Antrea pods running on the control plane node.

One thing that is arguably easier to do in TMC than via the CLI or YAML is scaling a cluster. Start at the Node Pools tab for the cluster, where you can see how many worker nodes are currently configured and their sizing.

You can see that my node pool is set to have 2 nodes at a best-effort-medium size. I’m going to leave the size alone but increase the number of nodes to 3.
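As an aside, the CLI/YAML equivalent of this change is to edit the worker replica count on the Cluster object in the supervisor namespace. The following is a hedged sketch against the Cluster API v1beta1 topology schema; the machine deployment index (0) assumes a single node pool, as in this cluster:

```shell
# Bump the node pool from 2 to 3 replicas with a JSON patch
# (run this against the supervisor context, not the workload cluster)
kubectl -n tkg2-cluster-namespace patch cluster tkg2-cluster-1 \
  --type json \
  -p '[{"op":"replace","path":"/spec/topology/workers/machineDeployments/0/replicas","value":3}]'
```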

After you save this change, you should see that the status of the node pool changes to Updating.

In vCenter Server, you will almost immediately see activity around a new node being created.

Within the cluster, you can see that there are still only two worker nodes.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg2-cluster-1-qztgr-vs6dt Ready control-plane 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-7cmw4 Ready <none> 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-wz5z8 Ready <none> 80d v1.24.9+vmware.1
But if you switch to the supervisor namespace that contains the cluster, you can see that a new machine object is being created.
kubectl config use-context tkg2-cluster-namespace
Switched to context "tkg2-cluster-namespace".
kubectl get machine
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
tkg2-cluster-1-qztgr-vs6dt tkg2-cluster-1 tkg2-cluster-1-qztgr-vs6dt vsphere://421de056-f45b-3d6c-d501-70eefd9e35df Running 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-7cmw4 tkg2-cluster-1 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-7cmw4 vsphere://421d5260-3dbb-96e8-2a51-85f0107948e6 Running 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-kmvvx tkg2-cluster-1 Provisioning 86s v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-wz5z8 tkg2-cluster-1 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-wz5z8 vsphere://421d120c-0ad6-d38d-bde3-cb26102614d9 Running 80d v1.24.9+vmware.1
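Rather than re-running kubectl get machine, you can watch the phase transitions as they happen, or block until provisioning finishes. Both forms below are sketches; Machine objects carry a Cluster API Ready condition that kubectl wait can target:

```shell
# Stream phase changes for the new machine as it provisions
kubectl get machine --watch

# Or block until every machine in the namespace reports Ready
kubectl wait machine --all --for=condition=Ready --timeout=10m
```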
Eventually, you will see a third worker node powered on under the tkg2-cluster-1 object in the vCenter Server inventory.

And you will see that the new machine object is in a Running state.
kubectl get machine
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
tkg2-cluster-1-qztgr-vs6dt tkg2-cluster-1 tkg2-cluster-1-qztgr-vs6dt vsphere://421de056-f45b-3d6c-d501-70eefd9e35df Running 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-7cmw4 tkg2-cluster-1 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-7cmw4 vsphere://421d5260-3dbb-96e8-2a51-85f0107948e6 Running 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-kmvvx tkg2-cluster-1 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-kmvvx vsphere://421dc4aa-d951-9853-9697-9b0385d99ee2 Running 5m41s v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-wz5z8 tkg2-cluster-1 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-wz5z8 vsphere://421d120c-0ad6-d38d-bde3-cb26102614d9 Running 80d v1.24.9+vmware.1
The cluster itself will show three worker nodes in a Ready state.
kubectl config use-context tkg2-cluster-1
Switched to context "tkg2-cluster-1".
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg2-cluster-1-qztgr-vs6dt Ready control-plane 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-7cmw4 Ready <none> 80d v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-kmvvx Ready <none> 69s v1.24.9+vmware.1
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-ttkg5-7c546557c6-wz5z8 Ready <none> 80d v1.24.9+vmware.1
If you move over to the Nodes tab in TMC, you should see the new worker node there as well.

It has been a while since I've used TMC and I was very excited to see that you can now use it to see all of the Tanzu packages that are installed on your cluster. You simply navigate to the Add-ons page and then to Installed Tanzu Packages.

If you read my earlier post, Installing Packages to a TKG cluster in vSphere 8 with Tanzu, you'll know that many of these (cert-manager, contour, external-dns, fluent-bit, grafana, prometheus) were installed manually to provide specific services to the cluster. If you are less inclined to do things from the command line, you can use TMC to deploy these packages instead. Click the Browse Packages button to see what is available.

For example, if you want to install cert-manager, you click on the Install Package link under it and are presented with the following page:

With cert-manager being one of the simpler packages to install, you really only need to supply a name for the installation and then click the Install Package button. More complex packages that require customization to work properly would take a little more effort, of course.
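For reference, the same cert-manager installation can be done from the command line with the Tanzu CLI, as covered in the earlier packages post. This is a hedged sketch; the version string and target namespace are examples, so check what your cluster actually offers first:

```shell
# Discover the cert-manager versions available in the cluster
tanzu package available list cert-manager.tanzu.vmware.com -n tkg-system

# Install it (the version shown is an example; substitute one from the list above)
tanzu package install cert-manager \
  --package-name cert-manager.tanzu.vmware.com \
  --version 1.7.2+vmware.1-tkg.1 \
  --namespace tkg-system
```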