I did a walkthrough of this process for vSphere 7 with Tanzu in my earlier post, Attaching a vSphere 7.0 with Tanzu supervisor cluster to Tanzu Mission Control and creating new Tanzu Kubernetes clusters, and the process is largely the same for vSphere 8 with Tanzu. Once you have your supervisor cluster registered as a management cluster in TMC, you can use the graphical UI to provision new Tanzu Kubernetes clusters, enable services, monitor the usage and performance of your Tanzu installations, and much more. I highly recommend this as a first step for easing the management of larger or multiple Tanzu deployments.
Create the Registration URL
Start out at https://console.cloud.vmware.com. You should see the VMware Tanzu Mission Control tile, likely alongside other services.

Click the Launch Service link in the VMware Tanzu Mission Control tile.
Navigate to Administration > Management clusters.

Click the Register Management Cluster dropdown.

Select vSphere with Tanzu.
Provide a valid name for the management cluster and set a cluster group if desired. The cluster group you select here determines the default cluster group used for placement of Tanzu Kubernetes clusters created under this management cluster. You can always select a different cluster group when creating a TKC, but you cannot change the cluster group for the management cluster.

Click the Next button.

You can see that I left the proxy information blank since my lab doesn't use a proxy, but you can create a proxy configuration via the Administration > Proxy configurations page.


I created a dummy proxy configuration named test; if I wanted to use a proxy, I could select it here.


Back in my actual deployment (after clearing the proxy configuration data and clicking Next), you should land on the Register page with a registration URL present.

Copy the registration URL and paste it somewhere so you can use it later. Click the View Management Cluster button.

Note that the status is currently Unknown. This will change as the process moves along.
Create the AgentInstall resource in the vSphere with Tanzu supervisor cluster
On a system with access to the supervisor cluster, switch your context to the supervisor cluster.
kubectl config use-context wcp.corp.vmw
Switched to context "wcp.corp.vmw".
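If you don't recall the exact context name, you can always list what's in your kubeconfig first:
kubectl config get-contexts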
A TMC namespace exists by default. Determine the name of the TMC namespace via the following command.
kubectl get ns |egrep 'NAME|tmc'
NAME         STATUS   AGE
svc-tmc-c8   Active   54d
Note: You can see from the output that the namespace is svc-tmc-c8. The “c8” portion corresponds to the MOID of the vSphere cluster where workload management is enabled.
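If you want to double-check which vSphere cluster that MOID belongs to, one option is govc (a quick sketch, assuming you have govc installed and pointed at your vCenter via the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables):
govc find -i / -type c
A cluster printed as ClusterComputeResource:domain-c8 is the one backing the svc-tmc-c8 namespace.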
Create a file named tmc-registration.yaml with content similar to the following:
apiVersion: installers.tmc.cloud.vmware.com/v1alpha1
kind: AgentInstall
metadata:
  name: tmc-agent-installer-config
  namespace: svc-tmc-c8
spec:
  operation: INSTALL
  registrationLink: https://epsg.tmc.cloud.vmware.com/installer?id=85d357b62f7cf3c09fcbeb57aa4738812249dfff668431501ae7a4989afd94d2&source=registration&type=tkgs
Note: Be sure to replace the namespace and registrationLink values with ones appropriate for your installation.
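If you'd like the API server to validate the manifest before anything is actually created, a server-side dry run is a handy sanity check:
kubectl create -f tmc-registration.yaml --dry-run=server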
You can check to see that there are very few resources present in the svc-tmc-c8 namespace prior to creating the AgentInstall object.
kubectl -n svc-tmc-c8 get all
NAME                                      READY   STATUS      RESTARTS   AGE
pod/tmc-agent-installer-28140020-w8ncw   0/1     Completed   0          34s

NAME                                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/tmc-agent-installer   */1 * * * *   False     0        34s             54d

NAME                                      COMPLETIONS   DURATION   AGE
job.batch/tmc-agent-installer-28140020   1/1           10s        34s
kubectl -n svc-tmc-c8 get agentinstalls
No resources found in svc-tmc-c8 namespace.
Create the agentInstall resource.
kubectl create -f tmc-registration.yaml
agentinstall.installers.tmc.cloud.vmware.com/tmc-agent-installer-config created
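If you want to follow the bootstrap as it happens, you can watch the pods in the namespace (press Ctrl+C to stop watching):
kubectl -n svc-tmc-c8 get pods -w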
If you check the svc-tmc-c8 namespace for resources again, you’ll see that some are being created.
kubectl -n svc-tmc-c8 get all
NAME                                      READY   STATUS              RESTARTS   AGE
pod/tmc-agent-installer-28140022-m698d   0/1     Completed           0          10s
pod/tmc-bootstrapper-qd86h               0/1     ContainerCreating   0          3s

NAME                                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/tmc-agent-installer   */1 * * * *   False     1        11s             54d

NAME                                      COMPLETIONS   DURATION   AGE
job.batch/tmc-agent-installer-28140022   0/1           11s        11s
job.batch/tmc-bootstrapper               0/1           4s         4s
kubectl -n svc-tmc-c8 get agentinstalls
NAME                                                                       AGE
agentinstall.installers.tmc.cloud.vmware.com/tmc-agent-installer-config   17s
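You can also inspect the AgentInstall resource directly; its status should reflect the progress of the installation, though the exact fields may vary by TMC version:
kubectl -n svc-tmc-c8 describe agentinstall tmc-agent-installer-config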
After a few minutes you’ll see many more resources present.
kubectl -n svc-tmc-c8 get all
NAME                                              READY   STATUS      RESTARTS   AGE
pod/agent-updater-59cc4d5785-jf54m                1/1     Running     0          5m32s
pod/agentupdater-workload-28140028-sgctn          0/1     Completed   0          51s
pod/cluster-health-extension-66b7fcb8bd-zjdth     1/1     Running     0          3m59s
pod/extension-manager-6f5696f5cb-hgqkf            1/1     Running     0          5m32s
pod/extension-updater-8dd5984db-t6mbp             1/1     Running     0          5m35s
pod/intent-agent-5c754b54bd-zptpm                 1/1     Running     0          4m5s
pod/sync-agent-6f6d76cf98-d5kh4                   1/1     Running     0          4m1s
pod/tmc-agent-installer-28140028-rkrv7            0/1     Completed   0          51s
pod/tmc-auto-attach-546c7c6465-m8j2n              1/1     Running     0          4m
pod/vsphere-resource-retriever-6bc4cbb559-2s7gj   1/1     Running     0          4m3s

NAME                                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/extension-manager-service   ClusterIP   10.96.1.154   <none>        443/TCP   5m33s

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/agent-updater                1/1     1            1           5m32s
deployment.apps/cluster-health-extension     1/1     1            1           4m
deployment.apps/extension-manager            1/1     1            1           5m32s
deployment.apps/extension-updater            1/1     1            1           5m35s
deployment.apps/intent-agent                 1/1     1            1           4m5s
deployment.apps/sync-agent                   1/1     1            1           4m2s
deployment.apps/tmc-auto-attach              1/1     1            1           4m1s
deployment.apps/vsphere-resource-retriever   1/1     1            1           4m3s

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/agent-updater-59cc4d5785                1         1         1       5m32s
replicaset.apps/cluster-health-extension-66b7fcb8bd     1         1         1       3m59s
replicaset.apps/extension-manager-6f5696f5cb            1         1         1       5m32s
replicaset.apps/extension-updater-8dd5984db             1         1         1       5m35s
replicaset.apps/intent-agent-5c754b54bd                 1         1         1       4m5s
replicaset.apps/sync-agent-6f6d76cf98                   1         1         1       4m2s
replicaset.apps/tmc-auto-attach-546c7c6465              1         1         1       4m1s
replicaset.apps/vsphere-resource-retriever-6bc4cbb559   1         1         1       4m3s

NAME                                    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/agentupdater-workload    */1 * * * *   False     0        51s             5m32s
cronjob.batch/tmc-agent-installer      */1 * * * *   False     0        51s             54d

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/agentupdater-workload-28140028   1/1           18s        51s
job.batch/tmc-agent-installer-28140028     1/1           11s        51s
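Rather than eyeballing the output, you can also have kubectl confirm that every agent deployment has reached its Available condition:
kubectl -n svc-tmc-c8 wait deployment --all --for=condition=Available --timeout=300s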
And back in the TMC UI, you should see that the supervisor cluster is attached as a TMC management cluster and is in a Healthy state.

If you click on the Workload clusters tab at the bottom, you’ll see that there is a TKG cluster already provisioned.

I’ll go over bringing this cluster into TMC management in a future (short) post.