In my previous post Attaching a vSphere 7.0 with Tanzu supervisor cluster to Tanzu Mission Control and creating new Tanzu Kubernetes clusters, I walked through the process of registering a vSphere 7.0 with Tanzu supervisor cluster to Tanzu Mission Control for the purpose of creating and managing workload clusters. Since then, I’ve spent some more time with this and have been experimenting with upgrades as well as more command line options. In this post, I’ll again create a Tanzu Kubernetes cluster via the TMC UI and will also upgrade it. I’ll then dig into using the TMC command line utility, tmc, to create and upgrade Tanzu Kubernetes clusters.
Create a new cluster via the TMC UI
I’m starting off with a vSphere with Tanzu deployment which is using HAProxy for load balancing (instead of NSX-T) and which already has a single Tanzu Kubernetes cluster named tkg-cluster deployed. You can see the supervisor cluster and Tanzu Kubernetes cluster in the vSphere Client:

You can also see the Tanzu Kubernetes cluster by listing out the tanzukubernetescluster custom resource in the supervisor cluster (tkc for short):
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
Since this cluster was created outside of TMC, I’m going to create a new cluster from TMC for the purposes of upgrading it via TMC. I have already registered my supervisor cluster to TMC (named deletens). Since much of the cluster creation process is the same as my earlier post, I’ll largely be posting screenshots of the process.




One thing I wanted to note this time around is that you should update your Content Library to be sure you have access to the latest list of Kubernetes versions. You can do this in the vSphere Client by right-clicking the appropriate content library and choosing Synchronize.


You can see the result of this by checking the available tanzukubernetesrelease custom resources in the supervisor cluster (tkr for short). This is the before:
kubectl get tkr
NAME VERSION CREATED
v1.16.12---vmware.1-tkg.1.da7afe7 1.16.12+vmware.1-tkg.1.da7afe7 130d
v1.16.8---vmware.1-tkg.3.60d2ffd 1.16.8+vmware.1-tkg.3.60d2ffd 130d
v1.17.7---vmware.1-tkg.1.154236c 1.17.7+vmware.1-tkg.1.154236c 130d
v1.17.8---vmware.1-tkg.1.5417466 1.17.8+vmware.1-tkg.1.5417466 130d
And this is the after:
kubectl get tkr
NAME VERSION CREATED
v1.16.12---vmware.1-tkg.1.da7afe7 1.16.12+vmware.1-tkg.1.da7afe7 130d
v1.16.14---vmware.1-tkg.1.ada4837 1.16.14+vmware.1-tkg.1.ada4837 2m4s
v1.16.8---vmware.1-tkg.3.60d2ffd 1.16.8+vmware.1-tkg.3.60d2ffd 130d
v1.17.11---vmware.1-tkg.1.15f1e18 1.17.11+vmware.1-tkg.1.15f1e18 2m4s
v1.17.11---vmware.1-tkg.2.ad3d374 1.17.11+vmware.1-tkg.2.ad3d374 2m4s
v1.17.13---vmware.1-tkg.2.2c133ed 1.17.13+vmware.1-tkg.2.2c133ed 2m4s
v1.17.7---vmware.1-tkg.1.154236c 1.17.7+vmware.1-tkg.1.154236c 130d
v1.17.8---vmware.1-tkg.1.5417466 1.17.8+vmware.1-tkg.1.5417466 130d
v1.18.10---vmware.1-tkg.1.3a6cd48 1.18.10+vmware.1-tkg.1.3a6cd48 2m4s
v1.18.5---vmware.1-tkg.1.c40d30d 1.18.5+vmware.1-tkg.1.c40d30d 2m4s





Once the new tkg-upgrade cluster finishes deploying, it shows up alongside the original cluster in the supervisor cluster:
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
tkg-upgrade 1 1 v1.17.7+vmware.1-tkg.1.154236c 14m running
To get access to the new cluster, we log in again with kubectl vsphere login, passing in the new cluster’s name:
kubectl vsphere login --server=192.168.221.2 -u administrator@vsphere.local --tanzu-kubernetes-cluster-name tkg-upgrade --tanzu-kubernetes-cluster-namespace tkg
Password:
Logged in successfully.
You have access to the following contexts:
192.168.221.2
tkg
tkg-cluster
tkg-upgrade
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
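If the login doesn’t leave you on the new cluster’s context, you can switch to it explicitly using the name from the list above:
kubectl config use-context tkg-upgrade
With the right context selected, the nodes reflect the configuration chosen during deployment: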
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg-upgrade-control-plane-rjbmj Ready master 13m v1.17.7+vmware.1
tkg-upgrade-workers-zk7jf-66694545c5-dgbbw Ready <none> 6m58s v1.17.7+vmware.1
You’ll notice on the new cluster’s page in TMC that the version value has an information dot next to it, indicating that an upgrade is available.
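If you prefer to check for this from the command line, you can compare the releases synced into the supervisor cluster against what the cluster is currently running. A quick sketch (the field layout comes from the v1alpha1 TanzuKubernetesCluster schema, so adjust the grep if your version differs):
# Releases available for new clusters and upgrades
kubectl get tkr
# The distribution the tkg-upgrade cluster is currently pinned to
kubectl get tkc tkg-upgrade -o yaml | grep -A 2 "distribution:"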
Upgrade a cluster via the TMC UI
With the cluster created and healthy, we can move on to upgrading it. You’ll find the Upgrade option under the Actions menu.

On the next page, you’ll be able to choose from available Kubernetes versions that are higher than the current version and present in the Content Library. I’m choosing 1.18.10 as it’s the highest available. Click the Upgrade button when ready to proceed.

While the control plane node is being upgraded, the cluster status will be Unhealthy in TMC, but you do get a notice that an upgrade is in progress:

Back in the vSphere Client we can see a new VM being deployed to replace the existing control plane node:

And we can see it showing up under the tkg-upgrade cluster:

From the supervisor cluster, we can see that the cluster version is already updated and that the cluster phase value is updating:
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
tkg-upgrade 1 1 v1.18.10+vmware.1-tkg.1.3a6cd48 27m updating
And from within the cluster we can see that a new control plane node is being provisioned:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg-upgrade-control-plane-rjbmj Ready master 13m v1.17.7+vmware.1
tkg-upgrade-workers-zk7jf-66694545c5-dgbbw Ready <none> 6m58s v1.17.7+vmware.1
tkg-upgrade-control-plane-t5p46 NotReady <none> 0s v1.18.10+vmware.1
Once the new control plane node is functional, the old one is removed:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg-upgrade-control-plane-t5p46 Ready master 9m33s v1.18.10+vmware.1
tkg-upgrade-workers-zk7jf-66694545c5-dgbbw Ready <none> 21m v1.17.7+vmware.1
The same process repeats for the worker node. It’s worth noting that even after the new worker node was up and functional, the old one stuck around in a NotReady,SchedulingDisabled status for several minutes:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg-upgrade-control-plane-t5p46 Ready master 24m v1.18.10+vmware.1
tkg-upgrade-workers-zk7jf-6487d9d8b-fz849 Ready <none> 10m v1.18.10+vmware.1
tkg-upgrade-workers-zk7jf-66694545c5-dgbbw NotReady,SchedulingDisabled <none> 36m v1.17.7+vmware.1
Ultimately, the old node was removed and the cluster was fully upgraded:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg-upgrade-control-plane-t5p46 Ready master 17m v1.18.10+vmware.1
tkg-upgrade-workers-zk7jf-6487d9d8b-fz849 Ready <none> 2m41s v1.18.10+vmware.1
And from the supervisor cluster we can see that the cluster phase is now running.
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
tkg-upgrade 1 1 v1.18.10+vmware.1-tkg.1.3a6cd48 55m running
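Rather than re-running kubectl get commands by hand while all of this happens, you can watch the rollout as it progresses; a minimal sketch using standard kubectl watch flags:
# From within the Tanzu Kubernetes cluster: watch old nodes drain and new ones join
kubectl get nodes -w
# From the supervisor cluster: watch the cluster phase move from updating back to running
kubectl get tkc -w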
And back in the TMC UI, the Version is updated and the status is Healthy.

Create a new cluster using the tmc CLI
Doing the same thing from the command line was much easier than I would have expected and lends itself well to automating your entire Tanzu Kubernetes environment.
I did need the UI up-front for validating the management cluster and provisioner names in TMC, as well as for generating an API token (see Generate API Tokens for more on this). You can see the management cluster and provisioner names from the Administration -> Management Cluster -> <cluster name> page:
With the needed information at hand, we can use the tmc login command to get access to TMC and the vSphere with Tanzu supervisor cluster specifically:
tmc login
i If you don't have an API token, visit the VMware Cloud Services console, select your organization, and create an API token with the TMC service roles:
https://console.cloud.vmware.com/csp/gateway/portal/#/user/tokens
? API Token ****************************************************************
? Login context name newcluster
? Select default log level info
? Management Cluster Name deletens
? Provisioner Name tkg
√ Successfully created context newcluster, to manage your contexts run `tmc system context -h`
Now that we’re logged in, we can use the tmc cluster options list command to obtain the available storage classes, virtual machine classes, and virtual machine images:
tmc cluster options list
storageClasses:
- name: k8s-policy
virtualMachineClasses:
- cpuCores: "8"
memoryGb: "64"
name: best-effort-2xlarge
- cpuCores: "16"
memoryGb: "128"
name: best-effort-4xlarge
- cpuCores: "32"
memoryGb: "128"
name: best-effort-8xlarge
- cpuCores: "4"
memoryGb: "16"
name: best-effort-large
- cpuCores: "2"
memoryGb: "8"
name: best-effort-medium
- cpuCores: "2"
memoryGb: "4"
name: best-effort-small
- cpuCores: "4"
memoryGb: "32"
name: best-effort-xlarge
- cpuCores: "2"
memoryGb: "2"
name: best-effort-xsmall
- cpuCores: "8"
memoryGb: "64"
name: guaranteed-2xlarge
- cpuCores: "16"
memoryGb: "128"
name: guaranteed-4xlarge
- cpuCores: "32"
memoryGb: "128"
name: guaranteed-8xlarge
- cpuCores: "4"
memoryGb: "16"
name: guaranteed-large
- cpuCores: "2"
memoryGb: "8"
name: guaranteed-medium
- cpuCores: "2"
memoryGb: "4"
name: guaranteed-small
- cpuCores: "4"
memoryGb: "32"
name: guaranteed-xlarge
- cpuCores: "2"
memoryGb: "2"
name: guaranteed-xsmall
virtualMachineImages:
- name: v1.18.10+vmware.1-tkg.1.3a6cd48
- name: v1.18.5+vmware.1-tkg.1.c40d30d
- name: v1.17.13+vmware.1-tkg.2.2c133ed
- name: v1.17.11+vmware.1-tkg.2.ad3d374
- name: v1.17.11+vmware.1-tkg.1.15f1e18
- name: v1.17.8+vmware.1-tkg.1.5417466
- name: v1.17.7+vmware.1-tkg.1.154236c
- name: v1.16.14+vmware.1-tkg.1.ada4837
- name: v1.16.12+vmware.1-tkg.1.da7afe7
- name: v1.16.8+vmware.1-tkg.3.60d2ffd
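As an aside, since this output is plain YAML, it’s easy to pull out just the pieces you need when scripting. For example, a quick sketch that lists only the image versions (this assumes yq v4 is installed; it isn’t part of the tmc tooling):
# List only the available Kubernetes image versions
tmc cluster options list | yq eval '.virtualMachineImages[].name' -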
It’s finally time to issue a tmc cluster create command. You can run tmc cluster create -t tkgs --help to get a better idea of the available parameters. Many of them can be left at their defaults, but I wanted this cluster to look exactly like the one I created via the UI, so I specified nearly all of the values:
tmc cluster create -t tkgs --allowed-storage-classes k8s-policy --version v1.17.7+vmware.1-tkg.1.154236c --storage-class k8s-policy --instance-type best-effort-small --worker-instance-type best-effort-small --worker-node-count 1 --cluster-group clittle --name tkg-cli-upgrade --pods-cidr-blocks "172.20.0.0/16" --service-cidr-blocks "10.96.0.0/16"
i using template "tkgs"
√ cluster "tkg-cli-upgrade" is being created
We can see the exact same type of activity in the vSphere Client as we saw with the UI-based deployment:

And the cluster is already visible in the TMC UI:
From the supervisor cluster, the new cluster is present and its phase is creating:
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cli-upgrade 1 1 v1.17.7+vmware.1-tkg.1.154236c 6m31s creating
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
And via the tmc CLI, we can see the new cluster alongside the original one:
tmc cluster list
NAME MANAGEMENTCLUSTER PROVISIONER LABELS
tkg-cli-upgrade deletens tkg tmc.cloud.vmware.com/creator:clittle
tkg-cluster deletens tkg tmc.cloud.vmware.com/creator:clittle
Once the new cluster is created, we’ll need to issue the kubectl vsphere login command again, passing in the new Tanzu Kubernetes cluster name, so that we can get access to it:
kubectl vsphere login --server=192.168.221.2 -u administrator@vsphere.local --tanzu-kubernetes-cluster-name tkg-cli-upgrade --tanzu-kubernetes-cluster-namespace tkg
Password:
Logged in successfully.
You have access to the following contexts:
192.168.221.2
tkg
tkg-cli-upgrade
tkg-cluster
tkg-upgrade
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
And from within the new cluster, we can see the node configuration we specified during deployment:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg-cli-upgrade-control-plane-bg4mf Ready master 7m9s v1.17.7+vmware.1
tkg-cli-upgrade-workers-wtz79-5944c56776-x9r8k Ready <none> 29s v1.17.7+vmware.1
And from the supervisor cluster, the new cluster’s phase is now running:
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cli-upgrade 1 1 v1.17.7+vmware.1-tkg.1.154236c 17m running
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
The cluster looks great from the TMC UI as well:
Upgrade a cluster using the tmc CLI
In order to upgrade the new cluster, we’ll need to get some detailed information about it via the tmc cluster get command. This data will be used to construct a new cluster specification file that can be passed to the tmc cluster update -f command in order to facilitate the upgrade.
tmc cluster get tkg-cli-upgrade
fullName:
managementClusterName: deletens
name: tkg-cli-upgrade
orgId: e6f8b4af-faa2-4b55-8403-97d2d6b19341
provisionerName: tkg
meta:
annotations:
authoritativeRID: rid:c:e6f8b4af-faa2-4b55-8403-97d2d6b19341:deletens:tkg:tkg-cli-upgrade
creationTime: "2021-02-01T20:01:09.692064Z"
labels:
tmc.cloud.vmware.com/creator: clittle
parentReferences:
- rid: rid:cg:e6f8b4af-faa2-4b55-8403-97d2d6b19341:clittle
uid: cg:01DSKSB7KB6X4TKBT3ZCJGF4WT
- rid: rid:prvn:e6f8b4af-faa2-4b55-8403-97d2d6b19341:deletens:tkg
uid: prvn:01EXFC49APKBAXK3NGT2PZPR6X
resourceVersion: 48842:573979
uid: c:01EXFJYHHWRSM88JKA5PT860SH
updateTime: "2021-02-01T20:21:36Z"
spec:
clusterGroupName: clittle
tkgServiceVsphere:
distribution:
version: v1.17.7+vmware.1-tkg.1.154236c
settings:
network:
pods:
cidrBlocks:
- 172.20.0.0/16
services:
cidrBlocks:
- 10.96.0.0/16
storage:
classes:
- k8s-policy
topology:
controlPlane:
class: best-effort-small
storageClass: k8s-policy
status:
allocatedCpu:
allocatable: 4000
allocatedPercentage: 72
requested: 2880
units: millicores
allocatedMemory:
allocatable: 7695
allocatedPercentage: 26
requested: 2042
units: mb
conditions:
Agent-READY:
message: cluster is connected to TMC and healthy
reason: 'phase: COMPLETE, health: HEALTHY'
severity: INFO
status: "TRUE"
type: READY
WCM-Ready:
severity: INFO
status: "TRUE"
type: Ready
WCM-VersionIsLatest:
message: Version v1.18.10+vmware.1-tkg.1.3a6cd48 is available for upgrade
severity: INFO
status: "FALSE"
type: VersionIsLatest
health: HEALTHY
healthDetails:
controllerManagerHealth:
health: HEALTHY
message: Healthy
name: controller-manager
etcdHealth:
- health: HEALTHY
message: Healthy
name: etcd-0
message: Cluster is healthy
schedulerHealth:
health: HEALTHY
message: Healthy
name: scheduler
timestamp: "2021-02-01T20:20:02.089798Z"
infrastructureProvider: VMWARE_VSPHERE
kubeServerVersion: v1.17.7+vmware.1
kubernetesProvider:
type: VMWARE_TANZU_KUBERNETES_GRID_SERVICE
nodeCount: "2"
phase: READY
type: PROVISIONED
type:
kind: Cluster
package: vmware.tanzu.manage.v1alpha1.cluster
version: v1alpha1
As you might imagine, we don’t need all of this in our specification file. The entire status: section can be removed, along with the updateTime value from the meta: section. All that remains is to update the version value in the spec: -> tkgServiceVsphere: -> distribution: section, and our specification file will look like the following (a scripted way to make the same edits is sketched after the file):
fullName:
managementClusterName: deletens
name: tkg-cli-upgrade
orgId: e6f8b4af-faa2-4b55-8403-97d2d6b19341
provisionerName: tkg
meta:
annotations:
authoritativeRID: rid:c:e6f8b4af-faa2-4b55-8403-97d2d6b19341:deletens:tkg:tkg-cli-upgrade
creationTime: "2021-02-01T20:01:09.692064Z"
labels:
tmc.cloud.vmware.com/creator: clittle
parentReferences:
- rid: rid:cg:e6f8b4af-faa2-4b55-8403-97d2d6b19341:clittle
uid: cg:01DSKSB7KB6X4TKBT3ZCJGF4WT
- rid: rid:prvn:e6f8b4af-faa2-4b55-8403-97d2d6b19341:deletens:tkg
uid: prvn:01EXFC49APKBAXK3NGT2PZPR6X
resourceVersion: 48842:573979
uid: c:01EXFJYHHWRSM88JKA5PT860SH
spec:
clusterGroupName: clittle
tkgServiceVsphere:
distribution:
version: v1.18.10+vmware.1-tkg.1.3a6cd48
settings:
network:
pods:
cidrBlocks:
- 172.20.0.0/16
services:
cidrBlocks:
- 10.96.0.0/16
storage:
classes:
- k8s-policy
topology:
controlPlane:
class: best-effort-small
storageClass: k8s-policy
type:
kind: Cluster
package: vmware.tanzu.manage.v1alpha1.cluster
version: v1alpha1
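If you’d rather script these edits than make them by hand, the same file can be produced directly from the live object. This is just a sketch and assumes yq v4 is available (it isn’t part of the tmc tooling):
# Pull the current spec, strip the status and updateTime fields, and bump the version
tmc cluster get tkg-cli-upgrade \
  | yq eval 'del(.status) | del(.meta.updateTime) | .spec.tkgServiceVsphere.distribution.version = "v1.18.10+vmware.1-tkg.1.3a6cd48"' - \
  > tmc-cli-upgrade.yaml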
With this file ready to go (named tmc-cli-upgrade.yaml), we can use the tmc command to kick off the upgrade:
tmc cluster update tkg-cli-upgrade -f tmc-cli-upgrade.yaml
√ cluster "tkg-cli-upgrade" updated successfully
And once again, we’ll see new nodes getting created in the vSphere Client to replace the old ones:

The phase of the cluster is updating:
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cli-upgrade 1 1 v1.18.10+vmware.1-tkg.1.3a6cd48 37m updating
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
The version is already updated and the phase is UPGRADING when checked with the tmc command:
tmc cluster get tkg-cli-upgrade |egrep "version|phase" |egrep -v "message|alpha|reason"
version: v1.18.10+vmware.1-tkg.1.3a6cd48
phase: UPGRADING
And once the cluster is updated we can see that the nodes are at the 1.18.10 version:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tkg-cli-upgrade-control-plane-5rw6z Ready master 16m v1.18.10+vmware.1
tkg-cli-upgrade-workers-wtz79-6cfdf59c57-krb9s Ready <none> 2m51s v1.18.10+vmware.1
The cluster version is updated and the cluster’s phase is now running:
kubectl get tkc
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cli-upgrade 1 1 v1.18.10+vmware.1-tkg.1.3a6cd48 59m running
tkg-cluster 1 1 v1.17.8+vmware.1-tkg.1.5417466 130d running
The tmc command shows the cluster’s phase as READY:
tmc cluster get tkg-cli-upgrade |egrep "version|phase" |egrep -v "message|alpha|reason"
version: v1.18.10+vmware.1-tkg.1.3a6cd48
phase: READY
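If you’re automating the whole upgrade, you can poll for that READY phase before moving on to the next step; a rough sketch using only the command shown above:
# Wait until TMC reports the cluster phase as READY again (adjust the interval to taste)
until tmc cluster get tkg-cli-upgrade | grep -q "phase: READY"; do
  sleep 30
done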
There are loads of things you can do with the tmc command and I highly recommend reading up on the options at VMware Tanzu Mission Control CLI. Time permitting, I’ll have more posts about some of the other ways to take advantage of this powerful tool.