In my earlier post, Supervisor Cluster Namespaces and Workloads, you saw that you can run workloads in the supervisor cluster, but there are some limitations. The supervisor cluster is very opinionated and you might find it difficult to get everything you want working there. With this in mind, you might find it more useful to create a Tanzu Kubernetes cluster (TKC) under your supervisor cluster. A TKC functions as if it were a generic Kubernetes cluster but is instantiated and maintained by Cluster API resources running in the supervisor cluster.
Create a Supervisor Namespace
To create your first Tanzu Kubernetes Cluster (TKC), you will need to create a new namespace for it. This is the same as was done previously. You will also need to assign privileges to at least one user and assign a storage policy.

You can see that this namespace will be “tkg2-cluster-namespace”.

administrator@vsphere.local and vmwadmins@corp.vmw (group) have been given the owner role in the namespace.

The k8s-policy storage policy is assigned to the namespace.
You will also need to add one or more VM Classes and a Content Library.
Workload Management > Namespaces > tkg2-cluster-namespace > Summary > VM Service > Add VM Class
Select all of the VM Classes and click the OK button.

Workload Management > Namespaces > tkg2-cluster-namespace > Summary > VM Service > Add Content Library
Select Kubernetes Service Content Library, click the OK button.


The namespace now shows as having available VM Classes and a content library assigned.
Inspect the namespace and related objects
You should see several new objects created in NSX:
Tier-1 Gateway:

Segment:

Load Balancer:

IP Address Pool and IP Block:


NAT Rules:

You should be able to re-login (via kubectl vsphere login) and see a new context corresponding to the new namespace:
kubectl vsphere login -u vmwadmin@corp.vmw --server=wcp.corp.vmw
Logged in successfully.
You have access to the following contexts:
test
tkg2-cluster-namespace
velero
wcp.corp.vmw
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
Switch to the new context and then check to make sure that you have the needed VM Class(es) and Tanzu Kubernetes Release(s) (TKRs) for creating a TKC.
kubectl config use-context tkg2-cluster-namespace
Switched to context "tkg2-cluster-namespace".
kubectl get virtualmachineclassbindings
NAME AGE
best-effort-2xlarge 13m
best-effort-4xlarge 13m
best-effort-8xlarge 13m
best-effort-large 13m
best-effort-medium 13m
best-effort-small 13m
best-effort-xlarge 13m
best-effort-xsmall 13m
guaranteed-2xlarge 13m
guaranteed-4xlarge 13m
guaranteed-8xlarge 13m
guaranteed-large 13m
guaranteed-medium 13m
guaranteed-small 13m
guaranteed-xlarge 13m
guaranteed-xsmall 13m
kubectl get tkr
NAME VERSION READY COMPATIBLE CREATED
v1.16.12---vmware.1-tkg.1.da7afe7 v1.16.12+vmware.1-tkg.1.da7afe7 False False 14d
v1.16.14---vmware.1-tkg.1.ada4837 v1.16.14+vmware.1-tkg.1.ada4837 False False 14d
v1.16.8---vmware.1-tkg.3.60d2ffd v1.16.8+vmware.1-tkg.3.60d2ffd False False 14d
v1.17.11---vmware.1-tkg.1.15f1e18 v1.17.11+vmware.1-tkg.1.15f1e18 False False 14d
v1.17.11---vmware.1-tkg.2.ad3d374 v1.17.11+vmware.1-tkg.2.ad3d374 False False 14d
v1.17.13---vmware.1-tkg.2.2c133ed v1.17.13+vmware.1-tkg.2.2c133ed False False 14d
v1.17.17---vmware.1-tkg.1.d44d45a v1.17.17+vmware.1-tkg.1.d44d45a False False 14d
v1.17.7---vmware.1-tkg.1.154236c v1.17.7+vmware.1-tkg.1.154236c False False 14d
v1.17.8---vmware.1-tkg.1.5417466 v1.17.8+vmware.1-tkg.1.5417466 False False 14d
v1.18.10---vmware.1-tkg.1.3a6cd48 v1.18.10+vmware.1-tkg.1.3a6cd48 False False 14d
v1.18.15---vmware.1-tkg.1.600e412 v1.18.15+vmware.1-tkg.1.600e412 False False 14d
v1.18.15---vmware.1-tkg.2.ebf6117 v1.18.15+vmware.1-tkg.2.ebf6117 False False 14d
v1.18.19---vmware.1-tkg.1.17af790 v1.18.19+vmware.1-tkg.1.17af790 False False 14d
v1.18.5---vmware.1-tkg.1.c40d30d v1.18.5+vmware.1-tkg.1.c40d30d False False 14d
v1.19.11---vmware.1-tkg.1.9d9b236 v1.19.11+vmware.1-tkg.1.9d9b236 False False 14d
v1.19.14---vmware.1-tkg.1.8753786 v1.19.14+vmware.1-tkg.1.8753786 False False 14d
v1.19.16---vmware.1-tkg.1.df910e2 v1.19.16+vmware.1-tkg.1.df910e2 False False 14d
v1.19.7---vmware.1-tkg.1.fc82c41 v1.19.7+vmware.1-tkg.1.fc82c41 False False 14d
v1.19.7---vmware.1-tkg.2.f52f85a v1.19.7+vmware.1-tkg.2.f52f85a False False 14d
v1.20.12---vmware.1-tkg.1.b9a42f3 v1.20.12+vmware.1-tkg.1.b9a42f3 False False 14d
v1.20.2---vmware.1-tkg.1.1d4f79a v1.20.2+vmware.1-tkg.1.1d4f79a False False 14d
v1.20.2---vmware.1-tkg.2.3e10706 v1.20.2+vmware.1-tkg.2.3e10706 False False 14d
v1.20.7---vmware.1-tkg.1.7fb9067 v1.20.7+vmware.1-tkg.1.7fb9067 False False 14d
v1.20.8---vmware.1-tkg.2 v1.20.8+vmware.1-tkg.2 False False 14d
v1.20.9---vmware.1-tkg.1.a4cee5b v1.20.9+vmware.1-tkg.1.a4cee5b False False 14d
v1.21.2---vmware.1-tkg.1.ee25d55 v1.21.2+vmware.1-tkg.1.ee25d55 True True 14d
v1.21.6---vmware.1-tkg.1 v1.21.6+vmware.1-tkg.1 True True 14d
v1.21.6---vmware.1-tkg.1.b3d708a v1.21.6+vmware.1-tkg.1.b3d708a True True 14d
v1.22.9---vmware.1-tkg.1.cc71bc8 v1.22.9+vmware.1-tkg.1.cc71bc8 True True 14d
v1.23.8---vmware.2-tkg.2-zshippable v1.23.8+vmware.2-tkg.2-zshippable True True 14d
v1.23.8---vmware.3-tkg.1 v1.23.8+vmware.3-tkg.1 True True 14d
These TKRs determine not only the Kubernetes version that will be deployed but also the node OS, Ubuntu or Photon OS. You can take a closer look at any of them to see the details.
kubectl get tkr v1.23.8---vmware.2-tkg.2-zshippable -o jsonpath='{"Node OS: "}{.metadata.labels.os-name}{"\n"}{"K8s Version: "}{.spec.kubernetes.version}{"\n"}'
Node OS: ubuntu
K8s Version: v1.23.8+vmware.2
kubectl get tkr v1.23.8---vmware.3-tkg.1 -o jsonpath='{"Node OS: "}{.metadata.labels.os-name}{"\n"}{"K8s Version: "}{.spec.kubernetes.version}{"\n"}'
Node OS: photon
K8s Version: v1.23.8+vmware.3
Or if you want a quick one-liner to see what all of the various TKRs are:
kubectl get tkr -Ao jsonpath='{range .items[*]}{@.metadata.name}{"\n"}{"Node OS: "}{@.metadata.labels.os-name}{"\n"}{"K8s Version: "}{@.spec.kubernetes.version}{"\n\n"}{end}'
v1.16.12---vmware.1-tkg.1.da7afe7
Node OS: photon
K8s Version: v1.16.12+vmware.1
v1.16.14---vmware.1-tkg.1.ada4837
Node OS: photon
K8s Version: v1.16.14+vmware.1
v1.16.8---vmware.1-tkg.3.60d2ffd
Node OS: photon
K8s Version: v1.16.8+vmware.1
v1.17.11---vmware.1-tkg.1.15f1e18
Node OS: photon
K8s Version: v1.17.11+vmware.1
v1.17.11---vmware.1-tkg.2.ad3d374
Node OS: photon
K8s Version: v1.17.11+vmware.1
v1.17.13---vmware.1-tkg.2.2c133ed
Node OS: photon
K8s Version: v1.17.13+vmware.1
v1.17.17---vmware.1-tkg.1.d44d45a
Node OS: photon
K8s Version: v1.17.17+vmware.1
v1.17.7---vmware.1-tkg.1.154236c
Node OS: photon
K8s Version: v1.17.7+vmware.1
v1.17.8---vmware.1-tkg.1.5417466
Node OS: photon
K8s Version: v1.17.8+vmware.1
v1.18.10---vmware.1-tkg.1.3a6cd48
Node OS: photon
K8s Version: v1.18.10+vmware.1
v1.18.15---vmware.1-tkg.1.600e412
Node OS: photon
K8s Version: v1.18.15+vmware.1
v1.18.15---vmware.1-tkg.2.ebf6117
Node OS: photon
K8s Version: v1.18.15+vmware.1
v1.18.19---vmware.1-tkg.1.17af790
Node OS: photon
K8s Version: v1.18.19+vmware.1
v1.18.5---vmware.1-tkg.1.c40d30d
Node OS: photon
K8s Version: v1.18.5+vmware.1
v1.19.11---vmware.1-tkg.1.9d9b236
Node OS: photon
K8s Version: v1.19.11+vmware.1
v1.19.14---vmware.1-tkg.1.8753786
Node OS: photon
K8s Version: v1.19.14+vmware.1
v1.19.16---vmware.1-tkg.1.df910e2
Node OS: photon
K8s Version: v1.19.16+vmware.1
v1.19.7---vmware.1-tkg.1.fc82c41
Node OS: photon
K8s Version: v1.19.7+vmware.1
v1.19.7---vmware.1-tkg.2.f52f85a
Node OS: photon
K8s Version: v1.19.7+vmware.1
v1.20.12---vmware.1-tkg.1.b9a42f3
Node OS: photon
K8s Version: v1.20.12+vmware.1
v1.20.2---vmware.1-tkg.1.1d4f79a
Node OS: photon
K8s Version: v1.20.2+vmware.1
v1.20.2---vmware.1-tkg.2.3e10706
Node OS: photon
K8s Version: v1.20.2+vmware.1
v1.20.7---vmware.1-tkg.1.7fb9067
Node OS: photon
K8s Version: v1.20.7+vmware.1
v1.20.8---vmware.1-tkg.2
Node OS: ubuntu
K8s Version: v1.20.8+vmware.1
v1.20.9---vmware.1-tkg.1.a4cee5b
Node OS: photon
K8s Version: v1.20.9+vmware.1
v1.21.2---vmware.1-tkg.1.ee25d55
Node OS: photon
K8s Version: v1.21.2+vmware.1
v1.21.6---vmware.1-tkg.1
Node OS: ubuntu
K8s Version: v1.21.6+vmware.1
v1.21.6---vmware.1-tkg.1.b3d708a
Node OS: photon
K8s Version: v1.21.6+vmware.1
v1.22.9---vmware.1-tkg.1
Node OS: ubuntu
K8s Version: v1.22.9+vmware.1
v1.22.9---vmware.1-tkg.1.cc71bc8
Node OS: photon
K8s Version: v1.22.9+vmware.1
v1.23.8---vmware.2-tkg.2-zshippable
Node OS: ubuntu
K8s Version: v1.23.8+vmware.2
v1.23.8---vmware.3-tkg.1
Node OS: photon
K8s Version: v1.23.8+vmware.3
You can also take a closer look at the namespace itself and see some of the NSX/NCP and vSphere information:
kubectl describe namespace tkg2-cluster-namespace
Name: tkg2-cluster-namespace
Labels: kubernetes.io/metadata.name=tkg2-cluster-namespace
vSphereClusterID=domain-c1006
Annotations: ls_id-0: 11d464bb-a138-461f-adbe-498a5334e2db
ncp/extpoolid: domain-c1006:aa896728-b4f0-407f-b25b-45eaff3f1f44-ippool-10-40-14-129-10-40-14-190
ncp/router_id: t1_a5ed6fd7-8b00-42d6-b08e-c3013a8dc452_rtr
ncp/snat_ip: 10.40.14.133
ncp/subnet-0: 10.244.0.80/28
vmware-system-resource-pool: resgroup-10026
vmware-system-vm-folder: group-v10027
Status: Active
Resource Quotas
Name: tkg2-cluster-namespace-storagequota
Resource Used Hard
-------- --- ---
k8s-policy.storageclass.storage.k8s.io/requests.storage 0 9223372036854775807
No LimitRange resource.
vSphereClusterID=domain-c1006
– This is the MOID for the RegionA01-Compute cluster. You can see this if you select the cluster and look in the URL:

ncp/extpoolid: domain-c1006:aa896728-b4f0-407f-b25b-45eaff3f1f44-ippool-10-40-14-129-10-40-14-190
– This is the NSX IP Address Pool used for egress traffic.

ncp/router_id: t1_a5ed6fd7-8b00-42d6-b08e-c3013a8dc452_rtr
– This is the ID for the Tier-1 Gateway created for this namespace. You can see this value in the list of tags on the Tier-1 Gateway.

ncp/snat_ip: 10.40.14.133
– This is the IP address taken from the egress pool that will be used by all outgoing traffic from the namespace. You can also see this IP address used in the NSX NAT rules.
ncp/subnet-0: 10.244.0.80/28
– This is the range of IP addresses that will be used for vSphere Pods or TKC Nodes in the namespace.
vmware-system-resource-pool: resgroup-10026
– This is the MOID for the tkg2-cluster-namespace namespace. Similar to the RegionA01-Compute cluster, you can see this value by clicking on the namespace and looking at the URL.

vmware-system-vm-folder: group-v10027
– This is the MOID of the tkg2-cluster-namespace folder in the VMs and Templates view. Unfortunately, you can’t see this in the vSphere UI, but you can see it in the vCenter MOB (https://<VCSA FQDN/IP>/mob).
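If you would rather not sift through the full describe output, you can also pull all of these annotations (including the folder and resource pool MOIDs) with a quick jsonpath query:
kubectl get namespace tkg2-cluster-namespace -o jsonpath='{.metadata.annotations}{"\n"}'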

Create a TKC definition
Like most things in Kubernetes, creating a TKC is a declarative process. A cluster specification file must be created that defines how the cluster will be built.
The following is an example of a very basic TKC specification:
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkg2-cluster-1
  namespace: tkg2-cluster-namespace
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small
      storageClass: k8s-policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable
    nodePools:
    - name: tkg2-cluster-1-nodepool-1
      replicas: 2
      vmClass: best-effort-medium
      storageClass: k8s-policy
The main takeaways from this are:
- metadata:name – an arbitrary (DNS-compliant) name for the cluster
- namespace – the supervisor namespace where the cluster will be created
- replicas (controlPlane or nodePools) – the number of nodes to create
- vmClass (controlPlane or nodePools) – the compute sizing for the nodes being created. Note: You can see the specifics of the various VM Classes in the vSphere Client at Workload Management, Services, Manage (on the VM Service tile), VM Classes.
- storageClass – the storage class to be used within the supervisor namespace for creating the nodes. Note: If you're not sure what to use here, you can run kubectl get sc from the supervisor namespace context to see what's available (an example follows this list).
- tkr – this determines which Kubernetes version you'll be deploying in your TKC; this example will be 1.23.8 and use Ubuntu Linux
- nodePools:name – an arbitrary (DNS-compliant) name for the logical grouping of worker nodes
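For example, from the tkg2-cluster-namespace context you can list the storage classes available for the cluster nodes; in this environment the list should include the k8s-policy storage class assigned to the namespace earlier:
kubectl get sc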
You can see many more example TKC definitions at Using the TanzuKubernetesCluster v1alpha3 API. You can specify unique networking and storage parameters or make other customizations.
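As a rough sketch of the kind of customization described there, a settings block can be added under spec: alongside topology:. Treat the following as an illustration rather than a requirement: the field layout follows the v1alpha3 settings section in that reference, antrea is the CNI already in use in this environment, and the CIDR values simply mirror the defaults that show up later in the kubectl describe cluster output.
  settings:
    storage:
      defaultClass: k8s-policy
    network:
      cni:
        name: antrea                    # CNI already used by TKCs in this environment
      services:
        cidrBlocks: ["10.96.0.0/12"]    # mirrors the default service CIDR
      pods:
        cidrBlocks: ["192.168.0.0/16"]  # mirrors the default pod CIDR
      serviceDomain: cluster.local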
Create the Tanzu Kubernetes Cluster
With a definition file created, you can use kubectl to create the cluster in the supervisor namespace:
kubectl apply -f tkg2-cluster-1.yaml
tanzukubernetescluster.run.tanzu.vmware.com/tkg2-cluster-1 created
You can watch the events in the supervisor namespace to see what is happening and if there are any issues. It’s very normal to see some errors near the beginning as various checks are being run before the cluster is fully instantiated.
kubectl get events -w
LAST SEEN TYPE REASON OBJECT MESSAGE
0s Normal Issuing certificate/tkg2-cluster-1-extensions-ca Issuing certificate as Secret does not exist
0s Normal Generated certificate/tkg2-cluster-1-extensions-ca Stored new private key in temporary Secret resource "tkg2-cluster-1-extensions-ca-jt4mw"
0s Normal Requested certificate/tkg2-cluster-1-extensions-ca Created new CertificateRequest resource "tkg2-cluster-1-extensions-ca-2sjvt"
0s Normal cert-manager.io certificaterequest/tkg2-cluster-1-extensions-ca-2sjvt Certificate request has been approved by cert-manager.io
0s Normal CertificateIssued certificaterequest/tkg2-cluster-1-extensions-ca-2sjvt Certificate fetched from issuer successfully
0s Normal Issuing certificate/tkg2-cluster-1-extensions-ca The certificate has been successfully issued
0s Normal KeyPairVerified issuer/tkg2-cluster-1-extensions-ca-issuer Signing CA verified
0s Normal KeyPairVerified issuer/tkg2-cluster-1-extensions-ca-issuer Signing CA verified
0s Normal Issuing certificate/tkg2-cluster-1-auth-svc-cert Issuing certificate as Secret does not exist
0s Normal Generated certificate/tkg2-cluster-1-auth-svc-cert Stored new private key in temporary Secret resource "tkg2-cluster-1-auth-svc-cert-x5tgc"
0s Normal Requested certificate/tkg2-cluster-1-auth-svc-cert Created new CertificateRequest resource "tkg2-cluster-1-auth-svc-cert-jbzqt"
0s Normal cert-manager.io certificaterequest/tkg2-cluster-1-auth-svc-cert-jbzqt Certificate request has been approved by cert-manager.io
0s Normal CertificateIssued certificaterequest/tkg2-cluster-1-auth-svc-cert-jbzqt Certificate fetched from issuer successfully
0s Normal TopologyCreate cluster/tkg2-cluster-1 Created "VSphereCluster/tkg2-cluster-1-nfsdl"
0s Normal TopologyCreate cluster/tkg2-cluster-1 Created "VSphereMachineTemplate/tkg2-cluster-1-control-plane-j9gpd"
0s Normal Issuing certificate/tkg2-cluster-1-auth-svc-cert The certificate has been successfully issued
0s Normal TopologyCreate cluster/tkg2-cluster-1 Created "KubeadmControlPlane/tkg2-cluster-1-7qk5w"
0s Normal TopologyCreate machinehealthcheck/tkg2-cluster-1-7qk5w Created "/tkg2-cluster-1-7qk5w"
0s Warning ReconcileError machinehealthcheck/tkg2-cluster-1-7qk5w error creating client and cache for remote cluster: error fetching REST client config for remote cluster "tkg2-cluster-namespace/tkg2-cluster-1": failed to retrieve kubeconfig secret for Cluster tkg2-cluster-namespace/tkg2-cluster-1: secrets "tkg2-cluster-1-kubeconfig" not found
0s Warning ReconcileError machinehealthcheck/tkg2-cluster-1-7qk5w error creating client and cache for remote cluster: error fetching REST client config for remote cluster "tkg2-cluster-namespace/tkg2-cluster-1": failed to retrieve kubeconfig secret for Cluster tkg2-cluster-namespace/tkg2-cluster-1: secrets "tkg2-cluster-1-kubeconfig" not found
0s Warning ReconcileError machinehealthcheck/tkg2-cluster-1-7qk5w error creating client and cache for remote cluster: error fetching REST client config for remote cluster "tkg2-cluster-namespace/tkg2-cluster-1": failed to retrieve kubeconfig secret for Cluster tkg2-cluster-namespace/tkg2-cluster-1: secrets "tkg2-cluster-1-kubeconfig" not found
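Alongside the event stream, you can keep an eye on the higher-level Cluster API objects in the supervisor namespace while the cluster comes up:
kubectl get cluster,tanzukubernetescluster,machine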
In vCenter, you should see a new resource pool created and the control plane node being deployed.


Even before the cluster is fully created, you can take a look at its configuration.
kubectl get cluster
NAME PHASE AGE VERSION
tkg2-cluster-1 Provisioned 3m48s v1.23.8+vmware.2
kubectl describe cluster tkg2-cluster-1
Name: tkg2-cluster-1
Namespace: tkg2-cluster-namespace
Labels: cluster.x-k8s.io/cluster-name=tkg2-cluster-1
run.tanzu.vmware.com/tkr=v1.23.8---vmware.2-tkg.2-zshippable
topology.cluster.x-k8s.io/owned=
Annotations: run.tanzu.vmware.com/resolve-tkr: !run.tanzu.vmware.com/legacy-tkr
tkg.tanzu.vmware.com/skip-tls-verify:
tkg.tanzu.vmware.com/tkg-http-proxy:
tkg.tanzu.vmware.com/tkg-https-proxy:
tkg.tanzu.vmware.com/tkg-ip-family:
tkg.tanzu.vmware.com/tkg-no-proxy:
tkg.tanzu.vmware.com/tkg-proxy-ca-cert:
API Version: cluster.x-k8s.io/v1beta1
Kind: Cluster
Metadata:
Creation Timestamp: 2023-04-04T16:57:55Z
Finalizers:
cluster.cluster.x-k8s.io
Generation: 6
Managed Fields:
API Version: cluster.x-k8s.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:failureDomains:
.:
f:domain-c1006:
.:
f:controlPlane:
f:infrastructureReady:
f:observedGeneration:
f:phase:
Manager: manager
Operation: Update
Subresource: status
Time: 2023-04-04T16:58:05Z
API Version: cluster.x-k8s.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:run.tanzu.vmware.com/resolve-tkr:
f:tkg.tanzu.vmware.com/skip-tls-verify:
f:tkg.tanzu.vmware.com/tkg-http-proxy:
f:tkg.tanzu.vmware.com/tkg-https-proxy:
f:tkg.tanzu.vmware.com/tkg-ip-family:
f:tkg.tanzu.vmware.com/tkg-no-proxy:
f:tkg.tanzu.vmware.com/tkg-proxy-ca-cert:
f:finalizers:
.:
v:"cluster.cluster.x-k8s.io":
f:labels:
f:cluster.x-k8s.io/cluster-name:
f:topology.cluster.x-k8s.io/owned:
f:ownerReferences:
.:
k:{"uid":"97ece89e-3f82-4d19-b915-2acf46d7f0c4"}:
f:spec:
.:
f:clusterNetwork:
.:
f:pods:
.:
f:cidrBlocks:
f:serviceDomain:
f:services:
.:
f:cidrBlocks:
f:controlPlaneEndpoint:
.:
f:host:
f:port:
f:controlPlaneRef:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:infrastructureRef:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:topology:
.:
f:class:
f:controlPlane:
.:
f:metadata:
f:replicas:
f:variables:
f:version:
f:workers:
.:
f:machineDeployments:
Manager: manager
Operation: Update
Time: 2023-04-04T16:58:08Z
Owner References:
API Version: run.tanzu.vmware.com/v1alpha3
Block Owner Deletion: true
Controller: true
Kind: TanzuKubernetesCluster
Name: tkg2-cluster-1
UID: 97ece89e-3f82-4d19-b915-2acf46d7f0c4
Resource Version: 5653246
UID: dfcd16cd-50f8-45ec-a89d-906a13ae02e8
Spec:
Cluster Network:
Pods:
Cidr Blocks:
192.168.0.0/16
Service Domain: cluster.local
Services:
Cidr Blocks:
10.96.0.0/12
Control Plane Endpoint:
Host: 10.40.14.69
Port: 6443
Control Plane Ref:
API Version: controlplane.cluster.x-k8s.io/v1beta1
Kind: KubeadmControlPlane
Name: tkg2-cluster-1-7qk5w
Namespace: tkg2-cluster-namespace
Infrastructure Ref:
API Version: vmware.infrastructure.cluster.x-k8s.io/v1beta1
Kind: VSphereCluster
Name: tkg2-cluster-1-nfsdl
Namespace: tkg2-cluster-namespace
Topology:
Class: tanzukubernetescluster
Control Plane:
Metadata:
Replicas: 1
Variables:
Name: nodePoolVolumes
Value:
Name: TKR_DATA
Value:
v1.23.8+vmware.2:
Kubernetes Spec:
Coredns:
Image Tag: v1.8.6_vmware.7
Etcd:
Image Tag: v3.5.4_vmware.6
Image Repository: localhost:5000/vmware.io
Pause:
Image Tag: 3.6
Version: v1.23.8+vmware.2
Labels:
Image - Type: vmi
Os - Arch: amd64
Os - Name: photon
Os - Type: linux
Os - Version: 3
run.tanzu.vmware.com/os-image: ob-20611023-photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable
run.tanzu.vmware.com/tkr: v1.23.8---vmware.2-tkg.2-zshippable
Vmi - Name: ob-20611023-photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable
Os Image Ref:
Name: ob-20611023-photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable
Name: storageClasses
Value:
k8s-policy
Name: ntp
Value: 192.168.100.1
Name: storageClass
Value: k8s-policy
Name: trust
Value:
Additional Trusted C As:
Name: vmware-harbor-669319617-ssl
Name: extensionCert
Value:
Content Secret:
Key: tls.crt
Name: tkg2-cluster-1-extensions-ca
Name: clusterEncryptionConfigYaml
Value: LS0tCmFwaVZlcnNpb246IGFwaXNlcnZlci5jb25maWcuazhzLmlvL3YxCmtpbmQ6IEVuY3J5cHRpb25Db25maWd1cmF0aW9uCnJlc291cmNlczoKICAtIHJlc291cmNlczoKICAgIC0gc2VjcmV0cwogICAgcHJvdmlkZXJzOgogICAgLSBhZXNjYmM6CiAgICAgICAga2V5czoKICAgICAgICAtIG5hbWU6IGtleTEKICAgICAgICAgIHNlY3JldDogWnZuN3Q5bUZjbTNzRUFGL1ZnS2JtbmJhcGFhaGo5N1dtTE83N29hR3ptbz0KICAgIC0gaWRlbnRpdHk6IHt9Cg==
Name: nodePoolLabels
Value:
Name: defaultRegistrySecret
Value:
Data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUhLRENDQlJDZ0F3SUJBZ0lUSWdBQUFCYnNoeVRQaTY0YlpRQUFBQUFBRmpBTkJna3Foa2lHOXcwQkFRc0YKQURCTU1STXdFUVlLQ1pJbWlaUHlMR1FCR1JZRGRtMTNNUlF3RWdZS0NaSW1pWlB5TEdRQkdSWUVZMjl5Y0RFZgpNQjBHQTFVRUF4TVdZMjl1ZEhKdmJHTmxiblJsY2k1amIzSndMblp0ZHpBZUZ3MHlNekF5TVRZeE56UTFNek5hCkZ3MHpNekF5TVRNeE56UTFNek5hTUZReEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXkKYm1saE1SSXdFQVlEVlFRSEV3bFFZV3h2SUVGc2RHOHhEekFOQmdOVkJBb1RCbFpOZDJGeVpURUxNQWtHQTFVRQpBeE1DUTBFd2dnR2lNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJqd0F3Z2dHS0FvSUJnUUNHM2ptcFRwc0tkcmlpCi82OUN0UGpOczNoaVczdkhqL1dFRWw5OGR5VUNCL0lxZXE2ZVdIMk9uaHRUSHlVOXVLT1hXbFphUEo2VkRUOXIKVUszMlhDRTYzVmpNSFhBTWVDekJnY21GVEtncG1CeG5sQzEwYmx5RkFoeWwycXF0UTdJU0FYYTdzRElEQkNQTgpCd1JUenQxeE4yQ0gvSkpHYnlhNFZKVy9oazdWNEc4a09RL3JRYUpWbGloNVFVbStEQWUwdFgweW1hVms4d2JyCk56STZnaTAyYzkxczdFcUx3OGlnWkRpbmNzMStKMEY2bW1ydXpSbUh4c2s0NHhJaStiUUFCVVFRbzJBM29OcVMKeVM0UWZZdGFUcnRHQmFwcHZoMDNibHkzcVFGWHBWY3hyZ3N5UytsUGNaaEt0V1U4L2RRWFZOUnJxVllJQlB6egpiYlMzUWpPSk5nYVovbkp3WGNReXNnNWVnVzMwS1NjMXVld2UxMXlUZ0RpUkx4dzc2WC9zYldlVnFoVmdzWFdCCk03b0NONlpCOHphNEFRSGg4dXJtRTNmcUNKM3U1akpuY3p3QUI5YkZnNnRzKzVIZFBZVVU1WFNXazJZVW1pTUYKY1l3SzliYkhMZzVsZmFERkhSLzAwaVpINy9IUXFITW5aZUN6TlhGZG1ZYUpOYy9QeGU4Q0F3RUFBYU9DQW5rdwpnZ0oxTUIwR0ExVWREZ1FXQkJUdW1vb3IvYklVZFBGUFpKMm1iZVhDMGFpSU9EQXlCZ05WSFJFRUt6QXBnUTVsCmJXRnBiRUJoWTIxbExtTnZiWWNFd0todUZvSVJkbU56WVMwd01XRXVZMjl5Y0M1MmJYY3dId1lEVlIwakJCZ3cKRm9BVXhvUHpsbEFKYWVkbE1xMldqWVdCNEY4a0djY3dnZGNHQTFVZEh3U0J6ekNCekRDQnlhQ0J4cUNCdzRhQgp3R3hrWVhBNkx5OHZRMDQ5WTI5dWRISnZiR05sYm5SbGNpNWpiM0p3TG5adGR5eERUajFqYjI1MGNtOXNZMlZ1CmRHVnlMRU5PUFVORVVDeERUajFRZFdKc2FXTWxNakJMWlhrbE1qQlRaWEoyYVdObGN5eERUajFUWlhKMmFXTmwKY3l4RFRqMURiMjVtYVdkMWNtRjBhVzl1TEVSRFBXTnZjbkFzUkVNOWRtMTNQMk5sY25ScFptbGpZWFJsVW1WMgpiMk5oZEdsdmJreHBjM1EvWW1GelpUOXZZbXBsWTNSRGJHRnpjejFqVWt4RWFYTjBjbWxpZFhScGIyNVFiMmx1CmREQ0J4UVlJS3dZQkJRVUhBUUVFZ2Jnd2diVXdnYklHQ0NzR0FRVUZCekFDaG9HbGJHUmhjRG92THk5RFRqMWoKYjI1MGNtOXNZMlZ1ZEdWeUxtTnZjbkF1ZG0xM0xFTk9QVUZKUVN4RFRqMVFkV0pzYVdNbE1qQkxaWGtsTWpCVApaWEoyYVdObGN5eERUajFUWlhKMmFXTmxjeXhEVGoxRGIyNW1hV2QxY21GMGFXOXVMRVJEUFdOdmNuQXNSRU05CmRtMTNQMk5CUTJWeWRHbG1hV05oZEdVL1ltRnpaVDl2WW1wbFkzUkRiR0Z6Y3oxalpYSjBhV1pwWTJGMGFXOXUKUVhWMGFHOXlhWFI1TUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNRHdHQ1NzRwpBUVFCZ2pjVkJ3UXZNQzBHSlNzR0FRUUJnamNWQ0lIMmhsV0Jtc3drZ2IyUEk0R2ErVCtVdlZjOGc1ZkVib2ZCCnlCUUNBV1FDQVFJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dJQkFOSVRlM2d3UUxWREZDd3lkdmZrSkJ4cWFCMUsKSHpBaXFUQm1hUjZKdXhJU1JwSnAzbE11ZTZQcFBucVZVVEhEZW9FdUlGcldlRm1ha0tvR2VFaUlxaytIZFMvMApvRXVzaVBNakkzN2c0UVlXMDlaSllXelU1OFZ2dms1d3J1aGczcDV0amQrSStpUnM4b2N1ZUFmQ08wSGE3djdlCjg4MG1QSS9XSnc4Nzk3MWQrRENiZzRDMXNvdjNaQ3kvYWlEWWJXdFVVQWlXLzdESXZhclk1cDhoNlNIcXdRancKczdGOWhPbTEybXdybEhHNi9LL1pGU2JqdlFOQlNEQkZIUys0ZHJOdFhWYXNqN3lhV2FRckc2a0JWWE93RTRMQwpqRytYVEdXemQvUEhjWDlwU3J1Z1gvd1ErcG5aQU5va2FhNUZtT1pwbFJkdWEwYkNPOWthdk5MQ3lGakxobEhlCkhia0RHZmxISVExOUZtL3NaWmFkL1VqZHJKajFJSEJTSFJuTjRIdXVMUDFZbnp4L0J5eWF1T2xlMlU0Vm9tMmYKOVpibU5DMkpWTk1Pc2hWam05VTh0QUFEbXJkWmEvdlRNOThlNXNtN010SzExTzdXSDF2NU1xdkpUUitMb0VzaQpyVXlncjFRNUhtRmh4a3dTTFJ2OEIzQ2NRTFB6N1Q0aXdmL0pLeGZPZEc3bTVQa3B2SEhkV0JDYU4wL0UrVG9tCmNaM3BUTFpPSWlEV3VPNWkwTTNNVERIQU04anFFUG0zWTZQd0xncHlrcEJIa3JMak05ajBnUzlhUllrcnBnZGMKMzJqWUdmMFRtakYwUHJLRk92aDNPMTJIYUtqWlpYNklRNk9jYnRGVHgweHJ6eDI1YldsQzBhMDZlWnFrQTVsUgo3TmpVQUo1ZUZ3REZxOUo4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZjekNDQTF1Z0F3SUJBZ0lRVFlKSVRRM1NaNEJCUzlVelhmSkl1VEFOQmdrcWhraUc5dzBCQVFzRkFEQk0KTVJNd0VRWUtDWkltaVpQeUxHUUJHUllEZG0xM
01SUXdFZ1lLQ1pJbWlaUHlMR1FCR1JZRVkyOXljREVmTUIwRwpBMVVFQXhNV1kyOXVkSEp2YkdObGJuUmxjaTVqYjNKd0xuWnRkekFlRncweU1qQXpNakV4T1RFM01qaGFGdzB6Ck56QXpNakV4T1RJM01qTmFNRXd4RXpBUkJnb0praWFKay9Jc1pBRVpGZ04yYlhjeEZEQVNCZ29Ka2lhSmsvSXMKWkFFWkZnUmpiM0p3TVI4d0hRWURWUVFERXhaamIyNTBjbTlzWTJWdWRHVnlMbU52Y25BdWRtMTNNSUlDSWpBTgpCZ2txaGtpRzl3MEJBUUVGQUFPQ0FnOEFNSUlDQ2dLQ0FnRUEyT1lLeGNrT2poZ3VmV3UxWUVuYXR2SjFNMTI3Cmd3UGJGTmoxMS9kSUNYYVBlK21qTjFIY2UwUGlTMlFhYWVBZThrSCttT0tSYTJKamFHZFhyNnJPaUI4MEtaT1IKdXcwR3pTSnlMNXc3ZXdSK05KZjMxWU82MkJEL210M3NIZU1uQ1htU0J4T1F2YjBuR2toVHIxeStyRHB2eEo4Nwp6TmN6Z2ZONTR0bzZTMzc5d2pPc0M0YmtITG5NSjVFdEpHNzhwUHFYMSsxd2NWT1VSTko2eTlCY2VqTG5veS95CkNGcFhLT1Z4S0h6eTJubnNpdEF1QmIraEQrSnh3OC9qRlFVaHhIMFZsZ3lmWENRZGVnYXNTQTlSSHRadGZwVnMKaHNoaXNqa1NsdlFtYnNFa25CWnJBZkJWSVlpZHd0M3cwNTBqVmhpVXM1UWw2dkRvdFk2R3F0enpncTBvYnY2UAo3RTlOUGVqM0J6aFBTSVV5cW5wZjU3VVdJNHpVaVJKdmJTdS9KMk1DQktId1lmemtlMWNudkxBN3ZpREVkQjkrCi9IdGs5YUc5LzFCNmRkRGZhZnJjU09XdGtUZkhXWUx2MjFvM1V3b2g5VzVPcEs5SmlrWnUvUHFucFprVWkrMkMKTCtXQ3d3L0JTMXloUXdWaWY2UHFVTWVTTHozanRxM3c2Ui9ydVVNbE8rMEU1Ly9ic2tEVDZRR3hCZ2N2TUY5bgpEbCt1MHVxSEtPZGlVdk9YQnRGMTM5SEtVclpzcTBtM1dQb2VsMi9wK2NWVkpZc3lKRy9yUnBlaDFnL1gwY0IzCjlFdVRqWDZ2bnJUK0lTOFpmQWFvSHpwbWdoMXZHdTJyMnhnUHEyRTh4NGppOUZHVjhZVGpBczYwTnc3WXhLVVcKV2dqK1lOcHhQMlN4RnFVQ0F3RUFBYU5STUU4d0N3WURWUjBQQkFRREFnR0dNQThHQTFVZEV3RUIvd1FGTUFNQgpBZjh3SFFZRFZSME9CQllFRk1hRDg1WlFDV25uWlRLdGxvMkZnZUJmSkJuSE1CQUdDU3NHQVFRQmdqY1ZBUVFECkFnRUFNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUNBUUF1dFh3T3RzbVljYmovYnMzTXlkeDBEaTltKzZVVlRFWmQKT1JSclR1cy9CTC9UTnJ5Tzd6bzJiZWN6R1BLMjZNd3FobVVaaWFGNjFqUmIzNmt4bUZQVngydVYybnA0TGJRago1TXJ4dFB6ZjJYWHk0YjdBRHFRcExndTRyUjNtWmlYR216VW9WMTdobUFoeWZTVTFxbTRGc3NYR0syeXBXc1FzCkJ3c0tYNERzSWlqSkpaYlh3S0ZhYXVxMEx0bmtnZUdXZG9FRkZXQUgweUpXUGJ6OWgrb3ZsQ3hxMERCaUcwMGwKYnJuWTkwc3Fwb2lXVHhNS05DWEREaE5qdnR4TzNrUUlEUVZ2Yk5NQ0VibVlHK1JyV1FIdHZ1Znc5N1JLL2NUTAo5ZEtGU2JsSUlpek1JTlZ3TS9ncXRsVlZ2V1AxRUZhVXkweEc1YnZPTytTQ2UrVGxBN3J6NC9ST1JxcUU1VWdnCjdGOGZXeitvNkJNL3FmL0t3aCtXTjQyZHlSMXJPc0ZxRVZOYW1aTGpyQXpnd2pRL25xdVJSTWwyY0s2eWc2RnEKZDBPNDJ3d1lQcExVRUZ2NHhlNGEza3BSdnZoc2hOa3pSNElhY2JtYVVsbnptbGV3b0ZYVnVlRWJsdmlCSEpvVgoxT1VDNnFmTGtDamZDRXY0NzBLcjV2RGU1WS9sLzdqOEVZajdhL3dhMisra3ErN3hkK2JqL0REZWQ4NWZtM1lrCmRoZnA3YkdYS200S2JQTHprU3BpWVdiRStFYkFyTHRJazYyZXhqY0p2SlBkb3hNVHhnYmRlbHpsL3NuUExyZGcKdzBvR3VUVEJmeFNNS3M3NjdOM0cxcTV0ejBtd0ZwSXFJUXRYVVNtYUorOXA3SWtwV2NUaExueVlZbzFJcFdtLwpaSHRqelpNUVZBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
Name: harbor-669319617-ssl
Namespace: vmware-system-registry-669319617
Name: vmClass
Value: best-effort-small
Name: user
Value:
Password Secret:
Key: ssh-passwordkey
Name: tkg2-cluster-1-ssh-password-hashed
Ssh Authorized Key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDDFotUjAkEo5QwEnhvlze89rUD6odw6SESPh9sZVI6KuyZ2ssD2ENu3uLAqfh3KoeSBo7CDwkObZDheXv6Jy4QIAd5Na7CH7qczn86Rcp3HtvNHzbZEUrp9+mS1CmYHsdmLya7B4p5vfajpRGGQ+wYVhyNfYSMq33/hIhOmBvPrw+l2U4uffttB8hH1RGhMLuj571KCksVyck3P1eB6PY6pWIMa8rOOeXiRa7f6gfXTCfEjgPO4pJvy9YGC8UxBapp7pH/mu/VqBoUbbwOjVRm3qco4Dfpnx7e4okPFuWpQwaM2zpMCiyqTW721sCuLC7jEHbnywAnbV4ohEUF4RButbjepTNjpxEU8Xjt1dqUh4CLMa5XARjRopLmb0gcdZUT1FwJhj80aChDu/SOkS//AaMCZZom8di7Uf4+JIVt8bb1QnMRqqV7Ddc6YQ4tB2GXNyTyQ4Dfl5Y8YP5FnZa+HRLH4CYFwBx5fBfzguTKHwtcIDZ8qkeaZ+ZZm9cP5lIkwJ5XARK5IqxdqpyqNbcsKjrHOBCvUIW+DUvaMulWn8fosf75bVNvN1B3cqbvZUGQJd6n+gY3jyNpvvxzm10stUtrHlj8xetwZDGV53cmjyBvfbtz5Xn1gc8ABFKSGDUbHw/V2Dy8JcI5wzDb0qRY5GTlr9cw1R2cYEPzN4jC7w==
Name: nodePoolTaints
Value:
Version: v1.23.8+vmware.2
Workers:
Machine Deployments:
Class: node-pool
Metadata:
Name: tkg2-cluster-1-nodepool-1
Replicas: 2
Variables:
Overrides:
Name: vmClass
Value: best-effort-medium
Status:
Conditions:
Last Transition Time: 2023-04-04T16:58:07Z
Message: Scaling up control plane to 1 replicas (actual 0)
Reason: ScalingUp
Severity: Warning
Status: False
Type: Ready
Last Transition Time: 2023-04-04T16:57:57Z
Message: Waiting for control plane provider to indicate the control plane has been initialized
Reason: WaitingForControlPlaneProviderInitialized
Severity: Info
Status: False
Type: ControlPlaneInitialized
Last Transition Time: 2023-04-04T16:58:07Z
Message: Scaling up control plane to 1 replicas (actual 0)
Reason: ScalingUp
Severity: Warning
Status: False
Type: ControlPlaneReady
Last Transition Time: 2023-04-04T16:58:05Z
Status: True
Type: InfrastructureReady
Last Transition Time: 2023-04-04T16:57:57Z
Status: True
Type: TopologyReconciled
Last Transition Time: 2023-04-04T16:57:57Z
Reason: AlreadyUpToDate
Severity: Info
Status: False
Type: UpdatesAvailable
Failure Domains:
domain-c1006:
Control Plane: true
Infrastructure Ready: true
Observed Generation: 6
Phase: Provisioned
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal TopologyCreate 4m24s topology/cluster Created "VSphereCluster/tkg2-cluster-1-nfsdl"
Normal TopologyCreate 4m24s topology/cluster Created "VSphereMachineTemplate/tkg2-cluster-1-control-plane-j9gpd"
Normal TopologyCreate 4m24s topology/cluster Created "KubeadmControlPlane/tkg2-cluster-1-7qk5w"
Normal TopologyUpdate 4m24s topology/cluster Updated "Cluster/tkg2-cluster-1"
Normal TopologyCreate 4m23s topology/cluster Created "VSphereMachineTemplate/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-infra-td77s"
Normal TopologyCreate 4m23s topology/cluster Created "KubeadmConfigTemplate/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-bootstrap-x6fcp"
Normal TopologyCreate 4m23s topology/cluster Created "MachineDeployment/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6"
Normal TopologyUpdate 4m22s topology/cluster Updated "VSphereCluster/tkg2-cluster-1-nfsdl"
After some time, the control plane node will be powered on and the worker nodes will be created.



You can see that the node is sized per the best-effort-small VM Class that was defined earlier.

After a little more time, the worker nodes will also be powered on.
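Besides watching the VMs in the vSphere Client, you can also see them as VirtualMachine objects (the VM Service resources backing the TKC nodes) from the tkg2-cluster-namespace context:
kubectl get virtualmachines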




Inspect the cluster
To access the new TKC, you’ll need to re-login but also pass some extra information regarding the cluster to the kubectl vsphere login command.
kubectl vsphere login --server wcp.corp.vmw -u vmwadmin@corp.vmw --tanzu-kubernetes-cluster-namespace tkg2-cluster-namespace --tanzu-kubernetes-cluster-name tkg2-cluster-1
Logged in successfully.
You have access to the following contexts:
test
tkg2-cluster-1
tkg2-cluster-namespace
velero
wcp.corp.vmw
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
There is a new context for the TKC with the name of the cluster, tkg2-cluster-1. Passing the --tanzu-kubernetes-cluster-namespace and --tanzu-kubernetes-cluster-name parameters to the kubectl vsphere login command automatically sets this as the current context.
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
test 10.40.14.66 wcp:10.40.14.66:vmwadmin@corp.vmw test
* tkg2-cluster-1 10.40.14.69 wcp:10.40.14.69:vmwadmin@corp.vmw
tkg2-cluster-namespace 10.40.14.66 wcp:10.40.14.66:vmwadmin@corp.vmw tkg2-cluster-namespace
velero 10.40.14.66 wcp:10.40.14.66:vmwadmin@corp.vmw velero
wcp.corp.vmw wcp.corp.vmw wcp:wcp.corp.vmw:vmwadmin@corp.vmw
You can check out the nodes and some resources created in the cluster.
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tkg2-cluster-1-7qk5w-rlhgc Ready control-plane,master 9m56s v1.23.8+vmware.2 10.244.0.98 <none> VMware Photon OS/Linux 4.19.256-4.ph3-esx containerd://1.6.6
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 Ready <none> 5m11s v1.23.8+vmware.2 10.244.0.100 <none> VMware Photon OS/Linux 4.19.256-4.ph3-esx containerd://1.6.6
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-lfwrh Ready <none> 5m15s v1.23.8+vmware.2 10.244.0.99 <none> VMware Photon OS/Linux 4.19.256-4.ph3-esx containerd://1.6.6
kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system antrea-agent-64n7w 2/2 Running 0 5m46s 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 <none> <none>
kube-system antrea-agent-fvss8 2/2 Running 0 8m55s 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system antrea-agent-z5ztn 2/2 Running 0 5m50s 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-lfwrh <none> <none>
kube-system antrea-controller-5995dd698-p64rn 1/1 Running 0 8m55s 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system coredns-7d8f74b498-fctxv 1/1 Running 0 10m 192.168.0.3 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system coredns-7d8f74b498-vsjfv 1/1 Running 0 10m 192.168.0.2 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system docker-registry-tkg2-cluster-1-7qk5w-rlhgc 1/1 Running 0 10m 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system docker-registry-tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 1/1 Running 0 5m45s 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 <none> <none>
kube-system docker-registry-tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-lfwrh 1/1 Running 0 5m50s 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-lfwrh <none> <none>
kube-system etcd-tkg2-cluster-1-7qk5w-rlhgc 1/1 Running 0 10m 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system kube-apiserver-tkg2-cluster-1-7qk5w-rlhgc 1/1 Running 0 10m 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system kube-controller-manager-tkg2-cluster-1-7qk5w-rlhgc 1/1 Running 0 10m 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system kube-proxy-4msdh 1/1 Running 0 5m46s 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 <none> <none>
kube-system kube-proxy-bbtnk 1/1 Running 0 10m 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system kube-proxy-c4768 1/1 Running 0 5m50s 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-lfwrh <none> <none>
kube-system kube-scheduler-tkg2-cluster-1-7qk5w-rlhgc 1/1 Running 0 10m 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
kube-system metrics-server-67768b986b-jl8xv 1/1 Running 0 8m39s 192.168.2.2 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 <none> <none>
secretgen-controller secretgen-controller-5bd76b79d9-xxrlp 1/1 Running 0 8m36s 192.168.2.3 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 <none> <none>
tkg-system kapp-controller-5785c8b8c4-6s7v6 2/2 Running 0 9m50s 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
tkg-system tanzu-capabilities-controller-manager-6dfb57584f-fpwgf 1/1 Running 3 (80s ago) 8m10s 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
vmware-system-auth guest-cluster-auth-svc-8m2qp 1/1 Running 0 8m11s 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
vmware-system-cloud-provider guest-cluster-cloud-provider-668ffcb7c9-blbbc 1/1 Running 0 9m1s 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
vmware-system-csi vsphere-csi-controller-5dc5fb9848-zwrlz 6/6 Running 0 8m59s 192.168.0.4 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
vmware-system-csi vsphere-csi-node-j4qlr 3/3 Running 2 (7m40s ago) 8m59s 10.244.0.98 tkg2-cluster-1-7qk5w-rlhgc <none> <none>
vmware-system-csi vsphere-csi-node-nt6nv 3/3 Running 0 5m50s 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-lfwrh <none> <none>
vmware-system-csi vsphere-csi-node-z9cqg 3/3 Running 0 5m46s 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-89kb6-6db9c99fcc-j6lk7 <none> <none>
When working in the supervisor cluster, it was impossible to create a namespace via kubectl create ns, as a supervisor namespace can only be created in the vSphere UI. Now that we’re working with a standard Kubernetes cluster, you can create a namespace.
kubectl create ns cjl
namespace/cjl created
kubectl get ns
NAME STATUS AGE
cjl Active 2s
default Active 11m
kube-node-lease Active 11m
kube-public Active 11m
kube-system Active 11m
secretgen-controller Active 9m22s
tkg-system Active 10m
vmware-system-auth Active 11m
vmware-system-cloud-provider Active 10m
vmware-system-csi Active 10m
vmware-system-tkg Active 11m
There are also no restrictions on getting cluster resources like there were in the supervisor cluster.
kubectl get secrets -A
NAMESPACE NAME TYPE DATA AGE
cjl default-token-vg8d9 kubernetes.io/service-account-token 3 2m11s
default default-token-nnzg2 kubernetes.io/service-account-token 3 13m
kube-node-lease default-token-d8gk7 kubernetes.io/service-account-token 3 13m
kube-public default-token-cnlfn kubernetes.io/service-account-token 3 13m
kube-system antctl-token-bkdt2 kubernetes.io/service-account-token 3 11m
kube-system antrea-agent-token-t86zl kubernetes.io/service-account-token 3 11m
...
Looking in NSX, you can see that a new segment was created for the TKC.

And you can see that the three nodes in the TKC are the only objects using this segment.

Similarly, in the vSphere Client, you can see that the same segment exists as a port group on the vDS, and that there are three ports in use corresponding to the three nodes (VMs).

There is also a new load balancer created for the TKC.

Drilling down into the virtual server associated with the load balancer, you can see the 10.40.14.69 IP address, which corresponds to the address shown for the tkg2-cluster-1 context when we ran kubectl config get-contexts earlier. This is the load-balanced API endpoint for the TKC.

And investigating the server pool members for the load balancer, you can see that the IP address of the sole member is 10.244.0.98. This corresponds to the control plane node’s IP address, as shown earlier in the kubectl get nodes -o wide output.
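You can also cross-check this endpoint from the tkg2-cluster-namespace context, since the Cluster API object records it under spec.controlPlaneEndpoint:
kubectl get cluster tkg2-cluster-1 -o jsonpath='{.spec.controlPlaneEndpoint.host}{" "}{.spec.controlPlaneEndpoint.port}{"\n"}'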

Delete a TKC
Deleting a TKC is just as easy as creating it.
kubectl delete -f tkg2-cluster-1.yaml
tanzukubernetescluster.run.tanzu.vmware.com "tkg2-cluster-1" deleted
You’ll see the vSphere resources getting removed fairly quickly.
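If you want to follow along from the Kubernetes side as well, you can watch the TanzuKubernetesCluster object in the supervisor namespace until it disappears:
kubectl get tanzukubernetescluster -w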

tanzu CLI install
There is a second way that you can create a TKC. What was just shown uses kubectl to create a TanzuKubernetesCluster (TKC) resource in a supervisor namespace. You can also use the tanzu CLI to create a Cluster resource in a supervisor namespace. The end result is the same, but the cluster definition is slightly different from the TKC definition.
cluster specification
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tkg2-cluster-1
  namespace: tkg2-cluster-namespace
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["100.64.0.0/13"]
    pods:
      cidrBlocks: ["100.96.0.0/11"]
    serviceDomain: "cluster.local"
  topology:
    class: tanzukubernetescluster
    version: v1.23.8---vmware.2-tkg.2-zshippable
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: node-pool
        name: tkg2-cluster-1-nodepool-1
        replicas: 2
        variables:
          overrides:
          - name: vmClass
            value: best-effort-medium
    variables:
    - name: vmClass
      value: best-effort-small
    - name: storageClass
      value: k8s-policy
    - name: defaultStorageClass
      value: k8s-policy
There are more parameters here than in the TKC definition, as they are required for cluster creation via the tanzu CLI, whereas TKC creation via kubectl will assume default values for many of these parameters.
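If you're curious which other variables the tanzukubernetescluster class accepts beyond vmClass, storageClass and defaultStorageClass, one option is to inspect the ClusterClass object published into the supervisor namespace (a quick check, assuming your environment exposes it the same way):
kubectl get clusterclass tanzukubernetescluster -n tkg2-cluster-namespace -o jsonpath='{range .spec.variables[*]}{.name}{"\n"}{end}'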
Install the tanzu CLI
You can download the tanzu CLI from Customer Connect. You’ll find it under the Tanzu Kubernetes Grid product and there are OS-specific versions available. You should have a .tar.gz file whose contents you’ll need to extract on a system with access to the cluster.
tar -zxvf tanzu-cli-bundle-linux-amd64.tar.gz
cli/
cli/core/
cli/core/v0.28.1/
cli/core/v0.28.1/tanzu-core-linux_amd64
cli/tanzu-framework-plugins-standalone-linux-amd64.tar.gz
cli/tanzu-framework-plugins-context-linux-amd64.tar.gz
cli/ytt-linux-amd64-v0.43.1+vmware.1.gz
cli/kapp-linux-amd64-v0.53.2+vmware.1.gz
cli/imgpkg-linux-amd64-v0.31.1+vmware.1.gz
cli/kbld-linux-amd64-v0.35.1+vmware.1.gz
cli/vendir-linux-amd64-v0.30.1+vmware.1.gz
This is obviously for a Linux system, so the tanzu-core-linux_amd64 file needs to be made executable (and moved to a location that is more easily accessible).
mv cli/core/v0.28.1/tanzu-core-linux_amd64 /usr/local/bin/tanzu
chmod +x /usr/local/bin/tanzu
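Running tanzu version afterwards is a quick sanity check that the binary is in place and executable:
tanzu version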
You can now run the tanzu init command to do the initial configuration of the tanzu CLI.
tanzu init
ℹ Checking for required plugins...
ℹ Installing plugin 'secret:v0.28.1' with target 'kubernetes'
ℹ Installing plugin 'telemetry:v0.28.1' with target 'kubernetes'
ℹ Installing plugin 'isolated-cluster:v0.28.1'
ℹ Installing plugin 'login:v0.28.1'
ℹ Installing plugin 'management-cluster:v0.28.1' with target 'kubernetes'
ℹ Installing plugin 'package:v0.28.1' with target 'kubernetes'
ℹ Installing plugin 'pinniped-auth:v0.28.1'
ℹ Successfully installed all required plugins
✔ successfully initialized CLI
And you will need to use the tanzu login command to access the supervisor cluster.
kubectl config use-context wcp.corp.vmw
Switched to context "wcp.corp.vmw".
tanzu login
? Select login type Local kubeconfig
? Enter path to kubeconfig (if any) /home/ubuntu/.kube/config
? Enter kube context to use wcp.corp.vmw
? Give the server a name wcp
✔ successfully logged in to management cluster using the kubeconfig wcp
ℹ Checking for required plugins...
ℹ Installing plugin 'cluster:v0.25' with target 'kubernetes'
ℹ Installing plugin 'feature:v0.25' with target 'kubernetes'
ℹ Installing plugin 'kubernetes-release:v0.25' with target 'kubernetes'
ℹ Installing plugin 'namespaces:v1.0.0' with target 'kubernetes'
ℹ Successfully installed all required plugins
Create the cluster
You can use the tanzu cluster create command to create the cluster now. In this example, I’ve passed the -v6 parameter to get much more verbose output.
tanzu cluster create -f tanzu-tkg2-cluster-1.yaml -v6
compatibility file (/home/ubuntu/.config/tanzu/tkg/compatibility/tkg-compatibility.yaml) already exists, skipping download
BOM files inside /home/ubuntu/.config/tanzu/tkg/bom already exists, skipping download
cluster log directory does not exist. Creating new one at "/home/ubuntu/.config/tanzu/tkg/logs"
You are trying to create a cluster with kubernetes version '' on vSphere with Tanzu, Please make sure virtual machine image for the same is available in the cluster content library.
Do you want to continue? [y/N]: y
Validating configuration...
Waiting for the Tanzu Kubernetes Cluster service for vSphere workload cluster
waiting for cluster to be initialized...
[zero or multiple KCP objects found for the given cluster, 0 tkg2-cluster-1 tkg2-cluster-namespace, no MachineDeployment objects found for the given cluster]
[zero or multiple KCP objects found for the given cluster, 0 tkg2-cluster-1 tkg2-cluster-namespace, no MachineDeployment objects found for the given cluster], retrying
cluster control plane is still being initialized: WaitingForControlPlane
cluster control plane is still being initialized: WaitingForControlPlane, retrying
cluster control plane is still being initialized: WaitingForKubeadmInit
cluster control plane is still being initialized: WaitingForKubeadmInit, retrying
cluster control plane is still being initialized: WaitingForKubeadmInit, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: VMProvisionStarted @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: WaitingForNetworkAddress @ Machine/tkg2-cluster-1-wb7pg-fbb65
cluster control plane is still being initialized: WaitingForNetworkAddress @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: WaitingForNetworkAddress @ Machine/tkg2-cluster-1-wb7pg-fbb65, retrying
cluster control plane is still being initialized: WaitingForKubeadmInit
cluster control plane is still being initialized: WaitingForKubeadmInit, retrying
getting secret for cluster
waiting for resource tkg2-cluster-1-kubeconfig of type *v1.Secret to be up and running
waiting for cluster nodes to be available...
waiting for resource tkg2-cluster-1 of type *v1beta1.Cluster to be up and running
waiting for resources type *v1beta1.KubeadmControlPlaneList to be up and running
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
control-plane is still creating replicas, DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
waiting for resources type *v1beta1.MachineDeploymentList to be up and running
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=0 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=1 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=1 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=1 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=1 UpdatedReplicas=2, retrying
worker nodes are still being created for MachineDeployment 'tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7', DesiredReplicas=2 Replicas=2 ReadyReplicas=1 UpdatedReplicas=2, retrying
waiting for resources type *v1beta1.MachineList to be up and running
waiting for addons core packages installation...
getting ClusterBootstrap object for cluster: tkg2-cluster-1
waiting for resource tkg2-cluster-1 of type *v1alpha3.ClusterBootstrap to be up and running
waiting for package: 'tkg2-cluster-1-kapp-controller'
waiting for resource tkg2-cluster-1-kapp-controller of type *v1alpha1.PackageInstall to be up and running
successfully reconciled package: 'tkg2-cluster-1-kapp-controller' in namespace: 'tkg2-cluster-namespace'
waiting for package: 'tkg2-cluster-1-antrea'
waiting for package: 'tkg2-cluster-1-vsphere-pv-csi'
waiting for package: 'tkg2-cluster-1-vsphere-cpi'
waiting for resource tkg2-cluster-1-vsphere-cpi of type *v1alpha1.PackageInstall to be up and running
waiting for resource tkg2-cluster-1-antrea of type *v1alpha1.PackageInstall to be up and running
waiting for resource tkg2-cluster-1-vsphere-pv-csi of type *v1alpha1.PackageInstall to be up and running
waiting for 'tkg2-cluster-1-antrea' Package to be installed, retrying
successfully reconciled package: 'tkg2-cluster-1-vsphere-cpi' in namespace: 'vmware-system-tkg'
successfully reconciled package: 'tkg2-cluster-1-vsphere-pv-csi' in namespace: 'vmware-system-tkg'
waiting for 'tkg2-cluster-1-antrea' Package to be installed, retrying
successfully reconciled package: 'tkg2-cluster-1-antrea' in namespace: 'vmware-system-tkg'
Workload cluster 'tkg2-cluster-1' created
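Before switching contexts, you can also confirm the result from the tanzu CLI itself:
tanzu cluster get tkg2-cluster-1 --namespace tkg2-cluster-namespace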
As with the previous example of creating a TKC, you will need to re-login with the kubectl vsphere login command and pass the --tanzu-kubernetes-cluster-namespace and --tanzu-kubernetes-cluster-name parameters to gain access to the new cluster.
kubectl vsphere login --server wcp.corp.vmw -u vmwadmin@corp.vmw --tanzu-kubernetes-cluster-namespace tkg2-cluster-namespace --tanzu-kubernetes-cluster-name tkg2-cluster-1
Logged in successfully.
You have access to the following contexts:
test
tkg2-cluster-1
tkg2-cluster-namespace
velero
wcp.corp.vmw
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
test 10.40.14.66 wcp:10.40.14.66:vmwadmin@corp.vmw test
* tkg2-cluster-1 10.40.14.69 wcp:10.40.14.69:vmwadmin@corp.vmw
tkg2-cluster-namespace 10.40.14.66 wcp:10.40.14.66:vmwadmin@corp.vmw tkg2-cluster-namespace
velero 10.40.14.66 wcp:10.40.14.66:vmwadmin@corp.vmw velero
wcp.corp.vmw wcp.corp.vmw wcp:wcp.corp.vmw:vmwadmin@corp.vmw
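One aside on the login step: the kubectl vsphere plugin prompts for a password interactively, but it also honors the KUBECTL_VSPHERE_PASSWORD environment variable if you need to script the login. A rough sketch, with the password value obviously being a placeholder:
export KUBECTL_VSPHERE_PASSWORD='<your-password>'
kubectl vsphere login --server wcp.corp.vmw -u vmwadmin@corp.vmw --tanzu-kubernetes-cluster-namespace tkg2-cluster-namespace --tanzu-kubernetes-cluster-name tkg2-cluster-1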
Inspect the cluster
A quick look at the nodes and pods in the cluster will show results very similar to what was seen earlier.
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-dk9zw Ready <none> 15m v1.23.8+vmware.2 10.244.0.100 <none> VMware Photon OS/Linux 4.19.256-4.ph3-esx containerd://1.6.6
tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v Ready <none> 16m v1.23.8+vmware.2 10.244.0.99 <none> VMware Photon OS/Linux 4.19.256-4.ph3-esx containerd://1.6.6
tkg2-cluster-1-wb7pg-fbb65 Ready control-plane,master 23m v1.23.8+vmware.2 10.244.0.98 <none> VMware Photon OS/Linux 4.19.256-4.ph3-esx containerd://1.6.6
kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system antrea-agent-dxvtg 2/2 Running 0 15m 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-dk9zw <none> <none>
kube-system antrea-agent-gwrcs 2/2 Running 0 15m 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v <none> <none>
kube-system antrea-agent-xvc7b 2/2 Running 1 (20m ago) 21m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system antrea-controller-546b45f5db-g82nt 1/1 Running 0 21m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system coredns-7d8f74b498-r4lvv 1/1 Running 0 22m 100.96.0.2 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system coredns-7d8f74b498-wmgqc 1/1 Running 0 22m 100.96.0.4 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system docker-registry-tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-dk9zw 1/1 Running 0 15m 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-dk9zw <none> <none>
kube-system docker-registry-tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v 1/1 Running 0 15m 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v <none> <none>
kube-system docker-registry-tkg2-cluster-1-wb7pg-fbb65 1/1 Running 0 22m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system etcd-tkg2-cluster-1-wb7pg-fbb65 1/1 Running 0 22m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system kube-apiserver-tkg2-cluster-1-wb7pg-fbb65 1/1 Running 0 22m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system kube-controller-manager-tkg2-cluster-1-wb7pg-fbb65 1/1 Running 0 22m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system kube-proxy-d9g7r 1/1 Running 0 15m 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v <none> <none>
kube-system kube-proxy-lmgfg 1/1 Running 0 22m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system kube-proxy-nmkmj 1/1 Running 0 15m 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-dk9zw <none> <none>
kube-system kube-scheduler-tkg2-cluster-1-wb7pg-fbb65 1/1 Running 0 22m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
kube-system metrics-server-f5b998679-dk9sp 1/1 Running 0 21m 100.96.1.3 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v <none> <none>
secretgen-controller secretgen-controller-5ffcb9b874-nf9ct 1/1 Running 0 20m 100.96.1.2 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v <none> <none>
tkg-system kapp-controller-777dfb959-fsnbz 2/2 Running 0 22m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
tkg-system tanzu-capabilities-controller-manager-6f58cf86d8-2ss62 1/1 Running 6 (4m43s ago) 20m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
vmware-system-auth guest-cluster-auth-svc-xhjdw 1/1 Running 0 20m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
vmware-system-cloud-provider guest-cluster-cloud-provider-6fb77b567d-p2pbg 1/1 Running 0 21m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
vmware-system-csi vsphere-csi-controller-cff74c867-pvng7 6/6 Running 0 21m 100.96.0.3 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
vmware-system-csi vsphere-csi-node-p8vpw 3/3 Running 0 15m 10.244.0.99 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v <none> <none>
vmware-system-csi vsphere-csi-node-pr8r2 3/3 Running 0 15m 10.244.0.100 tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-dk9zw <none> <none>
vmware-system-csi vsphere-csi-node-s6nkw 3/3 Running 3 (19m ago) 21m 10.244.0.98 tkg2-cluster-1-wb7pg-fbb65 <none> <none>
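Each of these nodes is backed by a VM that the supervisor cluster manages through the VM operator. As a quick sketch (run from the tkg2-cluster-namespace context rather than the TKC context), you should be able to see them as VirtualMachine objects:
kubectl get virtualmachines -n tkg2-cluster-namespace
The names should line up one-to-one with the output of kubectl get nodes above, and the same VMs appear in the vSphere Client under the namespace's resource pool.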
kubectl describe cluster tkg2-cluster-1
Name: tkg2-cluster-1
Namespace: tkg2-cluster-namespace
Labels: cluster.x-k8s.io/cluster-name=tkg2-cluster-1
run.tanzu.vmware.com/tkr=v1.23.8---vmware.2-tkg.2-zshippable
topology.cluster.x-k8s.io/owned=
Annotations: TKGOperationLastObservedTimestamp: 2023-04-04 17:43:18.073851108 +0000 UTC
tkg.tanzu.vmware.com/skip-tls-verify:
tkg.tanzu.vmware.com/tkg-http-proxy:
tkg.tanzu.vmware.com/tkg-https-proxy:
tkg.tanzu.vmware.com/tkg-ip-family:
tkg.tanzu.vmware.com/tkg-no-proxy:
tkg.tanzu.vmware.com/tkg-proxy-ca-cert:
API Version: cluster.x-k8s.io/v1beta1
Kind: Cluster
Metadata:
Creation Timestamp: 2023-04-04T17:38:47Z
Finalizers:
cluster.cluster.x-k8s.io
tkg.tanzu.vmware.com/addon
Generation: 4
Managed Fields:
API Version: cluster.x-k8s.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:clusterNetwork:
.:
f:pods:
.:
f:cidrBlocks:
f:serviceDomain:
f:services:
.:
f:cidrBlocks:
f:topology:
.:
f:class:
f:controlPlane:
.:
f:replicas:
f:version:
f:workers:
.:
f:machineDeployments:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2023-04-04T17:38:47Z
API Version: cluster.x-k8s.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:TKGOperationLastObservedTimestamp:
Manager: v0.25_e19826168ecbb7502219f65c47ec592d9356406461bbe8e012884c425c312583_kubernetes
Operation: Update
Time: 2023-04-04T17:39:03Z
API Version: cluster.x-k8s.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:tkg.tanzu.vmware.com/skip-tls-verify:
f:tkg.tanzu.vmware.com/tkg-http-proxy:
f:tkg.tanzu.vmware.com/tkg-https-proxy:
f:tkg.tanzu.vmware.com/tkg-ip-family:
f:tkg.tanzu.vmware.com/tkg-no-proxy:
f:tkg.tanzu.vmware.com/tkg-proxy-ca-cert:
f:finalizers:
.:
v:"cluster.cluster.x-k8s.io":
v:"tkg.tanzu.vmware.com/addon":
f:labels:
f:cluster.x-k8s.io/cluster-name:
f:topology.cluster.x-k8s.io/owned:
f:spec:
f:controlPlaneEndpoint:
f:host:
f:port:
f:controlPlaneRef:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:infrastructureRef:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:topology:
f:variables:
Manager: manager
Operation: Update
Time: 2023-04-04T17:44:32Z
API Version: cluster.x-k8s.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:controlPlaneReady:
f:failureDomains:
.:
f:domain-c1006:
.:
f:controlPlane:
f:infrastructureReady:
f:observedGeneration:
f:phase:
Manager: manager
Operation: Update
Subresource: status
Time: 2023-04-04T17:45:58Z
Resource Version: 5683199
UID: ae9d6def-c90f-4c1f-989a-72e40206d1ee
Spec:
Cluster Network:
Pods:
Cidr Blocks:
100.96.0.0/11
Service Domain: cluster.local
Services:
Cidr Blocks:
100.64.0.0/13
Control Plane Endpoint:
Host: 10.40.14.69
Port: 6443
Control Plane Ref:
API Version: controlplane.cluster.x-k8s.io/v1beta1
Kind: KubeadmControlPlane
Name: tkg2-cluster-1-wb7pg
Namespace: tkg2-cluster-namespace
Infrastructure Ref:
API Version: vmware.infrastructure.cluster.x-k8s.io/v1beta1
Kind: VSphereCluster
Name: tkg2-cluster-1-4ztx5
Namespace: tkg2-cluster-namespace
Topology:
Class: tanzukubernetescluster
Control Plane:
Metadata:
Replicas: 1
Variables:
Name: defaultRegistrySecret
Value:
Data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUhLRENDQlJDZ0F3SUJBZ0lUSWdBQUFCYnNoeVRQaTY0YlpRQUFBQUFBRmpBTkJna3Foa2lHOXcwQkFRc0YKQURCTU1STXdFUVlLQ1pJbWlaUHlMR1FCR1JZRGRtMTNNUlF3RWdZS0NaSW1pWlB5TEdRQkdSWUVZMjl5Y0RFZgpNQjBHQTFVRUF4TVdZMjl1ZEhKdmJHTmxiblJsY2k1amIzSndMblp0ZHpBZUZ3MHlNekF5TVRZeE56UTFNek5hCkZ3MHpNekF5TVRNeE56UTFNek5hTUZReEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUlFd3BEWVd4cFptOXkKYm1saE1SSXdFQVlEVlFRSEV3bFFZV3h2SUVGc2RHOHhEekFOQmdOVkJBb1RCbFpOZDJGeVpURUxNQWtHQTFVRQpBeE1DUTBFd2dnR2lNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJqd0F3Z2dHS0FvSUJnUUNHM2ptcFRwc0tkcmlpCi82OUN0UGpOczNoaVczdkhqL1dFRWw5OGR5VUNCL0lxZXE2ZVdIMk9uaHRUSHlVOXVLT1hXbFphUEo2VkRUOXIKVUszMlhDRTYzVmpNSFhBTWVDekJnY21GVEtncG1CeG5sQzEwYmx5RkFoeWwycXF0UTdJU0FYYTdzRElEQkNQTgpCd1JUenQxeE4yQ0gvSkpHYnlhNFZKVy9oazdWNEc4a09RL3JRYUpWbGloNVFVbStEQWUwdFgweW1hVms4d2JyCk56STZnaTAyYzkxczdFcUx3OGlnWkRpbmNzMStKMEY2bW1ydXpSbUh4c2s0NHhJaStiUUFCVVFRbzJBM29OcVMKeVM0UWZZdGFUcnRHQmFwcHZoMDNibHkzcVFGWHBWY3hyZ3N5UytsUGNaaEt0V1U4L2RRWFZOUnJxVllJQlB6egpiYlMzUWpPSk5nYVovbkp3WGNReXNnNWVnVzMwS1NjMXVld2UxMXlUZ0RpUkx4dzc2WC9zYldlVnFoVmdzWFdCCk03b0NONlpCOHphNEFRSGg4dXJtRTNmcUNKM3U1akpuY3p3QUI5YkZnNnRzKzVIZFBZVVU1WFNXazJZVW1pTUYKY1l3SzliYkhMZzVsZmFERkhSLzAwaVpINy9IUXFITW5aZUN6TlhGZG1ZYUpOYy9QeGU4Q0F3RUFBYU9DQW5rdwpnZ0oxTUIwR0ExVWREZ1FXQkJUdW1vb3IvYklVZFBGUFpKMm1iZVhDMGFpSU9EQXlCZ05WSFJFRUt6QXBnUTVsCmJXRnBiRUJoWTIxbExtTnZiWWNFd0todUZvSVJkbU56WVMwd01XRXVZMjl5Y0M1MmJYY3dId1lEVlIwakJCZ3cKRm9BVXhvUHpsbEFKYWVkbE1xMldqWVdCNEY4a0djY3dnZGNHQTFVZEh3U0J6ekNCekRDQnlhQ0J4cUNCdzRhQgp3R3hrWVhBNkx5OHZRMDQ5WTI5dWRISnZiR05sYm5SbGNpNWpiM0p3TG5adGR5eERUajFqYjI1MGNtOXNZMlZ1CmRHVnlMRU5PUFVORVVDeERUajFRZFdKc2FXTWxNakJMWlhrbE1qQlRaWEoyYVdObGN5eERUajFUWlhKMmFXTmwKY3l4RFRqMURiMjVtYVdkMWNtRjBhVzl1TEVSRFBXTnZjbkFzUkVNOWRtMTNQMk5sY25ScFptbGpZWFJsVW1WMgpiMk5oZEdsdmJreHBjM1EvWW1GelpUOXZZbXBsWTNSRGJHRnpjejFqVWt4RWFYTjBjbWxpZFhScGIyNVFiMmx1CmREQ0J4UVlJS3dZQkJRVUhBUUVFZ2Jnd2diVXdnYklHQ0NzR0FRVUZCekFDaG9HbGJHUmhjRG92THk5RFRqMWoKYjI1MGNtOXNZMlZ1ZEdWeUxtTnZjbkF1ZG0xM0xFTk9QVUZKUVN4RFRqMVFkV0pzYVdNbE1qQkxaWGtsTWpCVApaWEoyYVdObGN5eERUajFUWlhKMmFXTmxjeXhEVGoxRGIyNW1hV2QxY21GMGFXOXVMRVJEUFdOdmNuQXNSRU05CmRtMTNQMk5CUTJWeWRHbG1hV05oZEdVL1ltRnpaVDl2WW1wbFkzUkRiR0Z6Y3oxalpYSjBhV1pwWTJGMGFXOXUKUVhWMGFHOXlhWFI1TUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNRHdHQ1NzRwpBUVFCZ2pjVkJ3UXZNQzBHSlNzR0FRUUJnamNWQ0lIMmhsV0Jtc3drZ2IyUEk0R2ErVCtVdlZjOGc1ZkVib2ZCCnlCUUNBV1FDQVFJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dJQkFOSVRlM2d3UUxWREZDd3lkdmZrSkJ4cWFCMUsKSHpBaXFUQm1hUjZKdXhJU1JwSnAzbE11ZTZQcFBucVZVVEhEZW9FdUlGcldlRm1ha0tvR2VFaUlxaytIZFMvMApvRXVzaVBNakkzN2c0UVlXMDlaSllXelU1OFZ2dms1d3J1aGczcDV0amQrSStpUnM4b2N1ZUFmQ08wSGE3djdlCjg4MG1QSS9XSnc4Nzk3MWQrRENiZzRDMXNvdjNaQ3kvYWlEWWJXdFVVQWlXLzdESXZhclk1cDhoNlNIcXdRancKczdGOWhPbTEybXdybEhHNi9LL1pGU2JqdlFOQlNEQkZIUys0ZHJOdFhWYXNqN3lhV2FRckc2a0JWWE93RTRMQwpqRytYVEdXemQvUEhjWDlwU3J1Z1gvd1ErcG5aQU5va2FhNUZtT1pwbFJkdWEwYkNPOWthdk5MQ3lGakxobEhlCkhia0RHZmxISVExOUZtL3NaWmFkL1VqZHJKajFJSEJTSFJuTjRIdXVMUDFZbnp4L0J5eWF1T2xlMlU0Vm9tMmYKOVpibU5DMkpWTk1Pc2hWam05VTh0QUFEbXJkWmEvdlRNOThlNXNtN010SzExTzdXSDF2NU1xdkpUUitMb0VzaQpyVXlncjFRNUhtRmh4a3dTTFJ2OEIzQ2NRTFB6N1Q0aXdmL0pLeGZPZEc3bTVQa3B2SEhkV0JDYU4wL0UrVG9tCmNaM3BUTFpPSWlEV3VPNWkwTTNNVERIQU04anFFUG0zWTZQd0xncHlrcEJIa3JMak05ajBnUzlhUllrcnBnZGMKMzJqWUdmMFRtakYwUHJLRk92aDNPMTJIYUtqWlpYNklRNk9jYnRGVHgweHJ6eDI1YldsQzBhMDZlWnFrQTVsUgo3TmpVQUo1ZUZ3REZxOUo4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZjekNDQTF1Z0F3SUJBZ0lRVFlKSVRRM1NaNEJCUzlVelhmSkl1VEFOQmdrcWhraUc5dzBCQVFzRkFEQk0KTVJNd0VRWUtDWkltaVpQeUxHUUJHUllEZG0xM
01SUXdFZ1lLQ1pJbWlaUHlMR1FCR1JZRVkyOXljREVmTUIwRwpBMVVFQXhNV1kyOXVkSEp2YkdObGJuUmxjaTVqYjNKd0xuWnRkekFlRncweU1qQXpNakV4T1RFM01qaGFGdzB6Ck56QXpNakV4T1RJM01qTmFNRXd4RXpBUkJnb0praWFKay9Jc1pBRVpGZ04yYlhjeEZEQVNCZ29Ka2lhSmsvSXMKWkFFWkZnUmpiM0p3TVI4d0hRWURWUVFERXhaamIyNTBjbTlzWTJWdWRHVnlMbU52Y25BdWRtMTNNSUlDSWpBTgpCZ2txaGtpRzl3MEJBUUVGQUFPQ0FnOEFNSUlDQ2dLQ0FnRUEyT1lLeGNrT2poZ3VmV3UxWUVuYXR2SjFNMTI3Cmd3UGJGTmoxMS9kSUNYYVBlK21qTjFIY2UwUGlTMlFhYWVBZThrSCttT0tSYTJKamFHZFhyNnJPaUI4MEtaT1IKdXcwR3pTSnlMNXc3ZXdSK05KZjMxWU82MkJEL210M3NIZU1uQ1htU0J4T1F2YjBuR2toVHIxeStyRHB2eEo4Nwp6TmN6Z2ZONTR0bzZTMzc5d2pPc0M0YmtITG5NSjVFdEpHNzhwUHFYMSsxd2NWT1VSTko2eTlCY2VqTG5veS95CkNGcFhLT1Z4S0h6eTJubnNpdEF1QmIraEQrSnh3OC9qRlFVaHhIMFZsZ3lmWENRZGVnYXNTQTlSSHRadGZwVnMKaHNoaXNqa1NsdlFtYnNFa25CWnJBZkJWSVlpZHd0M3cwNTBqVmhpVXM1UWw2dkRvdFk2R3F0enpncTBvYnY2UAo3RTlOUGVqM0J6aFBTSVV5cW5wZjU3VVdJNHpVaVJKdmJTdS9KMk1DQktId1lmemtlMWNudkxBN3ZpREVkQjkrCi9IdGs5YUc5LzFCNmRkRGZhZnJjU09XdGtUZkhXWUx2MjFvM1V3b2g5VzVPcEs5SmlrWnUvUHFucFprVWkrMkMKTCtXQ3d3L0JTMXloUXdWaWY2UHFVTWVTTHozanRxM3c2Ui9ydVVNbE8rMEU1Ly9ic2tEVDZRR3hCZ2N2TUY5bgpEbCt1MHVxSEtPZGlVdk9YQnRGMTM5SEtVclpzcTBtM1dQb2VsMi9wK2NWVkpZc3lKRy9yUnBlaDFnL1gwY0IzCjlFdVRqWDZ2bnJUK0lTOFpmQWFvSHpwbWdoMXZHdTJyMnhnUHEyRTh4NGppOUZHVjhZVGpBczYwTnc3WXhLVVcKV2dqK1lOcHhQMlN4RnFVQ0F3RUFBYU5STUU4d0N3WURWUjBQQkFRREFnR0dNQThHQTFVZEV3RUIvd1FGTUFNQgpBZjh3SFFZRFZSME9CQllFRk1hRDg1WlFDV25uWlRLdGxvMkZnZUJmSkJuSE1CQUdDU3NHQVFRQmdqY1ZBUVFECkFnRUFNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUNBUUF1dFh3T3RzbVljYmovYnMzTXlkeDBEaTltKzZVVlRFWmQKT1JSclR1cy9CTC9UTnJ5Tzd6bzJiZWN6R1BLMjZNd3FobVVaaWFGNjFqUmIzNmt4bUZQVngydVYybnA0TGJRago1TXJ4dFB6ZjJYWHk0YjdBRHFRcExndTRyUjNtWmlYR216VW9WMTdobUFoeWZTVTFxbTRGc3NYR0syeXBXc1FzCkJ3c0tYNERzSWlqSkpaYlh3S0ZhYXVxMEx0bmtnZUdXZG9FRkZXQUgweUpXUGJ6OWgrb3ZsQ3hxMERCaUcwMGwKYnJuWTkwc3Fwb2lXVHhNS05DWEREaE5qdnR4TzNrUUlEUVZ2Yk5NQ0VibVlHK1JyV1FIdHZ1Znc5N1JLL2NUTAo5ZEtGU2JsSUlpek1JTlZ3TS9ncXRsVlZ2V1AxRUZhVXkweEc1YnZPTytTQ2UrVGxBN3J6NC9ST1JxcUU1VWdnCjdGOGZXeitvNkJNL3FmL0t3aCtXTjQyZHlSMXJPc0ZxRVZOYW1aTGpyQXpnd2pRL25xdVJSTWwyY0s2eWc2RnEKZDBPNDJ3d1lQcExVRUZ2NHhlNGEza3BSdnZoc2hOa3pSNElhY2JtYVVsbnptbGV3b0ZYVnVlRWJsdmlCSEpvVgoxT1VDNnFmTGtDamZDRXY0NzBLcjV2RGU1WS9sLzdqOEVZajdhL3dhMisra3ErN3hkK2JqL0REZWQ4NWZtM1lrCmRoZnA3YkdYS200S2JQTHprU3BpWVdiRStFYkFyTHRJazYyZXhqY0p2SlBkb3hNVHhnYmRlbHpsL3NuUExyZGcKdzBvR3VUVEJmeFNNS3M3NjdOM0cxcTV0ejBtd0ZwSXFJUXRYVVNtYUorOXA3SWtwV2NUaExueVlZbzFJcFdtLwpaSHRqelpNUVZBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
Name: harbor-669319617-ssl
Namespace: vmware-system-registry-669319617
Name: vmClass
Value: best-effort-small
Name: ntp
Value: 192.168.100.1
Name: user
Value:
Password Secret:
Key: ssh-passwordkey
Name: tkg2-cluster-1-ssh-password-hashed
Ssh Authorized Key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDOM69DmZFM7lofjuF7sqlE46Vpc6/vQxgyLhJC6m12DnI2hjD15guFsqLD1gbJzgTFhYys1/Fz0yH2MW0k6NqARzclWItGNLddrf1t4l+OqRl5hZWkhdLTmaxUnqqrWBW2l/kYIBRjMmZrLwag/UCYMxm5Ht2b1EyPI8G3aAnWp7oh8tOel9cMMvRyFzbJQnir1bMfeYY1TvXqqXRoMlKS5WMUMonAYdLhJkH/ecN3VGdyz46zSMVPrQdm++iO972mzh8jCe8caj60V5EQ5+3+YSxqgaUWbRUjEgO9LOuGbFmW2BajtVsk0rQ3olWP50Xm5rMM/ep9ugtUuhWmKNhxztWWY0ID3nHKghtMIM/K/dkxpo5KhS8G4hhnhxSuCxd/imJN5UIxvyOtxzaypt9K5/HuvZZ6vADmNf1+P+ZlplGs3una7XCAJzbLr+4ZeNMFszB4RQp0K547h3vo/Zx1LETVfmBpSO8S7cM9WoW4HyXpAOaIxgDV3yH1o00aEOY6k7WodxXkeivUqPFdZigBj/VnJfcOsyLL7xvWnfu3d6WIpfkFy13azQgF7aP8e73xyN9cXfJK6ay/AfLV3XwFf67yrqQ2PdhXP/NYIvwjGhiYHa/VfNLBBEudaofYbIk7nX/ocrpSesnqd0jcujHQG69Ppdnj7pz1QQ/unP9aaw==
Name: storageClass
Value: k8s-policy
Name: defaultStorageClass
Value: k8s-policy
Name: TKR_DATA
Value:
v1.23.8+vmware.2:
Kubernetes Spec:
Coredns:
Image Tag: v1.8.6_vmware.7
Etcd:
Image Tag: v3.5.4_vmware.6
Image Repository: localhost:5000/vmware.io
Pause:
Image Tag: 3.6
Version: v1.23.8+vmware.2
Labels:
Image - Type: vmi
Os - Arch: amd64
Os - Name: photon
Os - Type: linux
Os - Version: 3
run.tanzu.vmware.com/os-image: ob-20611023-photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable
run.tanzu.vmware.com/tkr: v1.23.8---vmware.2-tkg.2-zshippable
Vmi - Name: ob-20611023-photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable
Os Image Ref:
Name: ob-20611023-photon-3-amd64-vmi-k8s-v1.23.8---vmware.2-tkg.1-zshippable
Name: extensionCert
Value:
Content Secret:
Key: tls.crt
Name: tkg2-cluster-1-extensions-ca
Name: clusterEncryptionConfigYaml
Value: LS0tCmFwaVZlcnNpb246IGFwaXNlcnZlci5jb25maWcuazhzLmlvL3YxCmtpbmQ6IEVuY3J5cHRpb25Db25maWd1cmF0aW9uCnJlc291cmNlczoKICAtIHJlc291cmNlczoKICAgIC0gc2VjcmV0cwogICAgcHJvdmlkZXJzOgogICAgLSBhZXNjYmM6CiAgICAgICAga2V5czoKICAgICAgICAtIG5hbWU6IGtleTEKICAgICAgICAgIHNlY3JldDogMEtEU0xSQnp5cW5oUmhIMGNUUUxVa2dVWXhiV0FVMkM4YVJHU2ZIZlZhUT0KICAgIC0gaWRlbnRpdHk6IHt9Cg==
Version: v1.23.8+vmware.2
Workers:
Machine Deployments:
Class: node-pool
Metadata:
Name: tkg2-cluster-1-nodepool-1
Replicas: 2
Variables:
Overrides:
Name: vmClass
Value: best-effort-medium
Status:
Conditions:
Last Transition Time: 2023-04-04T17:43:24Z
Status: True
Type: Ready
Last Transition Time: 2023-04-04T17:43:24Z
Status: True
Type: ControlPlaneInitialized
Last Transition Time: 2023-04-04T17:43:24Z
Status: True
Type: ControlPlaneReady
Last Transition Time: 2023-04-04T17:39:02Z
Status: True
Type: InfrastructureReady
Last Transition Time: 2023-04-04T17:38:56Z
Status: True
Type: TopologyReconciled
Last Transition Time: 2023-04-04T17:38:47Z
Message: [v1.23.8+vmware.3-tkg.1]
Status: True
Type: UpdatesAvailable
Control Plane Ready: true
Failure Domains:
domain-c1006:
Control Plane: true
Infrastructure Ready: true
Observed Generation: 4
Phase: Provisioned
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal TopologyCreate 29m topology/cluster Created "VSphereCluster/tkg2-cluster-1-4ztx5"
Normal TopologyCreate 29m topology/cluster Created "VSphereMachineTemplate/tkg2-cluster-1-control-plane-whgf9"
Normal TopologyCreate 29m topology/cluster Created "KubeadmControlPlane/tkg2-cluster-1-wb7pg"
Normal TopologyUpdate 29m topology/cluster Updated "Cluster/tkg2-cluster-1"
Normal TopologyCreate 29m topology/cluster Created "VSphereMachineTemplate/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-infra-9pr66"
Normal TopologyCreate 29m topology/cluster Created "KubeadmConfigTemplate/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-bootstrap-sq6h9"
Normal TopologyCreate 29m topology/cluster Created "MachineDeployment/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7"
Normal TopologyUpdate 29m topology/cluster Updated "VSphereCluster/tkg2-cluster-1-4ztx5"
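One detail worth noting in the status above is the UpdatesAvailable condition, which reports that TKR v1.23.8+vmware.3-tkg.1 is available as an upgrade target. While not covered in this post, the upgrade would presumably be started with something along the lines of:
tanzu cluster upgrade tkg2-cluster-1 -n tkg2-cluster-namespace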
You can also use the tanzu CLI to see some information about the cluster.
tanzu cluster list
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN TKR
tkg2-cluster-1 tkg2-cluster-namespace running 1/1 2/2 v1.23.8+vmware.2 <none> v1.23.8---vmware.2-tkg.2-zshippable
tanzu cluster get tkg2-cluster-1 -n tkg2-cluster-namespace
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES TKR
tkg2-cluster-1 tkg2-cluster-namespace running 1/1 2/2 v1.23.8+vmware.2 <none> v1.23.8---vmware.2-tkg.2-zshippable
Details:
NAME READY SEVERITY REASON SINCE MESSAGE
/tkg2-cluster-1 True 46h
├─ClusterInfrastructure - VSphereCluster/tkg2-cluster-1-4ztx5 True 47h
├─ControlPlane - KubeadmControlPlane/tkg2-cluster-1-wb7pg True 46h
│ └─Machine/tkg2-cluster-1-wb7pg-fbb65 True 46h
└─Workers
└─MachineDeployment/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7 True 46h
├─Machine/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-dk9zw True 46h
└─Machine/tkg2-cluster-1-tkg2-cluster-1-nodepool-1-mf8x7-6bf7c75c97-shw6v True 46h
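As an aside, the tanzu CLI can also export a kubeconfig for the cluster, which is an alternative to the kubectl vsphere login approach used earlier. A sketch, assuming the CLI is already logged in to the supervisor:
tanzu cluster kubeconfig get tkg2-cluster-1 -n tkg2-cluster-namespace --admin
On success it should print the kubectl config use-context command to run for the new context.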
Delete the cluster with the tanzu CLI
The tanzu CLI has a cluster delete option which makes removing the cluster very easy.
tanzu cluster delete tkg2-cluster-1 -n tkg2-cluster-namespace
Deleting workload cluster 'tkg2-cluster-1'. Are you sure? [y/N]: y
Workload cluster 'tkg2-cluster-1' is being deleted
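The delete is asynchronous, so the command returns as soon as the request is accepted. To watch it finish, a minimal sketch from the supervisor namespace context:
kubectl get cluster,machine -n tkg2-cluster-namespace
Once the Cluster and Machine objects are gone, the node VMs should disappear from the vSphere inventory as well, along with the NSX objects that were created for the cluster.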
