Much of this is based on the instructions in Installing Tanzu Kubernetes Grid.
We’ll need to start with an environment where vSphere 6.7 or 7.0 (not technically supported, but it works with TKG) is already deployed. I’m using a vApp in vCloud Director (vCD) as it’s quick and easy to get up and running and costs practically nothing.
The first step is to download the components needed to install TKG 1.1 from https://my.vmware.com/web/vmware/details?downloadGroup=TKG-110&productId=988&rPId=46507. This includes the CLI, the Kubernetes v1.18.2 OVA, the load balancer OVA, and the extension manifests. The total size is only a few GB, so the downloads should finish relatively quickly. The TKG CLI executables can be renamed to just tkg (Linux) and tkg.exe (Windows) to make them easier to use. I have an Ubuntu 20.04 VM running in my lab for carrying out most of the CLI work, but this can all be done from PowerShell as well.
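For reference, the unpack-and-rename step on the Ubuntu VM looks something like the following. The archive name is an assumption based on TKG 1.1 download naming; adjust it to match what you actually pulled down:

```
# Unpack the downloaded CLI archive (filename is an assumption; check your download)
gunzip tkg-linux-amd64-v1.1.0_vmware.1.gz

# Rename the binary to just "tkg", make it executable, and put it on the PATH
mv tkg-linux-amd64-v1.1.0_vmware.1 tkg
chmod +x tkg
sudo mv tkg /usr/local/bin/tkg

# Confirm the CLI runs
tkg version
```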
You’ll need to create an SSH key pair, which is documented at Create an SSH Key Pair. I have one pre-created in this environment that I use for public-key authentication with various internal systems.
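If you need to generate one, a minimal sketch using standard OpenSSH tooling (the comment string is just illustrative):

```
# Generate a 4096-bit RSA key pair; accept the default ~/.ssh/id_rsa
# location and optionally set a passphrase when prompted
ssh-keygen -t rsa -b 4096 -C "tkg@corp.local"

# Print the public key; this is what gets pasted into the TKG installer
cat ~/.ssh/id_rsa.pub
```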
Some assumptions about my environment, in case you want to adapt these steps to your own:
- My vCenter server name is vcsa-01a.corp.local
- My Datacenter name is RegionA01
- My cluster name is RegionA01-MGMT
- The management portgroup on my vDS is DSwitch-Management
- The management network in my environment is on the 192.168.110.0/24 subnet
- I have a DHCP server that will allocate IP addresses from the 192.168.110.100-250 range
Create vSphere Resources
In the vSphere Client, create resource pools named TKG-Mgmt and TKG-Comp, and create VM folders with the same names.
In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF Template.
Select Local file, click the button to upload files, and navigate to the photon-3-kube-v1.18.2+vmware.1.ova file.
Follow the installer prompts to deploy a VM from the OVA template.
- Leave the VM name as photon-3-kube-v1.18.2+vmware.1
- Select the compute resource as RegionA01-MGMT
- Accept the end user license agreements (EULA)
- Select the map-vol datastore
- Select the DSwitch-Management network
- Click Finish to deploy the VM.
Right-click the VM and select Template > Convert to Template.
Repeat this whole process for photon-3-haproxy-v1.2.4+vmware.1.ova. (If you’d rather script these two imports, see the govc sketch below.)
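As an aside, if you don’t want to click through the OVF wizard twice, the same imports can be scripted with govc (the vSphere CLI from the govmomi project; not part of the TKG download). A sketch using my environment’s names:

```
# Point govc at vCenter (self-signed certs in the lab, hence GOVC_INSECURE)
export GOVC_URL=vcsa-01a.corp.local
export GOVC_USERNAME=administrator@vsphere.local
export GOVC_PASSWORD='<your password>'
export GOVC_INSECURE=true

# Import each OVA into the cluster's root resource pool and the map-vol datastore
govc import.ova -dc=RegionA01 -ds=map-vol \
  -pool=/RegionA01/host/RegionA01-MGMT/Resources \
  photon-3-kube-v1.18.2+vmware.1.ova
govc import.ova -dc=RegionA01 -ds=map-vol \
  -pool=/RegionA01/host/RegionA01-MGMT/Resources \
  photon-3-haproxy-v1.2.4+vmware.1.ova

# Convert both VMs to templates
govc vm.markastemplate /RegionA01/vm/photon-3-kube-v1.18.2+vmware.1
govc vm.markastemplate /RegionA01/vm/photon-3-haproxy-v1.2.4+vmware.1
```

One note: the wizard steps above select the DSwitch-Management network; with govc you’d supply an import spec via -options if the default network mapping isn’t right.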
UI-based install
Run tkg init --ui. The installer UI will appear in a new Chrome tab.
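If you run the CLI on a VM without a desktop, no browser can open there; the installer UI only listens locally (127.0.0.1:8080 by default, an assumption based on the 1.1 installer's behavior), so an SSH local port forward makes it reachable from your workstation:

```
# Forward local port 8080 to the installer UI on the remote VM,
# then browse to http://127.0.0.1:8080 on the workstation
ssh -L 8080:127.0.0.1:8080 ubuntu@<ubuntu-vm-address>
```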
On the IaaS Provider page:
- vCenter Server: vcsa-01a.corp.local
- Username: administrator@vsphere.local
- Password: <your password>
Click the Connect button. If you’re installing on vSphere 7, you will see a warning; click the Proceed button to continue.
- Datacenter: /RegionA01
- SSH Public Key: paste in the contents of id_rsa.pub
Click the Next button.
On the Management Cluster Settings page:
- Development, Instance Type: small
- Management Cluster Name: vsphere-mgmt
- API Server Load Balancer: /RegionA01/vm/photon-3-haproxy-v1.2.4+vmware.1
- Worker Node Instance Type: small
- Load Balancer Instance Type: small
Click the Next button.
On the Resources page:
- VM Folder: /RegionA01/vm/TKG-Mgmt
- Datastore: /RegionA01/datastore/map-vol
- Clusters, Hosts and Resource Pools: TKG-Mgmt
Click the Next button.
On the Kubernetes Network page:
- Network Name: DSwitch-Management
- Leave Service and Pod CIDR values as-is
Click the Next button.
On the OS Image page:
- OS Image: /RegionA01/vm/photon-3-kube-v1.18.2+vmware.1
Click the Next button.
Click Review Configuration.
Click Deploy Management Cluster.
You can follow the progress in the browser as each step completes.
When the management cluster is finished deploying, you will see the new control plane, worker, and load balancer VMs in your vSphere inventory under the TKG-Mgmt resource pool and folder.
You can run tkg get management-cluster to see that your management cluster is deployed:
MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME
vsphere-mgmt *           vsphere-mgmt-admin@vsphere-mgmt
Your .kube/config file should be updated with a new context for the management cluster, and you can run kubectl commands against that cluster:
kubectl config get-contexts
CURRENT   NAME                              CLUSTER        AUTHINFO             NAMESPACE
*         vsphere-mgmt-admin@vsphere-mgmt   vsphere-mgmt   vsphere-mgmt-admin
kubectl get nodes
NAME                                 STATUS   ROLES    AGE     VERSION
vsphere-mgmt-control-plane-hj4rs     Ready    master   6d19h   v1.18.2+vmware.1
vsphere-mgmt-md-0-5b75dfc9cc-kfxvr   Ready    <none>   6d19h   v1.18.2+vmware.1
CLI-based install
You can also run the installation process entirely from the command line.
Run the tkg get management-cluster command to create the .tkg/config.yaml file.
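On a fresh machine this file won’t exist until the first tkg command runs, so the sequence is simply the following (the backup copy is just my habit, not a requirement):

```
# The first tkg invocation generates ~/.tkg/config.yaml with default values
tkg get management-cluster

# Keep a pristine copy before hand-editing
cp ~/.tkg/config.yaml ~/.tkg/config.yaml.orig
```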
Edit the .tkg/config.yaml file so that it looks similar to the following (the paths noted are specific to my configuration and will differ based on where you run the tkg get management-cluster command). Note: You may need to increase the NODE_STARTUP_TIMEOUT value.
cert-manager-timeout: 30m0s
overridesFolder: /home/ubuntu/.tkg/overrides
NODE_STARTUP_TIMEOUT: 20m
providers:
  - name: cluster-api
    url: /home/ubuntu/.tkg/providers/cluster-api/v0.3.5/core-components.yaml
    type: CoreProvider
  - name: aws
    url: /home/ubuntu/.tkg/providers/infrastructure-aws/v0.5.3/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: vsphere
    url: /home/ubuntu/.tkg/providers/infrastructure-vsphere/v0.6.4/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: tkg-service-vsphere
    url: /home/ubuntu/.tkg/providers/infrastructure-tkg-service-vsphere/v1.0.0/unused.yaml
    type: InfrastructureProvider
  - name: kubeadm
    url: /home/ubuntu/.tkg/providers/bootstrap-kubeadm/v0.3.5/bootstrap-components.yaml
    type: BootstrapProvider
  - name: kubeadm
    url: /home/ubuntu/.tkg/providers/control-plane-kubeadm/v0.3.5/control-plane-components.yaml
    type: ControlPlaneProvider
images:
  all:
    repository: gcr.io/kubernetes-development-244305/cluster-api
  cert-manager:
    repository: gcr.io/kubernetes-development-244305/cert-manager
    tag: v0.11.0_vmware.1
release:
  version: v1.1.0
VSPHERE_SERVER: vcsa-01a.corp.local
VSPHERE_DATACENTER: /RegionA01
VSPHERE_NETWORK: DSwitch-Management
VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "1"
VSPHERE_CONTROL_PLANE_MEM_MIB: "2028"
VSPHERE_WORKER_MEM_MIB: "2028"
VSPHERE_PASSWORD: <your password>
VSPHERE_DATASTORE: /RegionA01/datastore/map-vol
VSPHERE_HA_PROXY_NUM_CPUS: "1"
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "20"
VSPHERE_WORKER_NUM_CPUS: "1"
VSPHERE_HA_PROXY_DISK_GIB: "20"
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11
VSPHERE_HAPROXY_TEMPLATE: /RegionA01/vm/photon-3-haproxy-v1.2.4+vmware.1
VSPHERE_RESOURCE_POOL: /RegionA01/host/RegionA01-MGMT/Resources/TKG-Mgmt
VSPHERE_FOLDER: /RegionA01/vm/TKG-Mgmt
VSPHERE_TEMPLATE: /RegionA01/vm/photon-3-kube-v1.18.2+vmware.1
VSPHERE_HA_PROXY_MEM_MIB: "2028"
VSPHERE_SSH_AUTHORIZED_KEY: <your public key contents>
Run the following command to create the management cluster:
tkg init --infrastructure vsphere --name vsphere-mgmt -p dev
Activity related to the management cluster creation will be streamed to your session.
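If you want more detail than the default output, the tkg CLI accepts a klog-style verbosity flag; -v 6 is the level the TKG documentation tends to use (treat the exact value as an assumption):

```
# Same init, but with verbose logging streamed to the session
tkg init --infrastructure vsphere --name vsphere-mgmt -p dev -v 6
```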
When the management cluster is finished deploying, you will see the new control plane, worker, and load balancer VMs in your vSphere inventory under the TKG-Mgmt resource pool and folder.
You can run tkg get management-cluster to see that your management cluster is deployed:
MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME
vsphere-mgmt *           vsphere-mgmt-admin@vsphere-mgmt
Your .kube/config file should be updated with a new context for the management cluster, and you can run kubectl commands against that cluster:
kubectl config get-contexts
CURRENT   NAME                              CLUSTER        AUTHINFO             NAMESPACE
*         vsphere-mgmt-admin@vsphere-mgmt   vsphere-mgmt   vsphere-mgmt-admin
kubectl get nodes
NAME                                 STATUS   ROLES    AGE     VERSION
vsphere-mgmt-control-plane-hj4rs     Ready    master   6d19h   v1.18.2+vmware.1
vsphere-mgmt-md-0-5b75dfc9cc-kfxvr   Ready    <none>   6d19h   v1.18.2+vmware.1
Create a workload cluster
When the management cluster is finished deploying, open the .tkg/config.yaml file and make the following changes to allow the workload cluster to be created with larger worker nodes, and under a different resource pool and folder from the management cluster (or script the edits, as in the sed sketch after this list):
VSPHERE_WORKER_DISK_GIB: "100"
VSPHERE_WORKER_NUM_CPUS: "4"
VSPHERE_WORKER_MEM_MIB: "8196"
VSPHERE_RESOURCE_POOL: /RegionA01/host/RegionA01-MGMT/Resources/TKG-Comp
VSPHERE_FOLDER: /RegionA01/vm/TKG-Comp
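If you’d rather script those edits than make them by hand, a quick-and-dirty sketch with sed (it assumes the file lives at the default ~/.tkg/config.yaml and that each key appears exactly once):

```
# Back up the working config first
cp ~/.tkg/config.yaml ~/.tkg/config.yaml.bak

# Bump worker sizing and retarget the resource pool and folder
sed -i 's|^VSPHERE_WORKER_DISK_GIB:.*|VSPHERE_WORKER_DISK_GIB: "100"|' ~/.tkg/config.yaml
sed -i 's|^VSPHERE_WORKER_NUM_CPUS:.*|VSPHERE_WORKER_NUM_CPUS: "4"|' ~/.tkg/config.yaml
sed -i 's|^VSPHERE_WORKER_MEM_MIB:.*|VSPHERE_WORKER_MEM_MIB: "8196"|' ~/.tkg/config.yaml
sed -i 's|^VSPHERE_RESOURCE_POOL:.*|VSPHERE_RESOURCE_POOL: /RegionA01/host/RegionA01-MGMT/Resources/TKG-Comp|' ~/.tkg/config.yaml
sed -i 's|^VSPHERE_FOLDER:.*|VSPHERE_FOLDER: /RegionA01/vm/TKG-Comp|' ~/.tkg/config.yaml
```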
Then run the following command to create a workload cluster:
tkg create cluster vsphere-test -p dev -c 1 -w 1
Activity related to the workload cluster creation will be streamed to your session.
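As an aside, the -c and -w flags set the initial control plane and worker counts; the cluster can be resized later with tkg scale cluster (a TKG 1.1 CLI command, syntax per the docs):

```
# Grow the workload cluster to three workers after the fact
tkg scale cluster vsphere-test -w 3
```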
When the workload cluster is finished deploying, you will see its VMs in your vSphere inventory under the TKG-Comp resource pool and folder.
You can run tkg get clusters to see that your workload cluster is deployed:
NAME           NAMESPACE   STATUS    CONTROLPLANE   WORKERS   KUBERNETES
vsphere-test   default     running   1/1            1/1       v1.18.2+vmware.1
You will need to run the tkg get credentials vsphere-test command to have a context created for the new cluster. Once done, you will be able to run kubectl commands against it.
kubectl config get-contexts
CURRENT   NAME                              CLUSTER        AUTHINFO             NAMESPACE
*         vsphere-mgmt-admin@vsphere-mgmt   vsphere-mgmt   vsphere-mgmt-admin
          vsphere-test-admin@vsphere-test   vsphere-test   vsphere-test-admin
kubectl config use-context vsphere-test-admin@vsphere-test
kubectl get nodes
NAME                               STATUS   ROLES    AGE     VERSION
vsphere-test-control-plane-k55hq   Ready    master   6d19h   v1.18.2+vmware.1
vsphere-test-md-0-78dc4b86-jflhn   Ready    <none>   6d19h   v1.18.2+vmware.1
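Lastly, when you’re done experimenting, teardown is also driven from the tkg CLI. A sketch of the order I’d use (workload cluster first, then the management cluster):

```
# Delete the workload cluster, then the management cluster
tkg delete cluster vsphere-test
tkg delete management-cluster vsphere-mgmt
```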