I’ve been using vSphere with Tanzu for various purposes over the last few years but have never done a complete walkthrough of the installation. With the recent release of vSphere 8 and NSX 4.1, I decided now was a great time to take that on.
My Environment
The following lays out some of the configuration in my deployment environment. Unless you plan on mimicking this setup exactly, you’ll want to replace any vSphere object names, IP Addresses, DNS names, etc… with your own.
- Management Network: 192.168.110.0/24 (no VLAN)
- ESXi Tunnel Endpoint (TEP) VLAN/network (used by ESXi hosts for creating overlay network): 130 / 192.168.130.0/24
- Edge TEP VLAN/network (used by the NSX edge VM for creating overlay network): 120 / 192.168.120.0/24
- Edge Uplink VLAN/network (used by the NSX edge VM for upstream communication with VyOS router): 140 / 192.168.140.0/24
- VLAN 140 BGP peer address on VyOS VM: 192.168.140.1 (AS 65002, will peer with AS 65013)
- Ingress/Egress IP Address range: 10.40.14.0/24 (split into /26 networks, used by NSX for creating load balanced addresses accessible outside of the Kubernetes clusters)
- Active Directory (AD) Domain: corp.vmw
- AD admin user granted vSphere privileges: vmwadmin
- vCenter Server FQDN: vcsa-01a.corp.vmw
- DNS Server: 192.168.110.10
- NTP Server: 192.168.100.1
- vSphere Cluster name: RegionA01-Compute
- Shared Storage: vol1 (NFS datastore)
- Distributed Switch: DSwitch
- Distributed Switch Portgroup for management traffic: Management
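For reference, these are the DNS A records that end up being needed over the course of this walkthrough (sketched in BIND zone-file style for the corp.vmw zone; the wcp record can only be added once Workload Management is deployed and the Control Plane Node Address is known):
nsxmgr-01a   IN  A  192.168.110.42   ; NSX Manager appliance
nsxmanager   IN  A  192.168.110.49   ; NSX Manager cluster virtual IP
nsx-edge-1   IN  A  192.168.110.91   ; NSX edge node
wcp          IN  A  10.40.14.66      ; supervisor API endpoint (assigned from the ingress range)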
vSphere Configuration
Beyond a basic vSphere installation (four-host cluster, HA, DRS, vDS, shared storage), I only needed to make two additions to my configuration.
The first was to create two extra distributed port groups on the vDS:
- Navigate to Inventory, Networking
- Right-click DSwitch, Distributed Port Group, New Distributed Port Group
- Name=DPortGroup-EDGE-UPLINK
- (# ports=8, VLAN type=VLAN trunking, leave everything else at defaults)
- Note: This portgroup will be used for NSX edge uplink traffic
- Right-click DSwitch, Distributed Port Group, New Distributed Port Group
- Name=DPortGroup-EDGE-TEP
- (# ports=8, VLAN type=VLAN trunking, leave everything else at defaults)
- Note: This portgroup will be used for NSX edge TEP traffic
The second was to create a storage policy based on tag values:
- Navigate to Inventory, Datastores
- Click on the vol1 datastore
- In the Tags pane, click the Assign link
- Click the Add Tag link
- Name: k8s-storage
- Click the Create New Category link
- Category Name: k8s
- Click the Create button
- Category: k8s
- Click the Create button
- Select k8s-storage and click the Assign button
- Navigate to Policies and Profiles
- Click the Create link
- Name: k8s-policy
- Datastore Specific Rules: Enable tag based placement rules
- Tag Category: k8s
- Tags: k8s-storage (validate that the appropriate datastore is shown as compatible)
- Click the Finish button
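If you'd rather script the tagging portion, a rough equivalent with govc looks like the following. This is a sketch rather than exactly what I ran: it assumes govc is installed and pointed at vCenter via GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD, the datacenter name is a placeholder, and I created the storage policy itself in the UI as described above.
govc tags.category.create k8s
govc tags.create -c k8s k8s-storage
govc tags.attach k8s-storage /<your-datacenter>/datastore/vol1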
NSX Installation and Configuration
Download the NSX 4.1 Manager appliance OVA (build 21332672)
Deploy the NSX Manager OVA
In the vSphere Client, deploy the NSX OVA to the RegionA01-Compute cluster with the following parameters:
Note: You will likely see a warning similar to the following near the beginning of the process on the Review Details page:

You can click either of the Ignore All links to move forward.
- VM name: nsxmgr-01a
- Size: Medium
- Datastore: vol1, thin provision
- Network: Management
- Grub password: <leave blank>
- Root Password: <set as appropriate>
- Admin password: <set as appropriate>
- Audit password: <set as appropriate>
- Hostname: nsxmgr-01a.corp.vmw
- IPv4 gateway: 192.168.110.1
- Management network IPv4 address: 192.168.110.42
- Management network netmask: 255.255.255.0
- DNS server: 192.168.110.10
- Domain search list: corp.vmw
- NTP server: 192.168.100.1
- Enable SSH and allow root SSH login
Power on the nsxmgr-01a VM.
Configure the NSX Manager
- In a web browser, go to the IP address configured for the NSX Manager appliance and login with the admin user and password configured during deployment.
Note: You will have to accept the EULA before proceeding. There are also pages for joining the Customer Experience Improvement Program and a quick overview of NSX that you can skip past (and click the “Don’t show this again” box on the NSX overview to keep it from coming back).
- Navigate to System, Settings, Licenses.
- Click the Add License button and enter an appropriate NSX license key in the key field. Click the Add button.
- Navigate to System, Appliances and click the Set Virtual IP link. Enter 192.168.110.49 for the Virtual IPv4 Address and click the Save button. Note: Create a DNS record mapping nsxmanager.corp.vmw to 192.168.110.49.
- I am using a custom wildcard certificate (signed by a universally trusted internal CA) for many components in my environment and like to replace any self-signed certificates with it. The next couple of certificate-related steps aren’t strictly necessary but are recommended.
- Navigate to System, Settings, Certificates and click the Import dropdown and then select Certificate. Complete the form per the following:
- Name: NSX-API-CERT
- Service Certificate: No
- Certificate Contents: <paste the contents of your certificate and CA certificate, in that order>
- Private Key: <paste the contents of the private key associated with the certificate>
- Passphrase: <set as appropriate or leave blank>
- Click the Save button.
- From a Linux system with network access to the NSX Manager, enter commands similar to the following to instruct NSX Manager to use the imported certificate:
export CERTIFICATE_ID=$(curl --insecure -u admin:'<admin password>' -X GET "https://192.168.110.49/api/v1/trust-management/certificates" | jq -r '.results[] | select(.display_name == "NSX-API-CERT") | .id')
curl --insecure -u admin:'<admin password>' -X POST "https://192.168.110.49/api/v1/trust-management/certificates/$CERTIFICATE_ID?action=apply_certificate&service_type=MGMT_CLUSTER"
Be sure to replace 192.168.110.49 with the appropriate NSX Manager IP address and <admin password> with the correct NSX Manager admin password.
You’ll likely get no output if this is successful. You can verify that the certificate was applied by browsing to the NSX Manager IP or FQDN (depending on how your certificate is configured) and checking for any certificate errors.
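You can also check which certificate NSX Manager is now serving from the command line with openssl (this assumes the nsxmanager.corp.vmw DNS record created earlier):
echo | openssl s_client -connect nsxmanager.corp.vmw:443 -servername nsxmanager.corp.vmw 2>/dev/null | openssl x509 -noout -subject -issuer -dates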
Configure Workload Management Networking
- Navigate to System, Fabric, Compute Manager and click the Add Compute Manager link. Complete the form per the following:
- Name: vcsa-01a (this is arbitrary but should be meaningful)
- Multi NSX: No
- FQDN: vcsa-01a.corp.vmw
- Username: administrator@vsphere.local (does not have to be administrator but must have administrator privileges)
- Password: <set as appropriate>
- Create Service Account: Yes
- Enable Trust: Yes
- Access Level: Full Access
- Click the Add button. Note: If you get a warning about the VC thumbprint being missing, click the Add button (on the warning dialog) to add it. When the registration is complete, the Compute Managers page should look similar to the following:

- Navigate to Networking, Settings, Global Networking Config. Set the MTU as appropriate.
- Navigate to System, Fabric, Settings. Set Tunnel Endpoint MTU and Remote Tunnel Endpoint MTU as appropriate.
- Navigate to System, Fabric, Profiles, Uplink Profiles and click the Add Profile button. Complete the form per the following:
- Name: ESXI-UPLINK-PROFILE
- Teaming Policy: Failover Order
- Active Uplinks: uplink-1 (you can specify more than one)
- Transport VLAN: 130
- Click the Add button. Repeat the previous steps with the following parameters:
- Name: EDGE-UPLINK-PROFILE
- Teaming Policy: Failover Order
- Active Uplinks: uplink-1
- Transport VLAN: 120
- Click the Add button.
- Navigate to Networking, IP Management, IP Address Pools and click the Add IP Address Pool button. Complete the form per the following:
- Name: ESXI-TEP-IP-POOL
- Subnets: Click Set, Add Subnet, IP Ranges. Configure the following:
- IP Ranges: 192.168.130.2-192.168.130.20
- CIDR: 192.168.130.0/24
- Gateway IP: 192.168.130.1

- Click the Add button and then click the Apply button and then click the Save button.
- Repeat the previous steps with information appropriate for the Edge TEP IP range:
- Name: EDGE-TEP-IP-POOL
- IP Ranges: 192.168.120.2-192.168.120.20
- CIDR: 192.168.120.0/24
- Gateway IP: 192.168.120.1
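If you want to confirm both pools outside of the UI, the same curl/jq pattern used earlier for certificates works against the Policy API. The /policy/api/v1/infra/ip-pools path is what I believe is correct for NSX 4.1; verify against the API documentation for your version.
curl --insecure -u admin:'<admin password>' -X GET "https://192.168.110.49/policy/api/v1/infra/ip-pools" | jq -r '.results[].display_name'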
- Navigate to System, Fabric, Hosts, Transport Node Profile and click the Add Transport Node Profile button. Complete the form per the following:
- Name: HOST-TRANSPORT-NODE-PROFILE
- Host Switch: Click Set, Add Host Switch
- vCenter Name: vcsa-01a
- Transport Zone: nsx-overlay-transportzone
- Uplink Profile: ESXI-UPLINK-PROFILE
- Select vDS: DSwitch
- IP Assignment: Use IP Pool
- IP Pool: ESXI-TEP-IP-POOL
- Advanced Configuration > Mode: Standard
- Teaming Policy Uplink Mapping > uplink-1: Uplink 1
- Click the Add button and then click the Apply button and then click the Save button.
Navigate to System, Fabric, Hosts, Clusters and expand and select the RegionA01-Compute cluster. You should see that all hosts are in a Not Configured state:

Click the Configure NSX link and select the HOST-TRANSPORT-NODE-PROFILE Transport Node Profile. Click the Save button. It will take several minutes but the NSX Configuration state of each ESXi host should change to Success and the Status should be Up.

You can see from the TEP IP Address column that the ESXi hosts have taken the first four IP addresses from the ESXI-TEP-IP-POOL IP pool (192.168.130.2-192.168.130.20).
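At this point you can sanity-check TEP-to-TEP connectivity and overlay MTU from an ESXi shell. This is a sketch: vmk10 is typically the TEP vmkernel interface that NSX creates (confirm with esxcfg-vmknic -l), and the 1572-byte payload assumes a 1600-byte MTU minus 28 bytes of IP/ICMP headers.
# from esx-01a, ping another host's TEP over the NSX (vxlan) netstack without fragmentation
vmkping ++netstack=vxlan -I vmk10 -d -s 1572 192.168.130.3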
And while this is happening, you’ll see several tasks in the vSphere Client:

- Navigate to System, Fabric, Nodes, Edge Transport Nodes and click the Add Edge Node button. Complete the wizard per the following:
- Name: nsx-edge-1
- FQDN: nsx-edge-1.corp.vmw (note: create a corresponding DNS record)
- Size: Large
- CLI Password: <set as appropriate>
- Allow SSH Login: Yes
- Root Password: <set as appropriate>
- Allow Root SSH Login: Yes
- Audit User Name: audit
- Audit Password: <set as appropriate>
- Compute Manager: vcsa-01a
- Cluster: RegionA01-Compute
- Datastore: vol1
- Management IP Type: Static
- Management IP: 192.168.110.91/24
- Default Gateway: 192.168.110.1
- Management Interface: Management
- Search Domain Names: corp.vmw
- DNS Servers: 192.168.110.10
- NTP Servers: 192.168.100.1
- Edge Switch Name: nvds1
- Transport Zone: nsx-overlay-transportzone
- Uplink Profile: EDGE-UPLINK-PROFILE
- IP Assignment (TEP): Use IP Pool
- IP Pool: EDGE-TEP-IP-POOL
- uplink-1: DPortGroup-EDGE-TEP
- Click the Add Switch link and configure a second switch:
- Edge Switch Name: nvds2
- Transport Zone: nsx-vlan-transportzone
- Uplink Profile: EDGE-UPLINK-PROFILE
- uplink-1: DPortGroup-EDGE-UPLINK
- Click Finish.
Note: You will see an OVF being deployed in the vSphere Client while the Edge node is being deployed and configured.

When the edge deployment is completed, you should see the node in the list of Edge Nodes with a Configuration State of Success and a Node Status of Up.

- Navigate to System, Fabric, Nodes, Edge Clusters and click the Add Edge Cluster link. Complete the form per the following:
- Name: EDGE-CLUSTER-1
- Transport Nodes: nsx-edge-1
- Click the Add button.
- Navigate to Networking, Segments and click the Add Segment button. Complete the form per the following:
- Name: TIER-0-LS-UPLINK
- Connected Gateway: None
- Transport Zone: nsx-vlan-transportzone
- VLAN: 140
- Click the Save button. Click No when asked if you want to continue configuring the segment.
- Navigate to Networking, Tier-0 Gateways, click the Add Gateway button, and select Tier-0 from the dropdown. Complete the form per the following:
- Name: Tier-0_VWT
- HA Mode: Active Standby
- Edge Cluster: EDGE-CLUSTER-1
- Failover: Non Preemptive
- Click Save and then click Yes when asked if you want to continue configuring the Tier-0 Gateway
- Interfaces > Click Set, and then click the Add Interface button. Complete the form per the following:
- Name: TIER-0_VWT-UPLINK1
- Type: External
- IP Address / Mask: 192.168.140.3/24
- Connected To(Segment): TIER-0-LS-UPLINK
- Edge Node: nsx-edge-1
- Click the Save button and then click the Close button.
- BGP (you could configure a static route if desired)
- BGP: On
- Local AS: 65013
- ECMP: On
- Multipath Relax: On
- Route Aggregation > Click Set and then click the Add Prefix button.
- Enter 10.40.14.0/24 for the CIDR.
- Click the Add button, then click the Apply button.
- Click the Save button.
- BGP Neighbors > Click Set and then click the Add BGP Neighbor button.
- IP Address: 192.168.140.1
- Remote AS Number: 65002
- Source Address: 192.168.140.3
- Click the Save button and then click the Close button
- Route Re-Distribution > Click Set and then click the Add Route Re-Distribution button.
- Name: Tier-0_RRD
- Route Re-Distribution > Click Set and then select the following:
- Under Tier-0 Subnets: NAT IP, All Connected Interfaces & Segments
- Under Advertised Tier-1 Subnets: LB VIP, LB SNAT IP, NAT IP
- Click the Apply button, then click the Add button, then click the Apply button
- Click the Save button
- Click the Close Editing button.
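For completeness, the upstream side of this peering on the VyOS router is minimal. The sketch below uses VyOS 1.3-style syntax (adjust for your VyOS version and for whether you want a default route originated toward NSX), followed by the commands I'd use to confirm the session from each end.
# VyOS (configure mode): peer with the Tier-0 uplink interface and originate a default route
set protocols bgp 65002 neighbor 192.168.140.3 remote-as 65013
set protocols bgp 65002 neighbor 192.168.140.3 address-family ipv4-unicast default-originate
commit
save

# VyOS (operational mode): the neighbor should be Established and 10.40.14.0/24 learned
show ip bgp summary

# NSX edge (SSH to nsx-edge-1 as admin): find the Tier-0 SR VRF, enter it, check BGP
get logical-routers
vrf <tier-0 service router vrf id>
get bgp neighbor summary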
Configure Workload Management
In the vSphere Client, navigate to Workload Management. You should see a page similar to the following if you’ve never visited this page before:

Click the Get Started button
If everything has been configured properly up to this point, you should be presented with a page similar to the following:

The “Supports NSX” label next to the vCenter Server name means that NSX is configured and Workload Management is able to use it. Since we’re using NSX and only have the one vCenter Server, there is nothing to configure here. Click the Next button.

We’re only using a single zone so you can click the Cluster Deployment button.

Again, if everything is working as expected, the single cluster in the vCenter Server inventory should show up as compatible. Set the Supervisor Name to svc1 and select the RegionA01-Compute cluster. You can leave the vSphere Zone Name field blank. Click the Next button.

Set all three policies to k8s-policy. Click the Next button.

Configure the following parameters on the Management Network page:
- Network Mode: Static
- Network: Management
- Starting IP Address: 192.168.110.101
Note: The IP address specified and the next four after it (here, 192.168.110.101 through 192.168.110.105) will be reserved. The first is a VIP shared across the supervisor cluster control plane nodes, the next three are assigned directly to the individual nodes, and the last is held for performing rolling upgrades.
- Subnet Mask: 255.255.255.0
- Gateway: 192.168.110.1
- DNS Server(s): 192.168.110.10
- DNS Search Domain(s): corp.vmw
- NTP Server(s): 192.168.100.1
The completed configuration should look like the following:

Click the Next button.

Configure the following parameters on the Workload Network page:
- vSphere Distributed Switch: DSwitch
- Edge Cluster: EDGE-CLUSTER-1
- DNS Server(s): 192.168.110.10
- Tier-0 Gateway: Tier-0_VWT
- NAT Mode: Enabled
- Subnet Prefix: /28
- Namespace Network: 10.244.0.0/20
- Service CIDR: 10.96.0.0/23
- Ingress CIDR: 10.40.14.64/26
- Egress CIDR: 10.40.14.128/26
The completed configuration should look like the following:

Click the Next button.

Leave the Supervisor Control Plane Size as Small but set the API Server DNS Name(s) to wcp.corp.vmw.
Click the Finish button.
Monitor the WCP Installation
You’ll see a tremendous number of tasks getting started almost immediately:

On the Workload Management page, you’ll also see that a Supervisor Cluster has been created and is in a Configuring state:

In the Inventory, Hosts and Clusters view, you will see that a resource pool named Namespaces has been created and there are three SupervisorControlPlane VMs present:

In the Workload Management view, you can click the (view) link in the Config Status column to get more detail about what is happening with the deployment:

You can expand any of the sections to get even more detail:



You’ll likely notice the following banner (if it wasn’t already there) fairly early in the process:

A license needs to be applied to the supervisor cluster but you can wait until the deployment is finished to do that.
It’s not unusual to see errors during the deployment on the Workload Management page:


These (and others) are expected, as the deployment is not yet complete but various health checks are already in play. You can ignore these until the deployment is completed or at least much farther along.
If anything goes seriously wrong during the deployment, you can check the /var/log/vmware/wcp/wcpsvc.log file on the vCenter Server appliance. You will likely want to filter out debug-level messages, as the default logging level is extremely verbose.
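For example, to follow the log from a shell on the vCenter Server appliance while skipping the debug-level noise:
tail -f /var/log/vmware/wcp/wcpsvc.log | grep -v -i debug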
A Content Library is also created in vSphere; it holds the node image OVAs used for creating Tanzu Kubernetes Grid Service clusters.


When the deployment is finished, you should see that the Config Status and Host Config Status of the supervisor cluster are both Running.

You can also see from this page that the supervisor cluster API endpoint (Control Plane Node Address) is 10.40.14.66.
In NSX Manager, you will see various new objects created:
Segment:

The subnet in use on this segment, 10.244.0.1/28, is carved (per the /28 subnet prefix) out of the 10.244.0.0/20 namespace network specified when configuring Workload Management.
Note: The name, domain-c1006… corresponds to the MOID of the cluster in vCenter Server.
If you check the Inventory, Networking view in the vSphere Client, you will see that there is now a distributed portgroup that aligns with this segment:

Tier-1 Gateway:

The Tier-1 Gateway is linked to the Tier-0 Gateway that was created earlier, and is also linked to the previously noted segment:

NAT Rules:

The main things to take away from these rules:
- All traffic between internal components (10.244.0.0/20) will not be NAT’d.
- All traffic from an internal component (10.244.0.0/20) with a destination of the ingress range of IP addresses (10.40.14.64/26) will not be NAT’d.
- All other traffic (basically anything headed out of the cluster) will be NAT’d such that the source IP address is translated to 10.40.14.129 (the first IP address in the previously configured egress range).
Load Balancers:

The first load balancer, whose name starts with clusterip, is used for creating the various ClusterIP addresses in the 10.96.0.0/23 range specified for services during Workload Management configuration.
(sample)

The second load balancer, whose name starts with domain-c1006, is used for providing external access to supervisor cluster resources (in the 10.40.14.64/26 range specified for ingress)

IP Address Pools:

The first pool, with 10-40-14-129 in the name, corresponds to the egress range of IP addresses

You can see that only 10.40.14.129 has been allocated (as the translated NAT IP address for outbound traffic)

The second pool, with 10-40-14-65 in the name, corresponds to the ingress range of IP addresses

You can see that 10.40.14.65 (external load balancer address for the CSI controller endpoint) and 10.40.14.66 (external load balancer address for the supervisor cluster API endpoint) have been allocated.
One very useful view in NSX is the Inventory, Containers view. You can see just about everything that was created during Workload Management deployment here.

Apply a License
The last step of the basic configuration for Workload Management is to apply a valid license.
Navigate to Administration, Licensing, Licenses. Click on the Assets tab and then on the Supervisors sub-tab.

You can see that the recently created supervisor cluster is using an evaluation license. Select this supervisor and click the Assign License link. Pick an existing license or enter a new license key.
Access the Supervisor Cluster
You should have seen that the IP address assigned to the supervisor cluster API endpoint (Control Plane Node Address) was 10.40.14.66. You should create a DNS record that maps wcp.corp.vmw (set as the API Server DNS Name(s) value) to this IP address. If you open a browser and navigate to https://wcp.corp.vmw (or https://10.40.14.66) you should see a page similar to the following:

Use the Select Operating System dropdown to set your target OS and then download the appropriate CLI plugin. You should have a file named vsphere-plugin.zip, which contains the kubectl and kubectl-vsphere executable files. Copy these to the system where you will be running kubectl commands (and make them executable if on Linux).
You can access the supervisor cluster as if it were a normal Kubernetes cluster via kubectl commands, but you must log in with the kubectl-vsphere plugin first. The following is an example of what this might look like:
kubectl vsphere login --server wcp.corp.vmw
This is the bare minimum you can supply, and you will be prompted for a username and password. You can use the -u switch to supply a username (a vSphere SSO or AD user if joined to AD; it must have admin privileges) or --insecure-skip-tls-verify if the certificate in use by your vCenter Server is not trusted. If you will be logging in a lot and aren’t terribly security conscious, you can set the KUBECTL_VSPHERE_PASSWORD environment variable to avoid entering your password repeatedly (see the example after the login output below). With a successful login, you should see output similar to the following:
Logged in successfully.
You have access to the following contexts:
wcp.corp.vmw
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
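Putting those options together, a non-interactive login might look like the following (the username and server are from this environment; substitute your own, and only use the password variable if you're comfortable with it living in your shell history/environment):
export KUBECTL_VSPHERE_PASSWORD='<your password>'
kubectl vsphere login --server wcp.corp.vmw -u vmwadmin@corp.vmw --insecure-skip-tls-verify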
You can now use kubectl config get-contexts to see that you have a new Kubernetes context created:
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* wcp.corp.vmw wcp.corp.vmw wcp:wcp.corp.vmw:vmwadmin@corp.vmw
You can investigate the cluster structure now:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
42386121dd9e706b302805ade87d6ebb Ready control-plane,master 3h49m v1.24.9+vmware.wcp.1 10.244.0.2 <none> VMware Photon OS/Linux 4.19.261-1.ph3-esx containerd://1.5.9
4238859ae37a22f57e1eed15e07f2d08 Ready control-plane,master 3h39m v1.24.9+vmware.wcp.1 10.244.0.4 <none> VMware Photon OS/Linux 4.19.261-1.ph3-esx containerd://1.5.9
4238f70acc3eceb87249c7901d7ce833 Ready control-plane,master 3h39m v1.24.9+vmware.wcp.1 10.244.0.3 <none> VMware Photon OS/Linux 4.19.261-1.ph3-esx containerd://1.5.9
esx-01a.corp.vmw Ready agent 3h28m v1.24.4-sph-6103760 192.168.110.51 <none> <unknown> <unknown> <unknown>
esx-02a.corp.vmw Ready agent 3h31m v1.24.4-sph-6103760 192.168.110.52 <none> <unknown> <unknown> <unknown>
esx-03a.corp.vmw Ready agent 3h35m v1.24.4-sph-6103760 192.168.110.53 <none> <unknown> <unknown> <unknown>
esx-04a.corp.vmw Ready agent 3h33m v1.24.4-sph-6103760 192.168.110.54 <none> <unknown> <unknown> <unknown>
You can see that each of the control plane nodes has an IP address in the 10.244.0.0/20 range that was specified for the namespace network. The worker nodes (labeled with the agent role here) are the four ESXi hosts in the cluster. They are running a spherelet process that does the work of a kubelet in a standard Kubernetes cluster.
Using kubectl get po -A, you will find that there are over 100 pods running across multiple namespaces. All of these are also using IP addresses in the 10.244.0.0/20 range.
kubectl get po -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-56dc8bc67b-7kdxl 1/1 Running 0 3h47m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system coredns-56dc8bc67b-grjxg 1/1 Running 6 (3h45m ago) 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system coredns-56dc8bc67b-mmdsw 1/1 Running 0 3h50m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system docker-registry-42386121dd9e706b302805ade87d6ebb 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system docker-registry-4238859ae37a22f57e1eed15e07f2d08 1/1 Running 1 3h41m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system docker-registry-4238f70acc3eceb87249c7901d7ce833 1/1 Running 0 3h41m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system etcd-42386121dd9e706b302805ade87d6ebb 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system etcd-4238859ae37a22f57e1eed15e07f2d08 1/1 Running 0 3h41m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system etcd-4238f70acc3eceb87249c7901d7ce833 1/1 Running 0 3h40m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system kube-apiserver-42386121dd9e706b302805ade87d6ebb 1/1 Running 2 (3h37m ago) 3h43m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system kube-apiserver-4238859ae37a22f57e1eed15e07f2d08 1/1 Running 2 (3h37m ago) 3h37m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system kube-apiserver-4238f70acc3eceb87249c7901d7ce833 1/1 Running 1 (3h37m ago) 3h38m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system kube-controller-manager-42386121dd9e706b302805ade87d6ebb 1/1 Running 5 (3h37m ago) 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system kube-controller-manager-4238859ae37a22f57e1eed15e07f2d08 1/1 Running 0 3h41m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system kube-controller-manager-4238f70acc3eceb87249c7901d7ce833 1/1 Running 0 3h40m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system kube-proxy-fwf8d 1/1 Running 0 3h38m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system kube-proxy-lp7vq 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system kube-proxy-qqp6f 1/1 Running 0 3h37m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system kube-scheduler-42386121dd9e706b302805ade87d6ebb 2/2 Running 5 (3h37m ago) 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system kube-scheduler-4238859ae37a22f57e1eed15e07f2d08 2/2 Running 2 (3h37m ago) 3h41m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system kube-scheduler-4238f70acc3eceb87249c7901d7ce833 2/2 Running 0 3h41m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system kubectl-plugin-vsphere-42386121dd9e706b302805ade87d6ebb 1/1 Running 5 (3h34m ago) 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system kubectl-plugin-vsphere-4238859ae37a22f57e1eed15e07f2d08 1/1 Running 3 (3h37m ago) 3h41m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system kubectl-plugin-vsphere-4238f70acc3eceb87249c7901d7ce833 1/1 Running 3 (3h37m ago) 3h40m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system wcp-authproxy-42386121dd9e706b302805ade87d6ebb 1/1 Running 0 3h33m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system wcp-authproxy-4238859ae37a22f57e1eed15e07f2d08 1/1 Running 0 3h40m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system wcp-authproxy-4238f70acc3eceb87249c7901d7ce833 1/1 Running 0 3h40m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
kube-system wcp-fip-42386121dd9e706b302805ade87d6ebb 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
kube-system wcp-fip-4238859ae37a22f57e1eed15e07f2d08 1/1 Running 1 3h41m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
kube-system wcp-fip-4238f70acc3eceb87249c7901d7ce833 1/1 Running 0 3h41m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
svc-tmc-c1006 tmc-agent-installer-27981925-q8qqj 0/1 Completed 0 13s 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-appplatform-operator-system kapp-controller-6d864cc846-n7zjc 2/2 Running 0 3h36m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-appplatform-operator-system vmware-system-appplatform-operator-mgr-0 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-appplatform-operator-system vmware-system-psp-operator-mgr-944548874-dkxkw 1/1 Running 11 (3h37m ago) 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-capw capi-controller-manager-78f5687c59-d492k 2/2 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-capw capi-controller-manager-78f5687c59-jtdkq 2/2 Running 2 (3h42m ago) 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-capw capi-controller-manager-78f5687c59-tc2c4 2/2 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-capw capi-kubeadm-bootstrap-controller-manager-5f84f65bff-b2gtw 2/2 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-capw capi-kubeadm-bootstrap-controller-manager-5f84f65bff-dlndm 2/2 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-capw capi-kubeadm-bootstrap-controller-manager-5f84f65bff-sf7lq 2/2 Running 2 (3h42m ago) 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-capw capi-kubeadm-control-plane-controller-manager-5854b9584d-dtrmh 2/2 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-capw capi-kubeadm-control-plane-controller-manager-5854b9584d-g9w9d 2/2 Running 0 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-capw capi-kubeadm-control-plane-controller-manager-5854b9584d-x9kx6 2/2 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-capw capv-controller-manager-6f6b49dbbc-49hhd 1/1 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-capw capv-controller-manager-6f6b49dbbc-5xd46 1/1 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-capw capv-controller-manager-6f6b49dbbc-xm4wh 1/1 Running 3 (3h37m ago) 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-capw capw-controller-manager-6bcd89fbc4-6hrbr 2/2 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-capw capw-controller-manager-6bcd89fbc4-gzddb 2/2 Running 3 (3h37m ago) 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-capw capw-controller-manager-6bcd89fbc4-z7v49 2/2 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-capw capw-webhook-6c679d5997-8ptc5 2/2 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-capw capw-webhook-6c679d5997-flrm8 2/2 Running 0 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-capw capw-webhook-6c679d5997-wtcz6 2/2 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-cert-manager cert-manager-cainjector-8549f4c675-cbzln 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-cert-manager cert-manager-d5b6fdc4d-8n8nw 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-cert-manager cert-manager-webhook-7b85bcccf6-zpcq4 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-csi vsphere-csi-controller-86c5b9b557-tbp4l 6/6 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-csi vsphere-csi-controller-86c5b9b557-wvzgl 6/6 Running 21 (3h37m ago) 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-csi vsphere-csi-controller-86c5b9b557-xv7hj 6/6 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-csi vsphere-csi-webhook-86d49dc645-5wvk5 1/1 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-csi vsphere-csi-webhook-86d49dc645-mjc7z 1/1 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-csi vsphere-csi-webhook-86d49dc645-zdw5t 1/1 Running 0 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-kubeimage image-controller-854986c7bb-mwbxn 1/1 Running 1 (3h45m ago) 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-license-operator vmware-system-license-operator-controller-manager-66fdc64ff5gql 1/1 Running 0 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-license-operator vmware-system-license-operator-controller-manager-66fdc64ffk26g 1/1 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-license-operator vmware-system-license-operator-controller-manager-66fdc64fl7s2w 1/1 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-logging fluentbit-dwsrl 1/1 Running 0 3h41m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-logging fluentbit-n66dg 1/1 Running 0 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-logging fluentbit-rv7hz 1/1 Running 0 3h41m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-nsop vmware-system-nsop-controller-manager-54cf7966c4-7zsq4 1/1 Running 0 3h45m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-nsop vmware-system-nsop-controller-manager-54cf7966c4-qk9st 1/1 Running 0 3h45m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-nsop vmware-system-nsop-controller-manager-54cf7966c4-zms6d 1/1 Running 2 (3h42m ago) 3h45m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-nsx nsx-ncp-588d784db7-clgg5 2/2 Running 3 (3h37m ago) 3h50m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-pinniped pinniped-concierge-7448d7df5c-rpqpg 1/1 Running 0 3h34m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-pinniped pinniped-concierge-7448d7df5c-sn2f5 1/1 Running 0 3h34m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-pinniped pinniped-concierge-7448d7df5c-wn82f 1/1 Running 0 3h34m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-pinniped pinniped-concierge-kube-cert-agent-6d954ff597-slnkg 1/1 Running 0 3h34m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-pinniped pinniped-supervisor-5cd47575bf-gqlhk 1/1 Running 0 3h34m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-pinniped pinniped-supervisor-5cd47575bf-xw5c4 1/1 Running 0 3h34m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-pinniped pinniped-supervisor-5cd47575bf-zd7mn 1/1 Running 0 3h34m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-registry vmware-registry-controller-manager-646c7db74b-78vlv 2/2 Running 0 3h50m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg masterproxy-tkgs-plugin-cqxdk 1/1 Running 0 3h33m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg masterproxy-tkgs-plugin-wkbw5 1/1 Running 0 3h33m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-tkg masterproxy-tkgs-plugin-xfwtv 1/1 Running 0 3h33m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-tkg tanzu-addons-controller-manager-56ff75ffb4-vqrks 1/1 Running 0 3h35m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg tanzu-auth-controller-manager-78b4797cf8-bp6mf 1/1 Running 0 3h35m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg tanzu-capabilities-controller-manager-7958449676-tbzxl 1/1 Running 0 3h35m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-tkg tanzu-featuregates-controller-manager-7b5cc5c55b-rct55 1/1 Running 0 3h35m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-tkg tkgs-plugin-server-8cd7f688d-5mzmb 1/1 Running 0 3h34m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-tkg tkgs-plugin-server-8cd7f688d-v5x2v 1/1 Running 0 3h34m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-tkg tkgs-plugin-server-8cd7f688d-wc9kt 1/1 Running 0 3h34m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg tkr-conversion-webhook-manager-6747484b47-45qp6 1/1 Running 0 3h35m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg tkr-resolver-cluster-webhook-manager-7b64954d4-kbkvd 1/1 Running 0 3h35m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-tkg tkr-status-controller-manager-59967796b9-txcmb 1/1 Running 0 3h35m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-tkg vmware-system-tkg-controller-manager-85dd9fb586-82tnv 2/2 Running 0 3h34m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-tkg vmware-system-tkg-controller-manager-85dd9fb586-nw9s9 2/2 Running 0 3h34m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-tkg vmware-system-tkg-controller-manager-85dd9fb586-vmppf 2/2 Running 0 3h34m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg vmware-system-tkg-state-metrics-7b6fbd6fcb-47hfc 2/2 Running 0 3h34m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-tkg vmware-system-tkg-state-metrics-7b6fbd6fcb-bs67l 2/2 Running 0 3h34m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg vmware-system-tkg-state-metrics-7b6fbd6fcb-wfsxz 2/2 Running 0 3h34m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-tkg vmware-system-tkg-webhook-6fbb5bd8fc-c4dgx 2/2 Running 0 3h34m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-tkg vmware-system-tkg-webhook-6fbb5bd8fc-fj6kh 2/2 Running 0 3h34m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-tkg vmware-system-tkg-webhook-6fbb5bd8fc-wn9cl 2/2 Running 0 3h34m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-ucs upgrade-compatibility-service-7b67b79876-c9bm7 1/1 Running 0 3h47m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-ucs upgrade-compatibility-service-7b67b79876-lsqsj 1/1 Running 0 3h47m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-ucs upgrade-compatibility-service-7b67b79876-psrsg 1/1 Running 0 3h47m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-vmop vmware-system-vmop-controller-manager-75dc46d468-2r4kv 2/2 Running 0 3h46m 10.244.0.3 4238f70acc3eceb87249c7901d7ce833
vmware-system-vmop vmware-system-vmop-controller-manager-75dc46d468-7n9m4 2/2 Running 4 (3h37m ago) 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
vmware-system-vmop vmware-system-vmop-controller-manager-75dc46d468-q9qkn 2/2 Running 0 3h46m 10.244.0.4 4238859ae37a22f57e1eed15e07f2d08
vmware-system-vmop vmware-system-vmop-hostvalidator-7b7ccfc697-ws2nj 1/1 Running 0 3h46m 10.244.0.2 42386121dd9e706b302805ade87d6ebb
If you poke around long enough, you will start to see things that don’t work in the same fashion as a standard Kubernetes cluster:
kubectl get secret -A
Error from server (Forbidden): secrets is forbidden: User "sso:vmwadmin@corp.vmw" cannot list resource "secrets" in API group "" at the cluster scope
This is expected behavior and is a guardrail to help prevent an over-eager administrator from potentially damaging the supervisor cluster’s functionality.
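You can see the same guardrail with kubectl auth can-i, which reports what the logged-in user is permitted to do. The first command below should come back no, while listing pods (which we already did above) comes back yes:
kubectl auth can-i list secrets --all-namespaces
kubectl auth can-i list pods --all-namespaces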
I’ll have a few more posts coming soon that go into using vSphere 8 with Tanzu.
How are the workload network settings identified to NSX-T?
How are the workload network settings identified in NSX-T?
I mean, how are the namespace, ingress, and egress CIDRs identified in NSX-T?
Namespace creation results in a new T1 deployed in NSX. You can see this in a screenshot in this post under the “Monitor the WCP Installation” section (look for “Tier-1 Gateway:”)
The ingress and egress CIDRs are identified as IP Address Pools. You can see this in a screenshot in this post under the “Monitor the WCP Installation” section (look for “IP Address Pools:”).
Could you please share your VyOS configuration? I’m trying with pfSense (VLAN 140), Static routing 10.40.14.0/24 -> next hop set to 192.168.140.1. btw, i’m using Nested ESXi 8.
Thank you in advance.
Hey Alex. I’ve put a copy of my VyOS config up at https://little-stuff.com/vyos.bkp.2023-05-04. I’m using nested ESXi 8 as well so our environments are likely fairly similar.
I am in trouble in deployment of contour and Harbor image registry, do you have any idea how to fix it ?
When I install contour its getting error-
Reason: ReconcileFailed. Message: I0816 11:42:10.151376 9396 request.go:601] Waited for 1.042636998s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/login.concierge.pinniped.dev/v1alpha1 kapp: Error: waiting on reconcile deployment/contour (apps/v1) namespace: svc-contour-domain-c11: Finished unsuccessfully (Deployment is not progressing: ProgressDeadlineExceeded (message: ReplicaSet “contour-7f96d8c4dc” has timed out progressing.)).
Unfortunately, the error is pretty vague. I would start by looking at the events in the namespace where you’re configuring Contour to see if there are any more details. You could also try doing a describe on the Contour packageinstalls object.
I have another post all around installing packages, https://little-stuff.com/2023/05/18/installing-packages-to-a-tkgs-cluster-in-vsphere-8-with-tanzu/, that may help. And there is also the official docs for installing the Contour package at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/2.3/using-tkg/workload-packages-contour.html.