Following up on my previous post, vSphere 8 with Tanzu and NSX Installation, you’ll find that one of the first things you can do with Workload Management deployed is to create and use Namespaces in the Supervisor cluster.
Note: Please refer to the My Environment section in my previous post, vSphere 8 with Tanzu and NSX Installation, for the details of my configuration and hints on how you might need to alter some of the commands I’m running.
Once your supervisor cluster is up and running, you should see the following on the Namespaces tab:

A supervisor cluster namespace is similar to a standard Kubernetes namespace, but not exactly the same. You can run workloads in it directly, or you can create entire Tanzu Kubernetes Clusters in it for running your workloads. Some of the services you can run in the supervisor cluster (Harbor and Velero specifically) will create and use their own namespaces.
You might try to create a namespace via kubectl create ns, but this will fail because a supervisor cluster namespace carries additional configuration beyond a standard Kubernetes namespace that can only be set up in the vSphere Client.
kubectl create ns cjl
Error from server (Forbidden): namespaces is forbidden: User "sso:vmwadmin@corp.vmw" cannot create resource "namespaces" in API group "" at the cluster scope
From the Namespaces tab on the Workload Management page, you can click the Create Namespace button to create your first supervisor cluster namespace.

You must select a supervisor cluster (svc1 in this example) and supply a name for the namespace.

You can see that once you have those two items specified, a new option is available: Override Supervisor network settings. If you click this, you will be able to configure this namespace with slightly (or wildly) different network settings than were configured during Workload Management deployment.

If you don’t need different networking, leave the Override Supervisor network settings box unchecked. Click the Create button when finished.

You should also see this namespace via the kubectl get namespace command.
kubectl get ns
NAME STATUS AGE
default Active 4h46m
kube-node-lease Active 4h47m
kube-public Active 4h47m
kube-system Active 4h47m
svc-tmc-c1006 Active 4h40m
tanzu-cli-system Active 4h31m
test Active 5m14s
vmware-system-appplatform-operator-system Active 4h46m
vmware-system-capw Active 4h41m
vmware-system-cert-manager Active 4h44m
vmware-system-csi Active 4h46m
vmware-system-kubeimage Active 4h44m
vmware-system-license-operator Active 4h40m
vmware-system-logging Active 4h45m
vmware-system-nsop Active 4h40m
vmware-system-nsx Active 4h46m
vmware-system-pinniped Active 4h28m
vmware-system-pkgs Active 4h31m
vmware-system-registry Active 4h44m
vmware-system-supervisor-services Active 4h45m
vmware-system-tkg Active 4h31m
vmware-system-ucs Active 4h41m
vmware-system-vmop Active 4h40m
By default, this namespace has no storage assigned and no users have any privileges in it.

You can configure permissions by clicking the Add Permissions button on the Permissions pane or clicking the Permissions tab at the top.

I always configure administrator@vsphere.local as an owner.

You can configure SSO and AD users/groups (if joined to AD) with permissions on the namespace.
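Once a user has been granted a role here, a quick sanity check from the command line (after logging in as that user) is kubectl auth can-i; the verbs and resources below are just examples:

kubectl auth can-i create deployments -n test
kubectl auth can-i delete persistentvolumeclaims -n test

Each command simply answers yes or no based on the permissions assigned above.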

You can assign storage by clicking the Add Storage button on the Storage pane or by clicking on the Storage tab at the top.

In this example, k8s-policy is the only viable choice…select it and click the OK button.
You can also configure limits on the compute and storage resources available to the namespace (I usually don’t but I’m also not running much).
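Behind the scenes, assigning a storage policy surfaces it to the namespace as a Kubernetes StorageClass (and, if you set limits, as resource quotas). Once logged in with kubectl, you should be able to confirm the policy is available:

kubectl get storageclass

In my environment this lists k8s-policy, which is the name you'll see referenced in the PVC manifest later in this post.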

You can now log in again and see that a new context is available.
kubectl vsphere login --server wcp.corp.vmw -u vmwadmin@corp.vmw
Logged in successfully.
You have access to the following contexts:
test
wcp.corp.vmw
If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.
To change context, use `kubectl config use-context <workload name>`
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
test 10.40.14.66 wcp:10.40.14.66:vmwadmin@corp.vmw test
* wcp.corp.vmw wcp.corp.vmw wcp:wcp.corp.vmw:vmwadmin@corp.vmw
kubectl config use-context test
If you want to deploy Tanzu Kubernetes Clusters to this namespace, you will also need to assign one or more VM classes and a content library. You can see on the VM Service pane that both are at zero by default.
The content library was created automatically during Workload Management deployment. You can click the Add Content Library button to associate it with the namespace.

VM Classes are a simple description of how a Kubernetes node in a Tanzu Kubernetes Cluster will be sized from a CPU and memory perspective. You can associate one or more VM Classes with a namespace by clicking the Add VM Class button.

I usually select all classes to allow for the most flexibility when creating Tanzu Kubernetes Clusters. If you click the Manage VM Classes link, it will take you to the Services, VM Classes page where you can add, edit, or delete VM Classes.


With VM Classes and a Content Library associated with the namespace, the VM Service pane should look similar to the following:

Before associating one or more VM Classes with the namespace, the kubectl get virtualmachineclassbindings command would have returned no results.
kubectl get virtualmachineclassbindings
No resources found in test namespace.
Afterwards, all of the associated VM Classes are visible.
kubectl get virtualmachineclassbindings
NAME AGE
best-effort-2xlarge 3m57s
best-effort-4xlarge 3m57s
best-effort-8xlarge 3m56s
best-effort-large 3m58s
best-effort-medium 3m56s
best-effort-small 3m57s
best-effort-xlarge 3m57s
best-effort-xsmall 3m57s
guaranteed-2xlarge 3m57s
guaranteed-4xlarge 3m56s
guaranteed-8xlarge 3m57s
guaranteed-large 3m56s
guaranteed-medium 3m56s
guaranteed-small 3m57s
guaranteed-xlarge 3m57s
guaranteed-xsmall 3m57s
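These bindings are namespaced objects that represent the association you just made. The VM Classes themselves are defined at the Supervisor level, and (permissions permitting) you should be able to list them all, associated or not, with the cluster-scoped equivalent:

kubectl get virtualmachineclasses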
Additionally, you can see the contents of the associated Content Library via the kubectl get tanzukubernetesreleases command.
kubectl get tkr
NAME VERSION READY COMPATIBLE CREATED
v1.16.12---vmware.1-tkg.1.da7afe7 v1.16.12+vmware.1-tkg.1.da7afe7 False False 4h40m
v1.16.14---vmware.1-tkg.1.ada4837 v1.16.14+vmware.1-tkg.1.ada4837 False False 4h40m
v1.16.8---vmware.1-tkg.3.60d2ffd v1.16.8+vmware.1-tkg.3.60d2ffd False False 4h40m
v1.17.11---vmware.1-tkg.1.15f1e18 v1.17.11+vmware.1-tkg.1.15f1e18 False False 4h41m
v1.17.11---vmware.1-tkg.2.ad3d374 v1.17.11+vmware.1-tkg.2.ad3d374 False False 4h40m
v1.17.13---vmware.1-tkg.2.2c133ed v1.17.13+vmware.1-tkg.2.2c133ed False False 4h40m
v1.17.17---vmware.1-tkg.1.d44d45a v1.17.17+vmware.1-tkg.1.d44d45a False False 4h39m
v1.17.7---vmware.1-tkg.1.154236c v1.17.7+vmware.1-tkg.1.154236c False False 4h41m
v1.17.8---vmware.1-tkg.1.5417466 v1.17.8+vmware.1-tkg.1.5417466 False False 4h40m
v1.18.10---vmware.1-tkg.1.3a6cd48 v1.18.10+vmware.1-tkg.1.3a6cd48 False False 4h41m
v1.18.15---vmware.1-tkg.1.600e412 v1.18.15+vmware.1-tkg.1.600e412 False False 4h41m
v1.18.15---vmware.1-tkg.2.ebf6117 v1.18.15+vmware.1-tkg.2.ebf6117 False False 4h40m
v1.18.19---vmware.1-tkg.1.17af790 v1.18.19+vmware.1-tkg.1.17af790 False False 4h40m
v1.18.5---vmware.1-tkg.1.c40d30d v1.18.5+vmware.1-tkg.1.c40d30d False False 4h41m
v1.19.11---vmware.1-tkg.1.9d9b236 v1.19.11+vmware.1-tkg.1.9d9b236 False False 4h40m
v1.19.14---vmware.1-tkg.1.8753786 v1.19.14+vmware.1-tkg.1.8753786 False False 4h40m
v1.19.16---vmware.1-tkg.1.df910e2 v1.19.16+vmware.1-tkg.1.df910e2 False False 4h41m
v1.19.7---vmware.1-tkg.1.fc82c41 v1.19.7+vmware.1-tkg.1.fc82c41 False False 4h39m
v1.19.7---vmware.1-tkg.2.f52f85a v1.19.7+vmware.1-tkg.2.f52f85a False False 4h39m
v1.20.12---vmware.1-tkg.1.b9a42f3 v1.20.12+vmware.1-tkg.1.b9a42f3 False False 4h40m
v1.20.2---vmware.1-tkg.1.1d4f79a v1.20.2+vmware.1-tkg.1.1d4f79a False False 4h41m
v1.20.2---vmware.1-tkg.2.3e10706 v1.20.2+vmware.1-tkg.2.3e10706 False False 4h40m
v1.20.7---vmware.1-tkg.1.7fb9067 v1.20.7+vmware.1-tkg.1.7fb9067 False False 4h41m
v1.20.8---vmware.1-tkg.2 v1.20.8+vmware.1-tkg.2 False False 4h41m
v1.20.9---vmware.1-tkg.1.a4cee5b v1.20.9+vmware.1-tkg.1.a4cee5b False False 4h38m
v1.21.2---vmware.1-tkg.1.ee25d55 v1.21.2+vmware.1-tkg.1.ee25d55 True True 4h40m
v1.21.6---vmware.1-tkg.1 v1.21.6+vmware.1-tkg.1 True True 4h41m
v1.21.6---vmware.1-tkg.1.b3d708a v1.21.6+vmware.1-tkg.1.b3d708a True True 4h41m
v1.22.9---vmware.1-tkg.1.cc71bc8 v1.22.9+vmware.1-tkg.1.cc71bc8 True True 4h40m
v1.23.8---vmware.2-tkg.2-zshippable v1.23.8+vmware.2-tkg.2-zshippable True True 4h40m
v1.23.8---vmware.3-tkg.1 v1.23.8+vmware.3-tkg.1 True True 4h39m
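With one or more VM Classes bound, a storage class assigned, and compatible Tanzu Kubernetes releases visible, the namespace has everything needed for a Tanzu Kubernetes Cluster. I'm not deploying one in this post, but for reference, a minimal manifest would look something like the following sketch (assuming the run.tanzu.vmware.com/v1alpha2 API; the cluster name tkc-1 is arbitrary, and the class, storage policy, and release are simply the ones from my environment):

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkc-1                              # arbitrary cluster name
  namespace: test                          # the supervisor namespace created above
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small           # one of the bound VM Classes
      storageClass: k8s-policy             # the storage policy assigned to the namespace
      tkr:
        reference:
          name: v1.23.8---vmware.3-tkg.1   # a compatible release from kubectl get tkr
    nodePools:
      - name: workers
        replicas: 2
        vmClass: best-effort-small
        storageClass: k8s-policy
        tkr:
          reference:
            name: v1.23.8---vmware.3-tkg.1

Applying a manifest like this with kubectl apply -f while in the test context would build out the cluster under this namespace.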
You can run workloads directly in a supervisor cluster namespace (though it might be preferable to run them in a Tanzu Kubernetes Cluster). I have a slightly modified with-pv.yaml (nginx) that is part of the Velero example deployments (I had to remove any reference to namespaces since they cannot be created here).
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-logs
  labels:
    app: nginx
spec:
  # Optional:
  storageClassName: k8s-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        pre.hook.backup.velero.io/container: fsfreeze
        pre.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]'
        post.hook.backup.velero.io/container: fsfreeze
        post.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]'
    spec:
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-logs
      containers:
        - image: nginx:1.17.6
          name: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/var/log/nginx"
              name: nginx-logs
              readOnly: false
        - image: ubuntu:bionic
          name: fsfreeze
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: "/var/log/nginx"
              name: nginx-logs
              readOnly: false
          command:
            - "/bin/bash"
            - "-c"
            - "sleep infinity"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: my-nginx
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
An inspection of this shows that a pod with a persistent volume will be created, along with a service of type LoadBalancer. This is great as it will demonstrate that:
- the vSphere CSI (Container Storage Interface) driver is working and communicating with vSphere to provision storage
- NSX is able to configure an additional load balancer
As soon as kubectl apply -f with-pv.yaml is run, you'll see loads of activity in the vSphere Client.

You can also see the expected resources created in the namespace.
kubectl get po,pv,pvc,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-555bf4ff65-sfvlh 2/2 Running 0 2m40s 10.244.0.34 esx-03a.corp.vmw <none> <none>
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-9cc97357-a213-4c4f-b731-2a1f86542db0 50Mi RWO Delete Bound test/nginx-logs k8s-policy 2m39s Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/nginx-logs Bound pvc-9cc97357-a213-4c4f-b731-2a1f86542db0 50Mi RWO k8s-policy 2m40s Filesystem
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/my-nginx LoadBalancer 10.96.0.239 10.40.14.69 80:30528/TCP 2m40s app=nginx
The external IP address for the service is 10.40.14.69, in the ingress range specified earlier. If you navigate to this IP address in a browser, you’ll get the standard nginx welcome page.
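If you'd rather check from the command line, a quick curl against the external IP should return the same page (the IP is specific to my environment, and the grep is just to trim the output):

curl -s http://10.40.14.69 | grep title
<title>Welcome to nginx!</title>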

In NSX, you can see that a new load balancer has been created.

Note that the name ends in “test”, the name of our namespace. If you examine the virtual server associated with this load balancer, you will see the same IP address noted via the kubectl get svc command.

Drilling down even further into the server pool associated with the virtual server, you’ll see the IP address associated with the pod (and note that its name includes “my-nginx”, the name of the service).
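You can cross-check this from kubectl as well; the endpoints behind the my-nginx service should list the same pod IP that NSX shows in the server pool (10.244.0.34:80 in my case):

kubectl get endpoints my-nginx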

From a storage perspective, you can see the pv/pvc as vSphere objects by navigating to Inventory, Datastores, Monitor, Cloud Native Storage, Container Volumes and observing an object with a Volume Name matching the pvc name recorded earlier.

If you click on the Details card for this object, you’ll see the pv information as well.
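If you want to tie this back to the Kubernetes side, the CSI volume handle recorded on the persistent volume should match the volume ID shown in these details (the PV name below is the one from my environment):

kubectl get pv pvc-9cc97357-a213-4c4f-b731-2a1f86542db0 -o jsonpath='{.spec.csi.volumeHandle}'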

Lastly, you can see that the nginx pod is actually a VM, created under a “test” namespace object in the Namespaces resource pool.

In my next post, I’ll go over enabling the Image Registry service (Harbor) in the Supervisor cluster, which will also make use of its own Supervisor namespace.