How to Deploy an OIDC-Enabled Cluster on vSphere in TKG 1.2

In one of my earlier blog posts, How to Configure Dex and Gangway for Active Directory Authentication in TKG, I walked through the process of creating an OIDC-enabled TKG cluster in the 1.1 version. With the 1.2 version, things are a little bit different. The process has some similarities to the Dex/Gangway configuration that was used previously but we now have the TMC Extension Manager in the mix. You can read more about the new extension framework in my previous post, Working with TKG Extensions in TKG 1.2…it should hopefully provide a good foundation for the work that will be completed in this article.

TKG has included extensions in a manifest bundle since the 1.0 version, but with the 1.2 version they are much more tightly integrated and there are more of them. You’ll notice when you download the TKG CLI bundle that there are several other utilities included now. These are part of the Carvel open-source project and may be used when working with extensions.

  • ytt – YAML templating tool
  • kapp – Kubernetes applications CLI
  • kbld – Kubernetes builder

Detailed instructions for configuring these binaries can be found at Configuring and Managing Tanzu Kubernetes Grid Shared Services, but at a high level you’ll want to extract them all, rename each to a “short” name (remove everything after the first '-' character), make them executable if you’re running Linux, and copy them to a location in your PATH.
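
For example, on Linux the sequence looks something like this (the exact filenames vary by bundle version, so treat these as placeholders):

gunzip ytt-linux-amd64-v0.30.0+vmware.1.gz
mv ytt-linux-amd64-v0.30.0+vmware.1 ytt   # "short" name: everything after the first '-' removed
chmod +x ytt
sudo mv ytt /usr/local/bin/

Repeat the same steps for the kapp and kbld binaries.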

As before, a few important notes:

  • My AD domain is named corp.tanzu
  • My domain controller is named controlcenter (controlcenter.corp.tanzu).
  • I have a Linux VM named cli-vm where I execute most of the configuration commands.

You’ll need to export the AD CA certificate from a domain controller in Active Directory. The following steps are specific to my environment but can easily be modified.

  • Launch certsrv.msc (Start > Run > certsrv.msc).
  • Right-click CONTROLCENTER-CA and select Properties.
  • Click the View Certificate button.
  • On the Details tab, click the Copy to File button.
  • Choose Base-64 encoded X.509 as the format.
  • Set the file name to controlcenter-ca.cer.

Copy the controlcenter-ca.cer file to the /home/ubuntu folder on the cli-vm VM.

Note: The rest of the steps are carried out on the cli-vm.

Get the base64 encoded contents of the controlcenter-ca.cer file with no line breaks:

cat controlcenter-ca.cer | base64 | tr -d "\n\r"

LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZhekNDQTFPZ0F3SUJBZ0lRTWZaeTA4bXV2SVZLZFpWRHo3L3JZekFOQmdrcWhraUc5dzBCQVFzRkFEQkkKTVJVd0V3WUtDWkltaVpQeUxHUUJHUllGZEdGdWVuVXhGREFTQmdvSmtpYUprL0lzWkFFWkZnUmpiM0p3TVJrdwpGd1lEVlFRREV4QkRUMDVVVWs5TVEwVk9WRVZTTFVOQk1CNFhEVEl3TURneE9URTNNakEwTkZvWERUTXdNRGd4Ck9URTNNekF6TlZvd1NERVZNQk1HQ2dtU0pvbVQ4aXhrQVJrV0JYUmhibnAxTVJRd0VnWUtDWkltaVpQeUxHUUIKR1JZRVkyOXljREVaTUJjR0ExVUVBeE1RUTA5T1ZGSlBURU5GVGxSRlVpMURRVENDQWlJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnSVBBRENDQWdvQ2dnSUJBTEtJZFg3NjQzUHp2dFZYbHFOSXdEdU5xK3JoY0hGMGZqUjQxNGorCjFJR1FVdVhyeWtqaFNEdGhQUCs4QkdON21CZ0hUOEFqQVMxYjk1eGM4QjBTMkZobG4zQW9SRTl6MDNHdGZzQnUKRlNCUlVWd0FpZlg2b1h1OTdXemZmaHFQdHhaZkxKWGJoT29tamxrWDZpZmZBczJUT0xVeDJPajR3MnZ5Ymh6agpsY0E3MGFpKzBTbDZheFNvM2xNWjRLa3VaMldnZkVjYURqamozMy9wVjMvYm5GSys3eWRQdHRjMlRlazV4c0k4ClhOTWlySVZ4VWlVVDRZTHk0V0xpUzIwMEpVZmJwMVpuTXZuYlE4SnYxUW5abDlXN1dtQlBjZ3hSNEFBdWIwSzQKdlpMWHU2TVhpYm9UbHprTUIvWXRoQ2tUTmxKY0traEhmNjBZUi9UNlN4MVQybnVweUJhNGRlbzVVR1B6aFJpSgpwTjM3dXFxQWRLMXFNRHBDakFSalM2VTdMZjlKS2pmaXJpTHpMZXlBalA4a2FONFRkSFNaZDBwY1FvWlN4ZXhRCjluKzRFNE1RbTRFSjREclZaQ2lsc3lMMkJkRVRjSFhLUGM3cStEYjRYTTdqUEtORzVHUDFFTVY0WG9odjU4eVoKL3JSZm1LNjRnYXI4QU1uT0tUMkFQNjgxcWRaczdsbGpPTmNYVUFMemxYNVRxSWNoWVQwRFZRbUZMWW9NQmVaegowbDIxUWpiSzBZV25QemE2WWkvTjRtNnJGYkVCNFdYaXFoWVNreHpyTXZvY1ZVZ2Q0QUFQMXZmSE5uRkVzblVSCm5Tc2lnbEZIL3hseU8zY0JGcm1vWkF4YkEyMDkxWEhXaEI0YzBtUUVJM2hPcUFCOFVvRkdCclFwbVErTGVzb0MKMUxaOUFnTUJBQUdqVVRCUE1Bc0dBMVVkRHdRRUF3SUJoakFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZApEZ1FXQkJURkF4U3ZZNjRRNWFkaG04SVllY0hCQVV1b2J6QVFCZ2tyQmdFRUFZSTNGUUVFQXdJQkFEQU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBZ0VBamcvdjRtSVA3Z0JWQ3c0cGVtdEduM1BTdERoL2FCOXZiV3lqQXl4U05hYUgKSDBuSUQ1cTV3b3c5dWVCaURmalRQbmhiZjNQNzY4SEc4b0wvKzlDK1ZtLzBsaUZCZCswL0RhYXlLcEFORk1MQgpCVitzMmFkV1JoUXVjTFFmWFB3dW04UnliV3Y4MndrUmtXQ0NkT0JhQXZBTXVUZ2swOFN3Skl5UWZWZ3BrM25ZCjBPd2pGd1NBYWR2ZXZmK0xvRC85TDhSOU5FdC9uNFdKZStMdEVhbW85RVZiK2wrY1lxeXh5dWJBVlkwWTZCTTIKR1hxQWgzRkVXMmFRTXB3b3VoLzVTN3c1b1NNWU42bWlZMW9qa2k4Z1BtMCs0K0NJTFBXaC9mcjJxME8vYlB0YgpUcisrblBNbVo4b3Y5ZXBOR0l1cWh0azVqYTIvSnVZK1JXNDZJUmM4UXBGMUV5VWFlMDJFNlUyVmFjczdHZ2UyCkNlU0lOa29MRkZtaUtCZkluL0hBY2hsbWU5YUw2RGxKOXdBcmVCREgzRThrSDdnUkRXYlNLMi9RRDBIcWFjK0UKZ2VHSHdwZy84T3RCT0hVTW5NN2VMT1hCSkZjSm9zV2YwWG5FZ1M0dWJnYUhncURFdThwOFBFN3JwQ3h0VU51cgp0K3gyeE9OSS9yQldnZGJwNTFsUHI3bzgxOXpQSkN2WVpxMVBwMXN0OGZiM1JsVVNXdmJRTVBGdEdBeWFCeStHCjBSZ1o5V1B0eUVZZ25IQWI1L0RxNDZzbmU5L1FuUHd3R3BqdjFzMW9FM1pGUWpodm5HaXM4K2RxUnhrM1laQWsKeWlEZ2hXN2FudHpZTDlTMUNDOHNWZ1ZPd0ZKd2ZGWHBkaWlyMzVtUWx5U0czMDFWNEZzUlYrWjBjRnA0TmkwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
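
If you want to confirm that the encoded string round-trips cleanly, you can decode it and inspect the certificate (assuming openssl is available on the cli-vm):

base64 controlcenter-ca.cer | tr -d "\n\r" | base64 -d | openssl x509 -noout -subject -dates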

If you have a management cluster created, you’re ready to proceed with getting some of the prerequisites out of the way. We have to deploy the TMC extension manager and the kapp-controller first; the yaml files for both are already included in the extensions bundle. Make sure your kubectl context is set to the management cluster before you get started.

kubectl config use-context tkg-mgmt-admin@tkg-mgmt

Switched to context "tkg-mgmt-admin@tkg-mgmt".
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/tmc-extension-manager.yaml

namespace/vmware-system-tmc created
customresourcedefinition.apiextensions.k8s.io/agents.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensions.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionresourceowners.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionintegrations.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionconfigs.intents.tmc.cloud.vmware.com created
serviceaccount/extension-manager created
clusterrole.rbac.authorization.k8s.io/extension-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/extension-manager-rolebinding created
service/extension-manager-service created
deployment.apps/extension-manager created
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/kapp-controller.yaml

serviceaccount/kapp-controller-sa created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/apps.kappctrl.k14s.io created
deployment.apps/kapp-controller created
clusterrole.rbac.authorization.k8s.io/kapp-controller-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/kapp-controller-cluster-role-binding created

We can validate that extension-manager and kapp-controller are up and ready:

kubectl -n vmware-system-tmc get deployment

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
extension-manager   1/1     1            1           9m25s
kapp-controller     1/1     1            1           3m25s
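
If the deployments are still coming up, you can optionally block until they are ready:

kubectl -n vmware-system-tmc rollout status deployment/extension-manager --timeout=120s
kubectl -n vmware-system-tmc rollout status deployment/kapp-controller --timeout=120s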

There is a yaml file that contains the definition for the tanzu-system-auth namespace, as well as some RBAC objects, that will need to be applied before we progress.

kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/authentication/dex/namespace-role.yaml

namespace/tanzu-system-auth created
serviceaccount/dex-extension-sa created
role.rbac.authorization.k8s.io/dex-extension-role created
rolebinding.rbac.authorization.k8s.io/dex-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/dex-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/dex-extension-cluster-rolebinding created

As with Contour and Harbor previously, there is a yaml file that we’ll use to provide the Dex configuration information. We’ll make a copy of the sample file and then make the changes needed for our environment.

cp tkg-extensions-v1.2.0+vmware.1/extensions/authentication/dex/vsphere/ldap/dex-data-values.yaml.example dex-data-values.yaml

If you look at this file, you can see that there is a framework for what you’ll need to configure:

#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
dex:
  config:
    connector: ldap
    ldap:
      host: <LDAP_HOST>
      userSearch:
        baseDN: 'ou=people,dc=vmware,dc=com'
        filter: '(objectClass=posixAccount)'
        username: uid
        idAttr: uid
        emailAttr: mail
        nameAttr: 'givenName'
      groupSearch:
        baseDN: 'ou=group,dc=vmware,dc=com'
        filter: '(objectClass=posixGroup)'
        userAttr: uid
        groupAttr: memberUid
        nameAttr: cn
    #! Deploy dex first with dummy staticClients. Once gangway is installed in workload cluster, update static clients with gangway information
    #@overlay/replace
    staticClients:
    - id: WORKLOAD_CLUSTER_NAME
      redirectURIs:
      - 'https://WORKLOAD_CLUSTER_IP:30166/callback'
      name: WORKLOAD_CLUSTER_NAME
      secret: CLIENT_SECRET
dns:
  vsphere:
    #@overlay/replace
    ipAddresses: [<MANAGEMENT_CLUSTER_VIP>]

At a minimum, for any deployment you’ll need to change <LDAP_HOST> to your LDAP server (an Active Directory server in this case) and <MANAGEMENT_CLUSTER_VIP> to the static VIP address assigned to your management cluster (192.168.100.90 in this case). The placeholders in the staticClients: section can be left alone until Gangway is deployed.
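
If you’d rather script the substitutions than hand-edit the file, a quick sed pass works (the values shown are from my environment; substitute your own):

sed -i \
  -e 's/<LDAP_HOST>/controlcenter.corp.tanzu/' \
  -e 's/<MANAGEMENT_CLUSTER_VIP>/192.168.100.90/' \
  dex-data-values.yaml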

The following is what my dex-data-values.yaml file looks like when completed:

#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
dex:
  config:
    connector: ldap
    ldap:
      host: controlcenter.corp.tanzu
      bindDN: 'cn=Administrator,cn=Users,dc=corp,dc=tanzu'
      bindPW: VMware1!
      rootCAData: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZhekNDQTFPZ0F3SUJBZ0lRTWZaeTA4bXV2SVZLZFpWRHo3L3JZekFOQmdrcWhraUc5dzBCQVFzRkFEQkkKTVJVd0V3WUtDWkltaVpQeUxHUUJHUllGZEdGdWVuVXhGREFTQmdvSmtpYUprL0lzWkFFWkZnUmpiM0p3TVJrdwpGd1lEVlFRREV4QkRUMDVVVWs5TVEwVk9WRVZTTFVOQk1CNFhEVEl3TURneE9URTNNakEwTkZvWERUTXdNRGd4Ck9URTNNekF6TlZvd1NERVZNQk1HQ2dtU0pvbVQ4aXhrQVJrV0JYUmhibnAxTVJRd0VnWUtDWkltaVpQeUxHUUIKR1JZRVkyOXljREVaTUJjR0ExVUVBeE1RUTA5T1ZGSlBURU5GVGxSRlVpMURRVENDQWlJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnSVBBRENDQWdvQ2dnSUJBTEtJZFg3NjQzUHp2dFZYbHFOSXdEdU5xK3JoY0hGMGZqUjQxNGorCjFJR1FVdVhyeWtqaFNEdGhQUCs4QkdON21CZ0hUOEFqQVMxYjk1eGM4QjBTMkZobG4zQW9SRTl6MDNHdGZzQnUKRlNCUlVWd0FpZlg2b1h1OTdXemZmaHFQdHhaZkxKWGJoT29tamxrWDZpZmZBczJUT0xVeDJPajR3MnZ5Ymh6agpsY0E3MGFpKzBTbDZheFNvM2xNWjRLa3VaMldnZkVjYURqamozMy9wVjMvYm5GSys3eWRQdHRjMlRlazV4c0k4ClhOTWlySVZ4VWlVVDRZTHk0V0xpUzIwMEpVZmJwMVpuTXZuYlE4SnYxUW5abDlXN1dtQlBjZ3hSNEFBdWIwSzQKdlpMWHU2TVhpYm9UbHprTUIvWXRoQ2tUTmxKY0traEhmNjBZUi9UNlN4MVQybnVweUJhNGRlbzVVR1B6aFJpSgpwTjM3dXFxQWRLMXFNRHBDakFSalM2VTdMZjlKS2pmaXJpTHpMZXlBalA4a2FONFRkSFNaZDBwY1FvWlN4ZXhRCjluKzRFNE1RbTRFSjREclZaQ2lsc3lMMkJkRVRjSFhLUGM3cStEYjRYTTdqUEtORzVHUDFFTVY0WG9odjU4eVoKL3JSZm1LNjRnYXI4QU1uT0tUMkFQNjgxcWRaczdsbGpPTmNYVUFMemxYNVRxSWNoWVQwRFZRbUZMWW9NQmVaegowbDIxUWpiSzBZV25QemE2WWkvTjRtNnJGYkVCNFdYaXFoWVNreHpyTXZvY1ZVZ2Q0QUFQMXZmSE5uRkVzblVSCm5Tc2lnbEZIL3hseU8zY0JGcm1vWkF4YkEyMDkxWEhXaEI0YzBtUUVJM2hPcUFCOFVvRkdCclFwbVErTGVzb0MKMUxaOUFnTUJBQUdqVVRCUE1Bc0dBMVVkRHdRRUF3SUJoakFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZApEZ1FXQkJURkF4U3ZZNjRRNWFkaG04SVllY0hCQVV1b2J6QVFCZ2tyQmdFRUFZSTNGUUVFQXdJQkFEQU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBZ0VBamcvdjRtSVA3Z0JWQ3c0cGVtdEduM1BTdERoL2FCOXZiV3lqQXl4U05hYUgKSDBuSUQ1cTV3b3c5dWVCaURmalRQbmhiZjNQNzY4SEc4b0wvKzlDK1ZtLzBsaUZCZCswL0RhYXlLcEFORk1MQgpCVitzMmFkV1JoUXVjTFFmWFB3dW04UnliV3Y4MndrUmtXQ0NkT0JhQXZBTXVUZ2swOFN3Skl5UWZWZ3BrM25ZCjBPd2pGd1NBYWR2ZXZmK0xvRC85TDhSOU5FdC9uNFdKZStMdEVhbW85RVZiK2wrY1lxeXh5dWJBVlkwWTZCTTIKR1hxQWgzRkVXMmFRTXB3b3VoLzVTN3c1b1NNWU42bWlZMW9qa2k4Z1BtMCs0K0NJTFBXaC9mcjJxME8vYlB0YgpUcisrblBNbVo4b3Y5ZXBOR0l1cWh0azVqYTIvSnVZK1JXNDZJUmM4UXBGMUV5VWFlMDJFNlUyVmFjczdHZ2UyCkNlU0lOa29MRkZtaUtCZkluL0hBY2hsbWU5YUw2RGxKOXdBcmVCREgzRThrSDdnUkRXYlNLMi9RRDBIcWFjK0UKZ2VHSHdwZy84T3RCT0hVTW5NN2VMT1hCSkZjSm9zV2YwWG5FZ1M0dWJnYUhncURFdThwOFBFN3JwQ3h0VU51cgp0K3gyeE9OSS9yQldnZGJwNTFsUHI3bzgxOXpQSkN2WVpxMVBwMXN0OGZiM1JsVVNXdmJRTVBGdEdBeWFCeStHCjBSZ1o5V1B0eUVZZ25IQWI1L0RxNDZzbmU5L1FuUHd3R3BqdjFzMW9FM1pGUWpodm5HaXM4K2RxUnhrM1laQWsKeWlEZ2hXN2FudHpZTDlTMUNDOHNWZ1ZPd0ZKd2ZGWHBkaWlyMzVtUWx5U0czMDFWNEZzUlYrWjBjRnA0TmkwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
      userSearch:
        baseDN: 'cn=Users,dc=corp,dc=tanzu'
        filter: '(objectClass=person)'
        username: userPrincipalName
        idAttr: DN
        emailAttr: userPrincipalName
        nameAttr: 'cn'
      groupSearch:
        baseDN: 'dc=corp,dc=tanzu'
        filter: '(objectClass=group)'
        userAttr: DN
        groupAttr: 'member:1.2.840.113556.1.4.1941:'
        nameAttr: cn
    #! Deploy dex first with dummy staticClients. Once gangway is installed in workload cluster, update static clients with gangway information
    #@overlay/replace
    staticClients:
    - id: WORKLOAD_CLUSTER_NAME
      redirectURIs:
      - 'https://WORKLOAD_CLUSTER_IP:30166/callback'
      name: WORKLOAD_CLUSTER_NAME
      secret: CLIENT_SECRET
dns:
  vsphere:
    #@overlay/replace
    ipAddresses: [192.168.100.90]

A few notes:

  • The bindDN: and bindPW: values should correspond to an Active Directory user that can query your domain.
  • The rootCAData: value should be the same as the base64 encoded Active Directory certificate value we generated earlier.
  • The baseDN: values in userSearch: and groupSearch: should reference the locations in Active Directory to search for users and groups that will be allowed to authenticate to the cluster (you can spot-check these with the ldapsearch sketch after this list).
  • All other fields under userSearch: and groupSearch: should match the example above.
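
Before creating the secret, you can sanity-check the bind credentials and search bases with ldapsearch (assuming the OpenLDAP client tools are installed on the cli-vm; this is just a spot check, not a required step):

LDAPTLS_CACERT=controlcenter-ca.cer ldapsearch -H ldaps://controlcenter.corp.tanzu \
  -D 'cn=Administrator,cn=Users,dc=corp,dc=tanzu' -w 'VMware1!' \
  -b 'cn=Users,dc=corp,dc=tanzu' '(objectClass=person)' userPrincipalName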

Once you have your dex-data-values.yaml file configured to your liking, you’ll need to create a secret out of it.

kubectl -n tanzu-system-auth create secret generic dex-data-values --from-file=values.yaml=dex-data-values.yaml

secret/dex-data-values created

You can run the following command to validate that the secret is created and contains the same information that is present in the dex-data-values.yaml file:

kubectl get secret dex-data-values -n tanzu-system-auth -o 'go-template={{ index .data "values.yaml" }}' | base64 -d
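
A quick diff against the local file should come back empty (uses bash process substitution):

diff <(kubectl get secret dex-data-values -n tanzu-system-auth -o 'go-template={{ index .data "values.yaml" }}' | base64 -d) dex-data-values.yaml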

If the secret checks out, you can proceed with creating the dex extension and app.

kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/authentication/dex/dex-extension.yaml

extension.clusters.tmc.cloud.vmware.com/dex configured

You can check the status of the extension and app to make sure everything is getting deployed as expected.

kubectl -n tanzu-system-auth get extension dex

NAME   STATE   HEALTH   VERSION
dex    3
kubectl -n tanzu-system-auth get app dex -o yaml

  inspect:
    exitCode: 0
    stdout: |-
      Target cluster 'https://100.64.0.1:443'
      Resources in app 'dex-ctrl'
      Namespace          Name                      Kind                Owner    Conds.  Rs  Ri  Age
      (cluster)          dex                       ClusterRole         kapp     -       ok  -   8s
      ^                  dex                       ClusterRoleBinding  kapp     -       ok  -   8s
      ^                  tanzu-system-auth         Namespace           kapp     -       ok  -   1m
      tanzu-system-auth  dex                       Deployment          kapp     2/2 t   ok  -   8s
      ^                  dex                       ServiceAccount      kapp     -       ok  -   8s
      ^                  dex-559495dbdd            ReplicaSet          cluster  -       ok  -   8s
      ^                  dex-559495dbdd-r8n8w      Pod                 cluster  4/4 t   ok  -   7s
      ^                  dex-cert                  Certificate         kapp     1/1 t   ok  -   8s
      ^                  dex-cert-scnd2            CertificateRequest  cluster  1/1 t   ok  -   5s
      ^                  dex-selfsigned-ca-issuer  Issuer              kapp     1/1 t   ok  -   8s
      ^                  dex-ver-1                 ConfigMap           kapp     -       ok  -   8s
      ^                  dexsvc                    Endpoints           cluster  -       ok  -   8s
      ^                  dexsvc                    Service             kapp     -       ok  -   8s
      Rs: Reconcile state
      Ri: Reconcile information
      13 resources
      Succeeded
    updatedAt: "2020-10-08T00:08:44Z"

When the deployment is done, you should have a single pod running in the tanzu-system-auth namespace.

kubectl -n tanzu-system-auth get po

NAME                   READY   STATUS    RESTARTS   AGE
dex-559495dbdd-r8n8w   1/1     Running   0          36s

Be sure to grab the Dex certificate as it will be needed when configuring Gangway.

kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -d

-----BEGIN CERTIFICATE-----
MIIDGjCCAgKgAwIBAgIRAIHdE4HsK57jnTPimCmbjKUwDQYJKoZIhvcNAQELBQAw
IzEPMA0GA1UEChMGdm13YXJlMRAwDgYDVQQDEwd0a2ctZGV4MB4XDTIwMTAwODAw
MDgzOVoXDTIxMDEwNjAwMDgzOVowIzEPMA0GA1UEChMGdm13YXJlMRAwDgYDVQQD
Ewd0a2ctZGV4MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyddzNVHt
iyrEN7okr8wSEW9EYE7m3swC+njl7v+a8HvnWMPv09F0VdsVMihnQDqDOQBqaRqv
AVGmWw63NzvQCs4Ha3c43a3VwhptpN3B/hHfziELvcuYXtn6Rj1LUiRClSNtW3r+
AjPb3VFY6mSenvggVCFflsjWpUijagE6W0BfK5GqlcBGJnSaZPL+c2p6R4VXbrxu
IrsCRCTwftEheKIqmERmsU4oT/wgTzX1suSv5wYf1hTO9XJsZTwJvAFJL/oxIZfg
rtwsrwY6FvICYF3hPz2lDam7KSAGF+hIfx3tryUHXt+ZoKNTr7Fe4gzP6Zi4hhab
UQVhkyU7ls7pcwIDAQABo0kwRzAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUH
AwIwDAYDVR0TAQH/BAIwADAYBgNVHREEETAPggd0a2ctZGV4hwTAqG5lMA0GCSqG
SIb3DQEBCwUAA4IBAQBJ+W8GALy+F9qav+HD3qQC+Jms5reDuB5E0bkWVccT4nM1
p3Zt0+6vV+evEHxFLdk2wQeHavB6kXbpkht4X4x4OAx8QrRp9hLfwsr4ZWE8ph+C
b1Kz8z3DQdNtsaXQPO+nkE+is8MDzhIGthWQ2R3qieP+fbeeLjE4E6ZoSHQDuSgv
PgMafM33Zli3kYjsRJ3tfQfRHHhqObKKgpGwf6aF+Adgt9yFO/LkqyRE57vxqwuO
3PY7bvB2N4KSXvMfauKa8+O575uRV7lCn7Qb/fnRbnrOJ23vbsq9kww3TkQsiWCl
yQGP+oLYbqaymKcZw2fWwUvsYnXc03z6KutizMbV
-----END CERTIFICATE-----

You can also save this certificate to a file and then import it into your Trusted Root Certification Authorities store to prevent certificate errors later.
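
For example, to save it to a file for import (the import step itself will vary by operating system):

kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -d > dex-ca.crt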

You’re now ready to move on to creating an OIDC-enabled cluster. This is a little easier in 1.2 as there is a single switch that can be passed instead of having to use a different plan. We need to set a few environment variables first though (192.168.100.90 is the static VIP for my management cluster).

export OIDC_USERNAME_CLAIM=email
export OIDC_ISSUER_URL=https://192.168.100.90:30167
export OIDC_GROUPS_CLAIM=groups
export OIDC_DEX_CA=$(kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -d | gzip | base64)
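
Since OIDC_DEX_CA is gzipped and then re-encoded, a quick way to verify it unpacks back to the certificate (assuming gunzip and openssl are available):

echo "$OIDC_DEX_CA" | base64 -d | gunzip | openssl x509 -noout -subject -dates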

With these variables set, we can now create our workload cluster. The only real difference from a normal cluster creation is that we’re going to pass in the --enable-cluster-options oidc switch.

tkg create cluster auth-cluster --enable-cluster-options oidc -p dev --vsphere-controlplane-endpoint-ip 192.168.100.91

Logs of the command execution can also be found at: /tmp/tkg-20201007T181101820677685.log
Validating configuration...
Creating workload cluster 'auth-cluster'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
Waiting for addons installation...

Workload cluster 'auth-cluster' created

At this point you can get the credentials for the workload cluster and switch contexts to it.

tkg get credentials auth-cluster

Credentials of workload cluster 'auth-cluster' have been saved
You can now access the cluster by running 'kubectl config use-context auth-cluster-admin@auth-cluster'
kubectl config use-context auth-cluster-admin@auth-cluster

Switched to context "auth-cluster-admin@auth-cluster".

Just as we did with the management cluster, we need to install extension-manager and kapp-controller, but we also need to install cert-manager (cert-manager was installed by default on the management cluster).

kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/tmc-extension-manager.yaml

namespace/vmware-system-tmc created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/agents.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensions.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionresourceowners.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionintegrations.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionconfigs.intents.tmc.cloud.vmware.com created
serviceaccount/extension-manager created
clusterrole.rbac.authorization.k8s.io/extension-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/extension-manager-rolebinding created
service/extension-manager-service created
deployment.apps/extension-manager created
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/kapp-controller.yaml

serviceaccount/kapp-controller-sa created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/apps.kappctrl.k14s.io created
deployment.apps/kapp-controller created
clusterrole.rbac.authorization.k8s.io/kapp-controller-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/kapp-controller-cluster-role-binding created
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/cert-manager/

namespace/cert-manager created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

Just like with Dex, there is a yaml file that contains the definition for the tanzu-system-auth namespace, as well as some RBAC objects, that will need to be applied before we progress.

kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/authentication/gangway/namespace-role.yaml

namespace/tanzu-system-auth created
serviceaccount/gangway-extension-sa created
role.rbac.authorization.k8s.io/gangway-extension-role created
rolebinding.rbac.authorization.k8s.io/gangway-extension-rolebinding created

We need to create a couple of values that will be used when configuring Gangway.

CLIENT_SECRET=$(openssl rand -hex 16)
SESSION_KEY=$(openssl rand -hex 16)

echo $CLIENT_SECRET
d621e20cb102a9b6af4aa9edd6a65ec0
echo $SESSION_KEY
b65ed56e44ab748239cb76284b5b432c

Make a note of these values as they will be used shortly.

You’ve probably already guessed what’s coming next…another data-values.yaml file. This one is gangway-data-values.yaml and we’ll create it by copying the example file.

cp tkg-extensions-v1.2.0+vmware.1/extensions/authentication/gangway/vsphere/gangway-data-values.yaml.example gangway-data-values.yaml

If you look at this file, you can see that there is a framework for what you’ll need to configure:

#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
gangway:
  config:
    clusterName: <WORKLOAD_CLUSTER_NAME>
    MGMT_CLUSTER_IP: <MANAGEMENT_CLUSTER_VIP>
    clientID: <WORKLOAD_CLUSTER_NAME>
    APISERVER_URL: <WORKLOAD_CLUSTER_VIP>
  secret:
    sessionKey: <SESSION_KEY>
    clientSecret: <CLIENT_SECRET>
dns:
  vsphere:
    #@overlay/replace
    ipAddresses: [<WORKLOAD_CLUSTER_VIP>]
dex:
  ca: |
  <INSERT_DEX_CA_CERT>

All of the placeholder values will need to be set appropriately:

  • <WORKLOAD_CLUSTER_NAME> is the name of the workload cluster (auth-cluster in this case)
  • <MANAGEMENT_CLUSTER_VIP> is the static VIP assigned to the management cluster (192.168.100.90 in this case)
  • <WORKLOAD_CLUSTER_VIP> is the static VIP assigned to the workload cluster (192.168.100.91 in this case)
  • <SESSION_KEY> is the $SESSION_KEY variable defined earlier (b65ed56e44ab748239cb76284b5b432c in this case)
  • <CLIENT_SECRET> is the $CLIENT_SECRET variable defined earlier (d621e20cb102a9b6af4aa9edd6a65ec0 in this case)
  • <INSERT_DEX_CA_CERT> is the Dex certificate that we made note of earlier, indented to line up under the ca: | key (one way to insert it is shown after this list).
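
Getting the certificate indented correctly under the ca: | key is easy to get wrong by hand. One option, after deleting the <INSERT_DEX_CA_CERT> placeholder line, is to append it programmatically; this sketch pulls the certificate from the management cluster (where the dex-cert-tls secret lives) and indents it four spaces:

kubectl --context tkg-mgmt-admin@tkg-mgmt get secret dex-cert-tls -n tanzu-system-auth \
  -o 'go-template={{ index .data "ca.crt" }}' | base64 -d | sed 's/^/    /' >> gangway-data-values.yaml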

The following is what my gangway-data-values.yaml file looks like when completed:

#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
gangway:
  config:
    clusterName: auth-cluster
    MGMT_CLUSTER_IP: 192.168.100.90
    clientID: auth-cluster
    APISERVER_URL: 192.168.100.91
  secret:
    sessionKey: b65ed56e44ab748239cb76284b5b432c
    clientSecret: d621e20cb102a9b6af4aa9edd6a65ec0
dns:
  vsphere:
    #@overlay/replace
    ipAddresses: [192.168.100.91]
dex:
  ca: |
    -----BEGIN CERTIFICATE-----
    MIIDGjCCAgKgAwIBAgIRAIHdE4HsK57jnTPimCmbjKUwDQYJKoZIhvcNAQELBQAw
    IzEPMA0GA1UEChMGdm13YXJlMRAwDgYDVQQDEwd0a2ctZGV4MB4XDTIwMTAwODAw
    MDgzOVoXDTIxMDEwNjAwMDgzOVowIzEPMA0GA1UEChMGdm13YXJlMRAwDgYDVQQD
    Ewd0a2ctZGV4MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyddzNVHt
    iyrEN7okr8wSEW9EYE7m3swC+njl7v+a8HvnWMPv09F0VdsVMihnQDqDOQBqaRqv
    AVGmWw63NzvQCs4Ha3c43a3VwhptpN3B/hHfziELvcuYXtn6Rj1LUiRClSNtW3r+
    AjPb3VFY6mSenvggVCFflsjWpUijagE6W0BfK5GqlcBGJnSaZPL+c2p6R4VXbrxu
    IrsCRCTwftEheKIqmERmsU4oT/wgTzX1suSv5wYf1hTO9XJsZTwJvAFJL/oxIZfg
    rtwsrwY6FvICYF3hPz2lDam7KSAGF+hIfx3tryUHXt+ZoKNTr7Fe4gzP6Zi4hhab
    UQVhkyU7ls7pcwIDAQABo0kwRzAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUH
    AwIwDAYDVR0TAQH/BAIwADAYBgNVHREEETAPggd0a2ctZGV4hwTAqG5lMA0GCSqG
    SIb3DQEBCwUAA4IBAQBJ+W8GALy+F9qav+HD3qQC+Jms5reDuB5E0bkWVccT4nM1
    p3Zt0+6vV+evEHxFLdk2wQeHavB6kXbpkht4X4x4OAx8QrRp9hLfwsr4ZWE8ph+C
    b1Kz8z3DQdNtsaXQPO+nkE+is8MDzhIGthWQ2R3qieP+fbeeLjE4E6ZoSHQDuSgv
    PgMafM33Zli3kYjsRJ3tfQfRHHhqObKKgpGwf6aF+Adgt9yFO/LkqyRE57vxqwuO
    3PY7bvB2N4KSXvMfauKa8+O575uRV7lCn7Qb/fnRbnrOJ23vbsq9kww3TkQsiWCl
    yQGP+oLYbqaymKcZw2fWwUvsYnXc03z6KutizMbV
    -----END CERTIFICATE-----

Once you have your gangway-data-values.yaml file configured to your liking, you’ll need to create a secret out of it.

kubectl -n tanzu-system-auth create secret generic gangway-data-values --from-file=values.yaml=gangway-data-values.yaml

secret/gangway-data-values created

You can run the following command to validate that the secret is created and contains the same information that is present in the gangway-data-values.yaml file:

kubectl get secret gangway-data-values -n tanzu-system-auth -o 'go-template={{ index .data "values.yaml" }}' | base64 -d

If the secret checks out, you can proceed with creating the gangway extension and app…but first we need to update the dex-data-values secret in the management cluster with some of the values we just created.

Switch your context back to the management cluster.

kubectl config use-context tkg-mgmt-admin@tkg-mgmt

Open up the dex-data-values.yaml file in a text editor and update the placeholder values in the staticClients: section with the gangway information.

Before:

    #@overlay/replace
    staticClients:
    - id: WORKLOAD_CLUSTER_NAME
      redirectURIs:
      - 'https://WORKLOAD_CLUSTER_IP:30166/callback'
      name: WORKLOAD_CLUSTER_NAME
      secret: CLIENT_SECRET

After:

    #@overlay/replace
    staticClients:
    - id: auth-cluster
      redirectURIs:
      - 'https://192.168.100.91:30166/callback'
      name: auth-cluster
      secret: d621e20cb102a9b6af4aa9edd6a65ec0

Issue the following command to update the dex-data-values secret:

kubectl create secret generic dex-data-values --from-file=values.yaml=dex-data-values.yaml -n tanzu-system-auth -o yaml --dry-run=client | kubectl replace -f -
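
kapp-controller will pick up the changed secret on its next reconciliation of the dex app; you can watch for it to report a successful reconcile before moving on:

kubectl -n tanzu-system-auth get app dex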

Switch your context back to the workload cluster.

kubectl config use-context auth-cluster-admin@auth-cluster

And now you can proceed with deploying the gangway bits to the workload cluster.

kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/authentication/gangway/gangway-extension.yaml

extension.clusters.tmc.cloud.vmware.com/gangway configured

You can check the status of the extension and app to make sure everything is getting deployed as expected.

kubectl -n tanzu-system-auth get extension gangway

NAME      STATE   HEALTH   VERSION
gangway   3
kubectl -n tanzu-system-auth get app gangway -o yaml

  inspect:
    exitCode: 0
    stdout: |-
      Target cluster 'https://100.64.0.1:443'
      Resources in app 'gangway-ctrl'
      Namespace          Name                          Kind                Owner    Conds.  Rs  Ri  Age
      tanzu-system-auth  dex-ca                        ConfigMap           kapp     -       ok  -   31s
      ^                  gangway                       Deployment          kapp     2/2 t   ok  -   31s
      ^                  gangway                       Secret              kapp     -       ok  -   31s
      ^                  gangway-76dfb45b95            ReplicaSet          cluster  -       ok  -   31s
      ^                  gangway-76dfb45b95-hldl9      Pod                 cluster  4/4 t   ok  -   30s
      ^                  gangway-cert                  Certificate         kapp     1/1 t   ok  -   30s
      ^                  gangway-cert-bqsgv            CertificateRequest  cluster  1/1 t   ok  -   29s
      ^                  gangway-selfsigned-ca-issuer  Issuer              kapp     1/1 t   ok  -   30s
      ^                  gangway-ver-1                 ConfigMap           kapp     -       ok  -   31s
      ^                  gangwaysvc                    Endpoints           cluster  -       ok  -   30s
      ^                  gangwaysvc                    Service             kapp     -       ok  -   30s
      Rs: Reconcile state
      Ri: Reconcile information
      11 resources
      Succeeded
    updatedAt: "2020-10-08T00:48:16Z"

When the deployment is done, you should have a single pod running in the tanzu-system-auth namespace.

kubectl -n tanzu-system-auth get po

NAME                       READY   STATUS    RESTARTS   AGE
gangway-76dfb45b95-hldl9   1/1     Running   0          36s

As with the Dex certificate, you can also grab the Gangway certificate and import it into your Trusted Root Certification Authorities store to avoid certificate errors.

kubectl get secret gangway-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -d

-----BEGIN CERTIFICATE-----
MIIDJTCCAg2gAwIBAgIQbxhDlpaYxZFBoMrQj4B3vTANBgkqhkiG9w0BAQsFADAn
MQ8wDQYDVQQKEwZ2bXdhcmUxFDASBgNVBAMTC3RrZy1nYW5nd2F5MB4XDTIwMTAw
ODAwNDc0OFoXDTIxMDEwNjAwNDc0OFowJzEPMA0GA1UEChMGdm13YXJlMRQwEgYD
VQQDEwt0a2ctZ2FuZ3dheTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ANLkwn+Dz3YANL+cA57EglY54mDwJ1xSeZtuswreAZ5J/v/Q3aDnIJYCQZkmbCm5
NHdxz656ySTakaCk4s9EdN0rEu98MjjMjn+JcMLNm6gG8wcez+X/EJ/jd2ykl//M
ULj8OaMDn6IQh5ZsY706GqTC+SaIw7q9AviR34wuAZtbLnHlV6Kmhc152BXGclSs
UPTKNVlTPLrKw/U4O3Nn8dRZBzQCDI4yGBNzONEY1diDh4/aULfimZsSKggSq7Jh
y2NJ6T2his77qvDna37gc7fvAJklkUatQb77D7VLIiLtamMC39HmqYjy2FZTCaVZ
h8viV5dfypZ24Bfw3ClBUj8CAwEAAaNNMEswHQYDVR0lBBYwFAYIKwYBBQUHAwEG
CCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHAYDVR0RBBUwE4ILdGtnLWdhbmd3YXmH
BMCobmcwDQYJKoZIhvcNAQELBQADggEBAAntxeVqMOv4D0vc8K2IVYRZs9SQ5dZL
VzjVrNwreg9gMeHvNcl/AuJcbFyJWQF/1JFpeFEzOoFK7HGZkl3HEvaWC1pSiizJ
TOKKk/WsFIcWqw5WIXakGgx2ZLlB1dLFKikp5ar6fY/38aAR0vhtNVKe/9FbruTM
jZ9J8nj6DLnee497ELOI+gpw9lN7ZGzC2h+XDtdZfTUGLKpDLTcEwWp73JBTEEiB
4k2ZGbutMA2KjdzF3Fk025QFFhwZt/IC9ZphSsZMu4IXXFN7WqdFlbhnn61/FV3K
zp1Ue8HAk+lj9VyLROSJauclUCZThQENaQESEhl0dTfDB64P3hwNTaE=
-----END CERTIFICATE-----

If you launch a browser and navigate to the static VIP on your workload cluster at port 30166 (https://192.168.100.91:30166 in this case) you’ll be presented with an opportunity to log in.

Click the Sign In button.

Enter credentials that you would like to use for authenticating against your workload cluster.

Click the Login button.

You’ll then be presented with a page containing the commands needed to configure kubectl access to the cluster.

If you watch the logs from the dex pod in the management cluster, you can see the results of the query.

{"level":"info","msg":"performing ldap search cn=Users,dc=corp,dc=tanzu sub (\u0026(objectClass=person)(userPrincipalName=tkguser@corp.tanzu))","time":"2020-10-08T12:51:51Z"}
{"level":"info","msg":"username \"tkguser@corp.tanzu\" mapped to entry CN=tkguser,CN=Users,DC=corp,DC=tanzu","time":"2020-10-08T12:51:51Z"}
{"level":"info","msg":"performing ldap search dc=corp,dc=tanzu sub (\u0026(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=tkguser,CN=Users,DC=corp,DC=tanzu))","time":"2020-10-08T12:51:51Z"}
{"level":"info","msg":"login successful: connector \"ldap\", username=\"tkguser\", preferred_username=\"\", email=\"tkguser@corp.tanzu\", groups=[\"Administrators\" \"tkgadmins\"]","time":"2020-10-08T12:51:51Z"}

The connection was successful and we know that the tkguser user is in the Administrators and tkgadmins groups. We’ll need to know this later for creating a clusterrolebinding.

You can click the Download Kubeconfig button and use that file with the --kubeconfig= switch when running kubectl commands, but using the commands noted on the TKG Authentication page results in your default kubeconfig file being created or updated instead. You can also change the name of the context being created in the kubectl config set-context command (tkguser is what was used in this example):

echo "-----BEGIN CERTIFICATE-----
MIICyzCCAbOgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIwMTAwODAwMDYwOVoXDTMwMTAwNjAwMTEwOVowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMwB
gto7AfeiY7B8btDAPiMNk9WhxhvujaUFRZR6596mhFsiWZ+YXNcxeFnrfQ8NtedR
NGEX4KX+1AfEHNmQVqju/qs1k4koqQHtyOH7TPqKUswyACsOpeskIq2Zp+s5m4ox
gKZPneXS0t0H+JOq98NehPWc0PolhGM8JENgSEpctFA1gmLMR5PAUm7fnDRYe9DU
AMbiNDiiadVG71e3W+OY6IoVOqhLGTOL3+SLNFtG541v+EimsDu+pL/r5xCkrMwd
ya3/ozcHEt1m0rRboOMHn6tWeGeWfCYvFWBHY/OfFjr5c66xstcIniwZ3eowg0Hj
0U+bsUhG1B6NL/nyprcCAwEAAaMmMCQwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB
/wQIMAYBAf8CAQAwDQYJKoZIhvcNAQELBQADggEBABUR5xz90PMnUIAzIriF47S0
loG1f8Bgt0jTRg93CsM4xN5F+OPxfxmSGYS+a7ihtgmW/W5NCOt71v8B1UM5RG36
mXDvGoj4V38lsGN3x0AIZI0hqZYNbbtm/cDCmDisGMHaSbqAFemroXRvKHDWZh09
j3pfpd3YkGN9RxltMZDTuHmjpViIbYK+8ckiWjXCLTecWH3O+9p9vCdoU9r2RQJp
B6/sXFC3z+xI27kKJNr75XD4TdOQCzy0rKrnqkzjHRKAJX9DVYp8N/7dnHZcLxmD
Vg1yRPLvchhImuQlk5bL2X7vC1g7LGXpIB1mGegWBaNYUC6Cm9AC3Cb72kOP8gY=
-----END CERTIFICATE-----
" \ > ca-auth-cluster.pem

kubectl config set-cluster auth-cluster --server=https://192.168.100.91:6443 --certificate-authority=ca-auth-cluster.pem --embed-certs
kubectl config set-credentials tkguser@corp.tanzu@auth-cluster  \
    --auth-provider=oidc  \
    --auth-provider-arg='idp-issuer-url=https://192.168.100.90:30167'  \
    --auth-provider-arg='client-id=auth-cluster'  \
    --auth-provider-arg='client-secret=d621e20cb102a9b6af4aa9edd6a65ec0' \
    --auth-provider-arg='refresh-token=Chl4bmhvNHRzZm0zaGJmZ2E2d2wzNHoyN2V6EhluNnp0N2lmZG40NnN1bWx3enpidGRiZTVu' \
    --auth-provider-arg='id-token=eyJhbGciOiJSUzI1NiIsImtpZCI6IjI5OWY1MTBhYTRiZGRlZDdiNjEyNTBmNmQ2N2UwMzUwMjYzMWFjOGMifQ.eyJpc3MiOiJodHRwczovLzE5Mi4xNjguMTEwLjEwMTozMDE2NyIsInN1YiI6IkNpUkRUajEwYTJkMWMyVnlMRU5PUFZWelpYSnpMRVJEUFdOdmNuQXNSRU05ZEdGdWVuVVNCR3hrWVhBIiwiYXVkIjoiYXV0aC1jbHVzdGVyIiwiZXhwIjoxNjAyMTE4ODAwLCJpYXQiOjE2MDIxMTg1MDAsImF0X2hhc2giOiJEaERaRTFQc1Q3Y2NLRUFNaXM2aF9RIiwiZW1haWwiOiJ0a2d1c2VyQGNvcnAudGFuenUiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiZ3JvdXBzIjpbIkFkbWluaXN0cmF0b3JzIl0sIm5hbWUiOiJ0a2d1c2VyIn0.yye05--cOlum3lj4du6DLXTKOfH2UA1vrfzwJisjs_XEFApkLoVpySXi6tQ8AStLTDjae5TaO_FHDHFo53xINchVe2ZgWu1oBdv15615DnXF-jupi20HmN99YlPqw5Mc_2gQv9iSb5XSc3Zd5GSSSWYofYFZBoqjIjzsIf2QssVc2hnHZyUsvWj4AOV-QwhbRlaHqc-SfjRv6iK8RQ-JdcPk4ESV0vrkwOBHJmG5TgkfXU784K5_GFfgY8AUmMYNmTpVx-U8XBdWKpnDh0Cqh4_7vT0NsaOlZ-w_nvvDNA9mIyuUujmISOPo8QQ60ngRGONDY2GRh7fREo_o7YJTvw' \
    --auth-provider-arg='idp-certificate-authority-data=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHakNDQWdLZ0F3SUJBZ0lSQUlIZEU0SHNLNTdqblRQaW1DbWJqS1V3RFFZSktvWklodmNOQVFFTEJRQXcKSXpFUE1BMEdBMVVFQ2hNR2RtMTNZWEpsTVJBd0RnWURWUVFERXdkMGEyY3RaR1Y0TUI0WERUSXdNVEF3T0RBdwpNRGd6T1ZvWERUSXhNREV3TmpBd01EZ3pPVm93SXpFUE1BMEdBMVVFQ2hNR2RtMTNZWEpsTVJBd0RnWURWUVFECkV3ZDBhMmN0WkdWNE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeWRkek5WSHQKaXlyRU43b2tyOHdTRVc5RVlFN20zc3dDK25qbDd2K2E4SHZuV01QdjA5RjBWZHNWTWloblFEcURPUUJxYVJxdgpBVkdtV3c2M056dlFDczRIYTNjNDNhM1Z3aHB0cE4zQi9oSGZ6aUVMdmN1WVh0bjZSajFMVWlSQ2xTTnRXM3IrCkFqUGIzVkZZNm1TZW52Z2dWQ0ZmbHNqV3BVaWphZ0U2VzBCZks1R3FsY0JHSm5TYVpQTCtjMnA2UjRWWGJyeHUKSXJzQ1JDVHdmdEVoZUtJcW1FUm1zVTRvVC93Z1R6WDFzdVN2NXdZZjFoVE85WEpzWlR3SnZBRkpML294SVpmZwpydHdzcndZNkZ2SUNZRjNoUHoybERhbTdLU0FHRitoSWZ4M3RyeVVIWHQrWm9LTlRyN0ZlNGd6UDZaaTRoaGFiClVRVmhreVU3bHM3cGN3SURBUUFCbzBrd1J6QWRCZ05WSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFZQmdOVkhSRUVFVEFQZ2dkMGEyY3RaR1Y0aHdUQXFHNWxNQTBHQ1NxRwpTSWIzRFFFQkN3VUFBNElCQVFCSitXOEdBTHkrRjlxYXYrSEQzcVFDK0ptczVyZUR1QjVFMGJrV1ZjY1Q0bk0xCnAzWnQwKzZ2VitldkVIeEZMZGsyd1FlSGF2QjZrWGJwa2h0NFg0eDRPQXg4UXJScDloTGZ3c3I0WldFOHBoK0MKYjFLejh6M0RRZE50c2FYUVBPK25rRStpczhNRHpoSUd0aFdRMlIzcWllUCtmYmVlTGpFNEU2Wm9TSFFEdVNndgpQZ01hZk0zM1psaTNrWWpzUkozdGZRZlJISGhxT2JLS2dwR3dmNmFGK0FkZ3Q5eUZPL0xrcXlSRTU3dnhxd3VPCjNQWTdidkIyTjRLU1h2TWZhdUthOCtPNTc1dVJWN2xDbjdRYi9mblJibnJPSjIzdmJzcTlrd3czVGtRc2lXQ2wKeVFHUCtvTFlicWF5bUtjWncyZld3VXZzWW5YYzAzejZLdXRpek1iVgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=='
kubectl config set-context tkguser --cluster=auth-cluster --user=tkguser@corp.tanzu@auth-cluster
kubectl config use-context tkguser
rm ca-auth-cluster.pem

Create a clusterrolebinding specification for an AD group that the AD user is a member of (tkgadmins is the group and tkguser is the user in this example, both pre-created in Active Directory):

echo "
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tkgadmins
subjects:
  - kind: Group
    name: tkgadmins
    apiGroup: ""
roleRef:
  kind: ClusterRole #this must be Role or ClusterRole
  name: cluster-admin # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io" > tkgadmin-clusterrolebinding.yaml

Switch your context back to the one TKG created for the workload cluster:

kubectl config use-context auth-cluster-admin@auth-cluster

Switched to context "auth-cluster-admin@auth-cluster".

Create the clusterrolebinding in the workload cluster:

kubectl create -f tkgadmin-clusterrolebinding.yaml

clusterrolebinding.rbac.authorization.k8s.io/tkgadmins created

Switch your context to the tkguser context in the workload cluster and validate that the clusterrolebinding is allowing access:

kubectl config use-context tkguser 

Switched to context "tkguser". 
kubectl get nodes 

NAME                                 STATUS   ROLES    AGE   VERSION
auth-cluster-control-plane-6q9kh     Ready    master   53m   v1.19.1+vmware.2
auth-cluster-md-0-575d5c75c8-kbqhx   Ready    <none>   51m   v1.19.1+vmware.2
