This is loosely based on the instructions at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-manage-instance-authentication.html with some help from https://brianragazzi.wordpress.com/2020/05/12/configure-tanzu-kubernetes-grid-to-use-active-directory/. It was not a trivial process, but the end result was worth it: being able to access your TKG clusters with an AD username and password.
I started out with a TKG management cluster deployed via the instructions in my blog post, Installing TKG 1.1 on vSphere, Step-By-Step, and then deployed MetalLB per my recent blog post, How to Deploy MetalLB with BGP in a Tanzu Kubernetes Grid 1.1 Cluster. Make sure that the TKG Extensions files are downloaded and extracted. You’ll need access to your Active Directory infrastructure, or access to someone who does.
A few important notes:
- My AD domain is named corp.local
- My domain controller is named controlcenter.
- I have a Linux VM named cli-vm where I execute most of the configuration commands.
- The pre-configured Dex and Gangway port numbers are changed in the configuration files since we’ll be accessing them via a LoadBalancer service instead of the default NodePort service.
You’ll need to export the AD CA certificate from a domain controller in Active Directory. The following steps are specific to my environment but can easily be modified.
- Start, Run, certsrv.msc
- Right-click CONTROLCENTER-CA, select Properties.
- Click the View Certificate button.
- On the Details tab, click the Copy to File button.
- Choose Base-64 encoded X.509 for the format.
- Set the file name to controlcenter-ca.cer.
Copy the controlcenter-ca.cer file to the /home/ubuntu folder on the cli-vm VM.
Note: The rest of the steps are carried out on the cli-vm.
Get the base64-encoded contents of the controlcenter-ca.cer file with no line breaks:
cat controlcenter-ca.cer | base64 | tr -d "\n\r"
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURiVENDQWxXZ0F3SUJBZ0lRZHV3Z0J1TDZOcHhFSkxaTkpZN1pHekFOQmdrcWhraUc5dzBCQVFVRkFEQkkKTVJVd0V3WUtDWkltaVpQeUxHUUJHUllGYkc5allXd3hGREFTQmdvSmtpYUprL0lzWkFFWkZnUmpiM0p3TVJrdwpGd1lEVlFRREV4QkRUMDVVVWs5TVEwVk9WRVZTTFVOQk1DQVhEVEUwTURNd05qRTNNamcxTVZvWUR6SXdOalF3Ck16QTJNVGN6T0RVd1dqQklNUlV3RXdZS0NaSW1pWlB5TEdRQkdSWUZiRzlqWVd3eEZEQVNCZ29Ka2lhSmsvSXMKWkFFWkZnUmpiM0p3TVJrd0Z3WURWUVFERXhCRFQwNVVVazlNUTBWT1ZFVlNMVU5CTUlJQklqQU5CZ2txaGtpRwo5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBclcyTTJ5SUNXbjFJUE5zekxCcXFWaldabGtwMlR0cTJ4eFlnCkM2YTRzd0lTaW9Dc2R2cnhFT1Azd1pmRFlKRjFMUmpaYXc5THJxa1lRdExRQW9JR0JDRHJRM1RWdlpVK1BtdEMKYWtReXpZb2U2K0hqYjNoZmM3RkJYanFmckVZM0RNc2V1NjlDTWVYcVVXNHZoeVJ2Y3RRYURkYmozWjkxKzV2ZApJa3hxMFB3c3lpQmFIcjhhNkg4RWd5WE9ZbDFJMmhwSTRWaGczaitoMzNmVWJaZVpIV20vc2Y4bGRmalRmbVVJCnBFdnlnenRMNEFuVHQ1VVFiNVNYRDBkc2lyZS93Tmg3c3lZUFFvTW5PL0hqSFlTSDhFcFFsZkVnYWRmV0NIdU0KYWkxdmc4NTNSRGNuakpFZ3l3eEFNTnE0WEUrU1JBRUN2TjZWS2xkUk9WU1AzMnFxaFFJREFRQUJvMUV3VHpBTApCZ05WSFE4RUJBTUNBWVl3RHdZRFZSMFRBUUgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVckxIOHorbnhZSm5aCjdLdGJWYWRDc1Y4V0toNHdFQVlKS3dZQkJBR0NOeFVCQkFNQ0FRQXdEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUIKQUFNZm9TSnFTNkpiVEgxaklmMFNCVWRYZkxWa0RWWEh3bEZ0WWFRV3B3NndSR3FNSGJkRXZhWkdHZzl5NVVXdApINlZyNEpSOEtHWlkxeGpUK0Q4WkM5dDNWQ3V5R1ZZbE1WdEU4UkE0em9zYUVDODc3UDJjOFRFb0RuaVV0TG5mCkZpZmtweEZsbllXWDRTL2sxcDJRK2VsSVVwc1MrNjNFeC9wSFlqbGVCZEl2TmVZbHREZXlZMnRKMkl2VFR4bXYKZzR1MWhITjhJNTZiQ3JtbE95MUdrTm9aVCsxVUFuV250VWVnWCs0dnVZdXYzTDU4MHBET3FmeEtpNmljZWROUAplVDJtYWE2aXVxK1dJbW5zNjRtR3NHdlc3c2RrQWVvOEFMTVlsaHV5ODZPSkZjK2o4djZocTNabHI4cGEyU1VaCkU3Y1krbldlcm9HR2QzT1VIdzM5ckdnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
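As an optional sanity check, you can decode that string right back and ask openssl to print the certificate subject. If the encoding is clean, you get the CA’s subject line instead of an error:
cat controlcenter-ca.cer | base64 | tr -d "\n\r" | base64 -d | openssl x509 -noout -subject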
Get the IP address of the control plane node (192.168.100.101 in this example):
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
vsphere-mgmt-control-plane-h5q99 Ready master 17h v1.18.2+vmware.1 192.168.100.101 192.168.100.101 VMware Photon OS/Linux 4.19.115-3.ph3 containerd://1.3.4
vsphere-mgmt-md-0-5b75dfc9cc-gdgp2 Ready <none> 17h v1.18.2+vmware.1 192.168.100.102 192.168.100.102 VMware Photon OS/Linux 4.19.115-3.ph3 containerd://1.3.4
The TKG extensions include some base-level configuration files for deploying Dex and Gangway. We’ll need to make some changes to them so that they are applicable to our environment. Keep in mind that I have extracted the extensions bundle to the /home/ubuntu folder on my Linux VM.
Replace <MGMT_CLUSTER_IP1> with the IP address of the control plane node and set <MGMT_CLUSTER_IP2> to the load balancer address (we’ll use 10.40.14.0) in the 02-certs-selfsigned.yaml file:
sed -i 's/<MGMT_CLUSTER_IP1>/192.168.100.101/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml
sed -i 's/<MGMT_CLUSTER_IP2>/10.40.14.0/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml
Add a line for the Dex FQDN (tkg-dex.corp.local) in the 02-certs-selfsigned.yaml file:
sed -i '29 i\ - tkg-dex.corp.local' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml
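Since the i\ insert above is line-number sensitive, it’s worth printing the surrounding lines to confirm the new entry landed inside the dnsNames list (adjust the 29 in the previous command if your copy of the file differs):
sed -n '25,35p' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml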
Replace <LDAP_HOST> with the FQDN of the domain controller (controlcenter.corp.local) in the 03-cm.yaml file:
sed -i 's/<LDAP_HOST>/controlcenter.corp.local/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
Replace <MGMT_CLUSTER_IP> with the Dex FQDN (tkg-dex.corp.local) and port 30167 with port 5556 in the 03-cm.yaml file:
sed -i 's/<MGMT_CLUSTER_IP>:30167/tkg-dex.corp.local:5556/' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
Set the rootCAData value in the 03-cm.yaml file, using the base64-encoded controlcenter-ca.cer contents (no line breaks) generated earlier. Since the base64 string can contain / characters, the command below uses | as the sed delimiter:
sed -i 's|# rootCAData: ( base64 encoded PEM file )|rootCAData: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURiVENDQWxXZ0F3SUJBZ0lRZHV3Z0J1TDZOcHhFSkxaTkpZN1pHekFOQmdrcWhraUc5dzBCQVFVRkFEQkkKTVJVd0V3WUtDWkltaVpQeUxHUUJHUllGYkc5allXd3hGREFTQmdvSmtpYUprL0lzWkFFWkZnUmpiM0p3TVJrdwpGd1lEVlFRREV4QkRUMDVVVWs5TVEwVk9WRVZTTFVOQk1DQVhEVEUwTURNd05qRTNNamcxTVZvWUR6SXdOalF3Ck16QTJNVGN6T0RVd1dqQklNUlV3RXdZS0NaSW1pWlB5TEdRQkdSWUZiRzlqWVd3eEZEQVNCZ29Ka2lhSmsvSXMKWkFFWkZnUmpiM0p3TVJrd0Z3WURWUVFERXhCRFQwNVVVazlNUTBWT1ZFVlNMVU5CTUlJQklqQU5CZ2txaGtpRwo5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBclcyTTJ5SUNXbjFJUE5zekxCcXFWaldabGtwMlR0cTJ4eFlnCkM2YTRzd0lTaW9Dc2R2cnhFT1Azd1pmRFlKRjFMUmpaYXc5THJxa1lRdExRQW9JR0JDRHJRM1RWdlpVK1BtdEMKYWtReXpZb2U2K0hqYjNoZmM3RkJYanFmckVZM0RNc2V1NjlDTWVYcVVXNHZoeVJ2Y3RRYURkYmozWjkxKzV2ZApJa3hxMFB3c3lpQmFIcjhhNkg4RWd5WE9ZbDFJMmhwSTRWaGczaitoMzNmVWJaZVpIV20vc2Y4bGRmalRmbVVJCnBFdnlnenRMNEFuVHQ1VVFiNVNYRDBkc2lyZS93Tmg3c3lZUFFvTW5PL0hqSFlTSDhFcFFsZkVnYWRmV0NIdU0KYWkxdmc4NTNSRGNuakpFZ3l3eEFNTnE0WEUrU1JBRUN2TjZWS2xkUk9WU1AzMnFxaFFJREFRQUJvMUV3VHpBTApCZ05WSFE4RUJBTUNBWVl3RHdZRFZSMFRBUUgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVckxIOHorbnhZSm5aCjdLdGJWYWRDc1Y4V0toNHdFQVlKS3dZQkJBR0NOeFVCQkFNQ0FRQXdEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUIKQUFNZm9TSnFTNkpiVEgxaklmMFNCVWRYZkxWa0RWWEh3bEZ0WWFRV3B3NndSR3FNSGJkRXZhWkdHZzl5NVVXdApINlZyNEpSOEtHWlkxeGpUK0Q4WkM5dDNWQ3V5R1ZZbE1WdEU4UkE0em9zYUVDODc3UDJjOFRFb0RuaVV0TG5mCkZpZmtweEZsbllXWDRTL2sxcDJRK2VsSVVwc1MrNjNFeC9wSFlqbGVCZEl2TmVZbHREZXlZMnRKMkl2VFR4bXYKZzR1MWhITjhJNTZiQ3JtbE95MUdrTm9aVCsxVUFuV250VWVnWCs0dnVZdXYzTDU4MHBET3FmeEtpNmljZWROUAplVDJtYWE2aXVxK1dJbW5zNjRtR3NHdlc3c2RrQWVvOEFMTVlsaHV5ODZPSkZjK2o4djZocTNabHI4cGEyU1VaCkU3Y1krbldlcm9HR2QzT1VIdzM5ckdnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==|' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
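You can confirm the value landed (and that the commented placeholder is gone) with a quick grep; the first 60 characters are enough to recognize the base64 string:
grep rootCAData tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml | cut -c 1-60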
Note: If you later have issues with communication between Dex and AD, look for signs of certificate problems in the Dex logs. A quick way around them is to set the insecureSkipVerify value to true in the 03-cm.yaml file.
Update the AD parameters in the 03-cm.yaml file. I’m setting administrator@corp.local as the account to be used for querying AD (and its password), defining the search bases for user and group queries, and instructing Dex on how to format queries against AD; the resulting connector section is sketched after these commands:
sed -i 's/# bindDN: uid=serviceaccount,cn=users,dc=vmware,dc=com/bindDN: cn=administrator,cn=users,dc=corp,dc=local/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/# bindPW: password/bindPW: VMware1!/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/ou=people,dc=vmware,dc=com/cn=users,dc=corp,dc=local/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/posixAccount/person/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/username: uid/username: userPrincipalName/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/idAttr: uid/idAttr: DN/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/emailAttr: mail/emailAttr: userPrincipalName/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/givenName/cn/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/ou=group,dc=vmware,dc=com/dc=corp,dc=local/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/posixGroup/group/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/userAttr: uid/userAttr: DN/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i 's/memberUid/"member:1.2.840.113556.1.4.1941:"/' tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
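For reference, after all of these edits the ldap connector section of 03-cm.yaml should look roughly like the following. This is a sketch based on my values and the Dex LDAP connector format; the exact set of surrounding keys (and the host port) depends on your copy of the template, so treat it as a guide rather than something to paste in:
connectors:
- type: ldap
  id: ldap
  name: LDAP
  config:
    host: controlcenter.corp.local:636
    rootCAData: LS0tLS1CRUdJTi...   # the long base64 string set earlier
    bindDN: cn=administrator,cn=users,dc=corp,dc=local
    bindPW: VMware1!
    userSearch:
      baseDN: cn=users,dc=corp,dc=local
      filter: "(objectClass=person)"
      username: userPrincipalName
      idAttr: DN
      emailAttr: userPrincipalName
      nameAttr: cn
    groupSearch:
      baseDN: dc=corp,dc=local
      filter: "(objectClass=group)"
      userAttr: DN
      groupAttr: "member:1.2.840.113556.1.4.1941:"
      nameAttr: cn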
Change the dexsvc service to type LoadBalancer and specify an IP address (10.40.14.0) in the 06-service.yaml file:
sed -i 's/NodePort/LoadBalancer/' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/06-service.yaml
sed -i '10 i\ loadBalancerIP: 10.40.14.0' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/06-service.yaml
We’re just about ready to deploy Dex. Verify that the current context is set to the management cluster:
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* vsphere-mgmt-admin@vsphere-mgmt vsphere-mgmt vsphere-mgmt-admin
Create the Dex components:
kubectl apply -f tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/01-namespace.yaml
namespace/tanzu-system-auth created
kubectl apply -f tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml
issuer.cert-manager.io/dex-ca-issuer created
certificate.cert-manager.io/dex-cert created
kubectl apply -f tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
configmap/dex created
kubectl apply -f tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/04-rbac.yaml
serviceaccount/dex created
clusterrole.rbac.authorization.k8s.io/dex created
clusterrolebinding.rbac.authorization.k8s.io/dex created
kubectl apply -f tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/05-deployment.yaml
deployment.apps/dex created
kubectl apply -f tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/06-service.yaml
service/dexsvc created
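Before moving on, check that MetalLB handed dexsvc its address; the EXTERNAL-IP column should show 10.40.14.0 (if it’s stuck in <pending>, revisit the MetalLB configuration):
kubectl -n tanzu-system-auth get svc dexsvc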
Export the Dex certificate as we’ll need it later:
kubectl -n tanzu-system-auth get secret dex-cert-tls -o 'go-template={{ index .data "ca.crt" }}' | base64 -d > dex-ca.crt
We need to use an OIDC plan for our workload cluster; it doesn’t exist yet, so we’ll have to copy the OIDC cluster template file to the providers directory:
cp tkg-extensions-v1.1.0/authentication/dex/vsphere/cluster-template-oidc.yaml .tkg/providers/infrastructure-vsphere/v0.6.4/
Set the appropriate environment variables (192.168.100.101 is the IP address of the control plane node in the management cluster):
export OIDC_ISSUER_URL=https://192.168.100.101:30167
export OIDC_USERNAME_CLAIM=email
export OIDC_GROUPS_CLAIM=groups
export DEX_CA=$(kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -d | gzip | base64)
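The DEX_CA variable is the Dex CA certificate, gzipped and then base64-encoded (presumably so the cluster template can embed it compactly). A quick way to prove the variable holds what you think it does is to reverse the pipeline; you should see the PEM header:
echo "$DEX_CA" | base64 -d | gunzip | head -1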
Now we’re ready to create the workload cluster:
tkg create cluster vsphere-test -p oidc -c 1 -w 1
tkg get credentials vsphere-test
Credentials of workload cluster 'vsphere-test' have been saved
You can now access the cluster by switching the context to 'vsphere-test-admin@vsphere-test'
kubectl config use-context vsphere-test-admin@vsphere-test
Switched to context "vsphere-test-admin@vsphere-test".
Don’t forget to deploy MetalLB to the workload cluster and specify a different subnet than what was used in the management cluster. I used 10.40.14.0/27 in the management cluster and will use 10.40.14.32/27 in the workload cluster.
We need to install cert-manager in the workload cluster since it’s not there by default. Luckily, cert-manager is included in the extensions bundle:
kubectl apply -f tkg-extensions-v1.1.0/cert-manager/
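Give cert-manager a minute to come up before applying the Gangway manifests; in my environment the bundle deploys it into the cert-manager namespace, so you can watch for the pods to go Running with:
kubectl get po -n cert-manager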
Update the cluster-specific values in the 02-config.yaml file:
- vsphere-test is the cluster name
- tkg-dex.corp.local will be used in place of <MGMT_CLUSTER_IP>, and port 5556 in place of port 30167
- tkg-gangway.corp.local will be used in place of <WORKLOAD_CLUSTER_IP>, and port 30166 will be removed
- 192.168.100.103, the HA Proxy VM IP address in the workload cluster, will be used in place of <APISERVER_URL>
sed -i 's/<WORKLOAD_CLUSTER_NAME>/vsphere-test/g' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/02-config.yaml
sed -i 's/<MGMT_CLUSTER_IP>:30167/tkg-dex.corp.local:5556/g' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/02-config.yaml
sed -i 's/<WORKLOAD_CLUSTER_IP>:30166/tkg-gangway.corp.local/' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/02-config.yaml
sed -i 's/<APISERVER_URL>/192.168.100.103/' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/02-config.yaml
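After those edits, the relevant values in 02-config.yaml should read roughly as follows. This is a sketch assembled from the substitutions above; the key names follow the gangway config format and may differ slightly in your copy of the template:
clusterName: vsphere-test
apiServerURL: https://192.168.100.103:6443
authorizeURL: https://tkg-dex.corp.local:5556/auth
tokenURL: https://tkg-dex.corp.local:5556/token
redirectURL: https://tkg-gangway.corp.local/callback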
Generate a random value that will serve as both the session key and, base64-encoded, the clientSecret:
clientSecret=$(openssl rand -base64 32)
echo $clientSecret
oDjjNvy69iOImk16vYDB7yXmoxIpUmPedDxoDwP70hw=
echo -n "$clientSecret" | base64
b0Rqak52eTY5aU9JbWsxNnZZREI3eVhtb3hJcFVtUGVkRHhvRHdQNzBodz0=
Replace the sessionKey value in the 03-secret.example file with the one generated earlier. (Note: you will need to backslash-escape any / characters in the value when using it in a sed command; this prevents sed from interpreting the / as the end of the expression.)
sed -i 's/sesssionKey: zzzzz/sesssionKey: oDjjNvy69iOImk16vYDB7yXmoxIpUmPedDxoDwP70hw=/' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/03-secret.example
Replace the clientSecret value in the 03-secret.example file with the one generated earlier (base64-encoded):
sed -i 's/clientSecret: zzzz/clientSecret: b0Rqak52eTY5aU9JbWsxNnZZREI3eVhtb3hJcFVtUGVkRHhvRHdQNzBodz0=/' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/03-secret.example
Replace <WORKLOAD_CLUSTER_IP1> with the IP address of the control plane node (192.168.100.104 in this example) and set <WORKLOAD_CLUSTER_IP2> to the load balancer address (we’ll use 10.40.14.32) in the 04-cert-selfsigned.yaml file:
sed -i 's/<WORKLOAD_CLUSTER_IP1>/192.168.100.104/' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/04-cert-selfsigned.yaml
sed -i 's/<WORKLOAD_CLUSTER_IP2>/10.40.14.32/' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/04-cert-selfsigned.yaml
Add a line for the Gangway FQDN (tkg-gangway.corp.local) in the 04-cert-selfsigned.yaml file:
sed -i '29 i\ - tkg-gangway.corp.local' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/04-cert-selfsigned.yaml
Change the gangwaysvc service to type LoadBalancer and specify an IP address (10.40.14.32) in the 06-service.yaml file:
sed -i 's/NodePort/LoadBalancer/' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/06-service.yaml
sed -i '10 i\ loadBalancerIP: 10.40.14.32' ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/06-service.yaml
Create the first few Gangway components:
kubectl apply -f ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/01-namespace.yaml
namespace/tanzu-system-auth created
kubectl apply -f ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/02-config.yaml
configmap/gangway created
kubectl apply -f ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/03-secret.example
secret/gangway created
Create a configmap in the workload cluster from the exported Dex certificate:
kubectl create cm dex-ca -n tanzu-system-auth --from-file=dex-ca.crt=dex-ca.crt
Create the remaining Gangway components:
kubectl apply -f ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/04-cert-selfsigned.yaml
kubectl apply -f ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/05-deployment.yaml
kubectl apply -f ~/tkg-extensions-v1.1.0/authentication/gangway/vsphere/06-service.yaml
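As with Dex, verify that the gangway pod is Running and that gangwaysvc picked up its load balancer address (EXTERNAL-IP should show 10.40.14.32):
kubectl -n tanzu-system-auth get po,svc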
Add a staticClient entry for the Gangway instance (based on values from previous steps) to the tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml file, after the staticClients: [] line and before the connectors: line. (Note: the [] at the end of the staticClients line needs to be removed; the first sed below handles that.) This will be used to update the Dex ConfigMap in a later step; the resulting block is sketched after these commands:
sed -i 's/staticClients: \[]/staticClients:/' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i '23 i\ - id: vsphere-test' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i '24 i\ redirectURIs:' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i '25 i\ - https://tkg-gangway.corp.local/callback' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i '26 i\ name: vsphere-test' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i '27 i\ # echo -n $clientSecret' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
sed -i '28 i\ secret: oDjjNvy69iOImk16vYDB7yXmoxIpUmPedDxoDwP70hw=' ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
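For reference, the staticClients block in 03-cm.yaml should now look like this (the indentation must match the surrounding YAML in your file):
staticClients:
- id: vsphere-test
  redirectURIs:
  - https://tkg-gangway.corp.local/callback
  name: vsphere-test
  # echo -n $clientSecret
  secret: oDjjNvy69iOImk16vYDB7yXmoxIpUmPedDxoDwP70hw=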
Export the Gangway certificate:
kubectl -n tanzu-system-auth get secret gangway-cert-tls -o 'go-template={{ index .data "ca.crt" }}' | base64 -d > gangway-ca.crt
Switch your context back to the management cluster:
kubectl config use-context vsphere-mgmt-admin@vsphere-mgmt
Switched to context "vsphere-mgmt-admin@vsphere-mgmt".
Update the Dex configmap:
kubectl apply -f ~/tkg-extensions-v1.1.0/authentication/dex/vsphere/ldap/03-cm.yaml
configmap/dex configured
Restart the Dex pod (by deleting it):
kubectl -n tanzu-system-auth get po
NAME READY STATUS RESTARTS AGE
dex-7f9bbf769-c8cvv 1/1 Running 1 111m
kubectl -n tanzu-system-auth delete po dex-7f9bbf769-c8cvv
At this point, Dex and Gangway are configured but we need to do a little more work to be able to make use of them.
Create a context for an LDAP user on the workload cluster
Create DNS records mapping tkg-dex to 10.40.14.0 and tkg-gangway to 10.40.14.32.
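If you can’t create the DNS records right away, hosts-file entries work for testing. On the cli-vm that looks like the following (Windows clients need the equivalent entries in C:\Windows\System32\drivers\etc\hosts):
echo "10.40.14.0 tkg-dex.corp.local" | sudo tee -a /etc/hosts
echo "10.40.14.32 tkg-gangway.corp.local" | sudo tee -a /etc/hosts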
Copy the ~/dex-ca.crt and ~/gangway-ca.crt files from the cli-vm VM to your Windows VM. Use the certmgr.msc utility to import both into the Trusted Root Certification Authorities store.
Open the Active Directory Users and Computers application and create a new group named tkg and a new user named tkguser. Add the tkguser user to the tkg group.
Open https://tkg-gangway.corp.local in a browser.
Click the Sign In button.
Enter the UPN username (user@domain.suffix, tkguser@corp.local in this example) and password for a valid AD user.
Click the Login button.
You will see a page similar to the following:
You can click the Download Kubeconfig button and use the downloaded file with the --kubeconfig= switch when running kubectl commands, but using the commands noted on the TKG Authentication page results in a default kubeconfig file being created or updated. You can also change the name of the context being created in the kubectl config set-context command (tkguser is what was used in this example):
echo "-----BEGIN CERTIFICATE-----
MIICyzCCAbOgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIwMDUyODIwMTMxNFoXDTMwMDUyNjIwMTgxNFowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANlN
oJ55O4JdRMdZFCZFGeDYuQRYEVxFSxxhVFXhxESN3HRwsEKp+MbIP/y+jWE44+if
a1G2Ie7WCW4o8wkI/kVUCNZ5iLf7fYLUysbAuMpKvDYsf4/gcIQ5LjUUhxEfgUME
6v3yvhbV8xHS4LAIAOhCN0Xz8pn3I0nvgZzWRoRVIU3kr06MqpccZIw9TS+0dm48
1eA28w4KQ2i38Tzm+886Fi6XsxahIScdpB06wIZRDl10fgdpzh5uF9O0QP2qZxoc
q9P6gxnadzy5HzpNJho4pHWNXpCnsb9OoXV8HOyUoPtP24QxI9pUgRJdOZZJ+eqD
RVVCXzPzx3nVJ40AyEkCAwEAAaMmMCQwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB
/wQIMAYBAf8CAQAwDQYJKoZIhvcNAQELBQADggEBAAEKngihd/jbHA7SNNEqin9+
tZqBtmf3XUHGSAVzwBbitPrQWDCLeBzgzty3BVpcHRKTCyuzk3xsAMzHEKcYgHW0
aqK8qHPI/L869JWLoP7w4QrnGoM8gQx+6yChk6gxAFqQa5ttcufgZBJe7ViOgPaf
zERHZiO6X2Zj3vundIX9w9jLTMcH6Oo99osceIjU57KLtdFSBR4k0R4A7fEaJdZa
i8o/rr8K7dwnG7PIvL4uHOh/V3YSa9+FNLav3r7SyBOOVbEzROA8+/yAaaDKMyHU
mshnpqumAWkT/YppjKrO1R2t3zWxvRGIJTkwmbI+Y4h9duBBRHzhQTQgIZG/ROY=
-----END CERTIFICATE-----
" \ > ca-vsphere-test.pem
kubectl config set-cluster vsphere-test --server=https://192.168.100.103:6443/ --certificate-authority=ca-vsphere-test.pem --embed-certs
kubectl config set-credentials tkguser@corp.local@vsphere-test \
--auth-provider=oidc \
--auth-provider-arg='idp-issuer-url=https://tkg-dex.corp.local:5556' \
--auth-provider-arg='client-id=vsphere-test' \
--auth-provider-arg='client-secret=oDjjNvy69iOImk16vYDB7yXmoxIpUmPedDxoDwP70hw=' \
--auth-provider-arg='refresh-token=Chlqeml4NXNpc3Z5c3ZlamFleXZkc25mYnBrEhl4d2N5dXVvZXE3amQydXpmeGpwcDd4dmN6' \
--auth-provider-arg='id-token=eyJhbGciOiJSUzI1NiIsImtpZCI6IjQzMzQxMDhlOWMwMjVlN2Q5OWQ1NGJmZmUyZTE4MmRkOTU4ZjJhMTMifQ.eyJpc3MiOiJodHRwczovL3RrZy1kZXguY29ycC5sb2NhbDo1NTU2Iiwic3ViIjoiQ2lWRFRqMTBhMmNnZFhObGNpeERUajFWYzJWeWN5eEVRejFqYjNKd0xFUkRQV3h2WTJGc0VnUnNaR0Z3IiwiYXVkIjoidnNwaGVyZS10ZXN0IiwiZXhwIjoxNTkwNzkwMTQ2LCJpYXQiOjE1OTA3ODk4NDYsImF0X2hhc2giOiJqT1kwTGQ3LTlfZkZIa2V2b0JuRnR3IiwiZW1haWwiOiJ0a2d1c2VyQGNvcnAubG9jYWwiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiZ3JvdXBzIjpbIkFkbWluaXN0cmF0b3JzIiwidGtnIl0sIm5hbWUiOiJ0a2cgdXNlciJ9.IVhlQio_QhGOdApBx2C7u6tYEn3o7HJsY2tViiuF3BhWU6F4CeNwiKsP1iyx9NF238m09StzLVAr4aDVeV0b1vFLngo9feR1ClDWS74MqEHUB6FYfEjFmkNGQAMouTO0myBU4uTjJTnoSGJtraMK7ykajKfhQ8mJr_IVlTTd8Q2MxP9Wg8o_b67N9xzl0Y6F9NVBt0otn7hp7u7OQ3czX4Q10_QxR-QYj0XzNig9Lurg7Q1lzDi7CDzNujk9eEwm9CqGUTKhKgvq7GOk-_InsVm5zlhwXqL2BsNo0aaOo-zmc_RXLhJBgqOQgpawt1xvlRyGou9GUUNJEwCsK6lA5A' \
--auth-provider-arg='idp-certificate-authority-data=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKVENDQWcyZ0F3SUJBZ0lSQVBCZkkxMDVhc2NoOWZiRGJ6enNReTB3RFFZSktvWklodmNOQVFFTEJRQXcKSXpFUE1BMEdBMVVFQ2hNR2RtMTNZWEpsTVJBd0RnWURWUVFERXdkMGEyY3RaR1Y0TUI0WERUSXdNRFV5T0RJdwpNVEV6TUZvWERUSTVNRFV5TmpJd01URXpNRm93SXpFUE1BMEdBMVVFQ2hNR2RtMTNZWEpsTVJBd0RnWURWUVFECkV3ZDBhMmN0WkdWNE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdWJybXVLVVEKSDZqZkRiREFxdVU1YUVYRUdaUnIvbkpHUWNzVTF2bldMYUg1WTRvVUFxRDIwYVgrUTBvT0tUN2lScWJESDFOagplWW9wSXJCS0czZHhiOTJVNkJLYm5yVWt5TGJBbkxnMTJtdzd4NEowdUNwaHpvOG4xenQxQWd5QVdRaG5rQzVSCk9mR052ZHJ5dk56MEZpbUVEVzdBTDdXRDRrSDJOUCt5WnNDUUF0ZWhmYVNuTmNLZ2pLTWFpUmhGN3J3SEpUMFkKOFdFTXBwTkdjeVpPcGRweEVad2dUbThHRk1uR1BGaEY4RmJNTnNCQkNNZ2pTeUVuTEtiWGZUY2hTUC9iN1hQVAp6NkIzcGowYXZ2R2FxQ3ZxRCtNWWMvcjd0ZXlsQ1pDU0dGQit5L3YyZnAvZG92dlUwdW5VVkV4MGlaTmdMOVVTCmlhenpsN1hQZ21oRVhRSURBUUFCbzFRd1VqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0RBWURWUjBUQVFIL0JBSXcKQURBeUJnTlZIUkVFS3pBcGdnZDBhMmN0WkdWNGdoSjBhMmN0WkdWNExtTnZjbkF1Ykc5allXeUhCTUNvWkdXSApCQW9vRGdBd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFMakJwU2JaaVpQdTJPS2lnbnJNS2pVWkxKTTM2UWFMCm5XWFUwQkh2Qm1PbXV6Q3hZelV1d2RRZkFzWHVnd3cyQlprN1ZBNzM3VExTQy9YaDJHWVQvRXpyRHZVNXB1YVcKQm5aNlluRjhxZVA1QXpVcEtaYnArS3VjNlBiNDAvc1RScHFGanpuZU85ZnVkdmlFczNlcDJnZ2kxWUljRlJSNwpJUHhXeGsveVZTMkxZY0dvWmx1bUhpMzE1ZktpNWVCaVp3enhHaFZDTW92Q25MblRKcFU3dmt6cXhoaUtSc1R0CmJ3N2Y3VFZXQk5rUTdyTWJYeEY1SDdhZkhaaTV5THE3WFMrNmw2TU42Y25TaEVKUlg1enZWQUd3Y1Q0ajREN0EKa1hReVpIaUc1TTJ1SkszcjFFZDYzekJ2UU9WbWZrRVJxQ3drclM0aTJDT0xtaDZYaWVQMS9uQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo='
kubectl config set-context tkguser --cluster=vsphere-test --user=tkguser@corp.local@vsphere-test
kubectl config use-context tkguser
rm ca-vsphere-test.pem
Create a clusterrolebinding specification for an AD group that the AD user is a member of (Administrators is used here; the groups claim in the id-token above shows that tkguser belongs to both Administrators and tkg):
echo "
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: Administrators
subjects:
- kind: Group
  name: Administrators
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole #this must be Role or ClusterRole
  name: cluster-admin # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io" > tkgadmin-clusterrolebinding.yaml
Switch your context back to the workload cluster:
kubectl config use-context vsphere-test-admin@vsphere-test
Switched to context "vsphere-test-admin@vsphere-test".
Create the clusterrolebinding in the workload cluster:
kubectl create -f tkgadmin-clusterrolebinding.yaml
Switch your context to the tkguser context in the workload cluster and validate that the clusterrolebinding is allowing access:
kubectl config use-context tkguser
Switched to context "tkguser".
kubectl get nodes
NAME STATUS ROLES AGE VERSION
vsphere-test-control-plane-k55hq Ready master 25h v1.18.2+vmware.1
vsphere-test-md-0-78dc4b86-jflhn Ready <none> 25h v1.18.2+vmware.1