TKG has included extensions in a manifest bundle since the 1.0 release, but with 1.2 they are much more tightly integrated and there are more of them. You’ll notice when you download the TKG CLI bundle that several other utilities are now included. These are part of the Carvel open-source project and are used when working with extensions.
ytt – YAML templating tool
kapp – Kubernetes applications CLI
kbld – Kubernetes builder
Detailed instructions for configuring these binaries can be found in Configuring and Managing Tanzu Kubernetes Grid Shared Services, but at a high level you’ll want to extract them all, rename them to a “short” name (remove everything after the first – character), make them executable if you’re running Linux, and copy them to a location in your path.
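For example, on a Linux jumpbox, preparing the ytt binary (the other utilities follow the same pattern) might look like the following; the exact file name and version in your bundle will differ, so treat this as a sketch rather than a copy/paste recipe:
gunzip ytt-linux-amd64-v0.30.0+vmware.1.gz        # hypothetical file name - check your download
mv ytt-linux-amd64-v0.30.0+vmware.1 ytt           # "short" name: drop everything after the first -
chmod +x ytt
sudo mv ytt /usr/local/bin/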
To start out, we can build a management cluster in a very similar fashion to how we’ve done it in the past. A couple of things worth mentioning here:
- The --vsphere-controlplane-endpoint-ip value is mandatory and provides a stable virtual IP (VIP) to be used when accessing the cluster via kubectl commands. This address used to be handled by HA Proxy but is now the responsibility of kube-vip, newly introduced in 1.2.
- We can now choose which CNI to use. It used to be that Calico was the only choice but, as you can see from the --cni parameter, we’re using Antrea for this one. Antrea is now the default but you can still use Calico if you prefer.
tkg init -i vsphere --vsphere-controlplane-endpoint-ip 192.168.110.101 -p dev --ceip-participation true --name tkg-mgmt --cni antrea --deploy-tkg-on-vSphere7 -v 6
Resuming the target cluster
Set Cluster.Spec.Paused Paused=false Cluster="tkg-mgmt" Namespace="tkg-system"
Context set for management cluster tkg-mgmt as 'tkg-mgmt-admin@tkg-mgmt'.
Deleting kind cluster: tkg-kind-btoi6ok09c6sg0nhg4q0
Management cluster created!
You can now create your first workload cluster by running the following:
tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]
tkg get mc
MANAGEMENT-CLUSTER-NAME CONTEXT-NAME STATUS
tkg-mgmt * tkg-mgmt-admin@tkg-mgmt Success
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* tkg-mgmt-admin@tkg-mgmt tkg-mgmt tkg-mgmt-admin
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tkg-mgmt-control-plane-snh9c Ready master 10m v1.19.1+vmware.2 192.168.100.100 192.168.100.100 VMware Photon OS/Linux 4.19.145-2.ph3 containerd://1.3.4
tkg-mgmt-md-0-6f86f6876f-s8g97 Ready <none> 7m30s v1.19.1+vmware.2 192.168.100.101 192.168.100.101 VMware Photon OS/Linux 4.19.145-2.ph3 containerd://1.3.4
Now that our management cluster is up and running and we’ve successfully connected to it, we can deploy MetalLB to provide LoadBalancer services, per the steps noted in my previous blog post, How to Deploy MetalLB with BGP in a Tanzu Kubernetes Grid 1.1 Cluster.
Now we’re ready to create a workload cluster:
tkg create cluster tkg-wld -p dev --vsphere-controlplane-endpoint-ip 192.168.110.102
tkg get credentials tkg-wld
Credentials of workload cluster 'tkg-wld' have been saved
You can now access the cluster by running 'kubectl config use-context tkg-wld-admin@tkg-wld'
Before we switch contexts to the new cluster, we need to label this workload cluster as a “shared services” cluster.
kubectl label cluster tkg-wld cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
cluster.cluster.x-k8s.io/tkg-wld labeled
And you should now see that this cluster has the “tanzu-services” role assigned to it.
tkg get cluster --include-management-cluster
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
tkg-wld default running 1/1 1/1 v1.19.1+vmware.2 tanzu-services
tkg-vsphere-mgmt tkg-system running 1/1 1/1 v1.19.1+vmware.2 management
Now we can switch over to our workload cluster.
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* tkg-mgmt-admin@tkg-mgmt tkg-mgmt tkg-mgmt-admin
tkg-wld-admin@tkg-wld tkg-wld tkg-wld-admin
kubectl config use-context tkg-wld-admin@tkg-wld
Switched to context "tkg-wld-admin@tkg-wld".
And just as with the management cluster, you’ll want to deploy MetalLB here as well. Be sure to use a unique address range and update the BGP peering as appropriate (if you’re using BGP).
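As a quick refresher on what that configuration looks like, MetalLB reads a ConfigMap in the metallb-system namespace. The sketch below shows the simpler layer-2 mode with a placeholder address pool; the earlier post covers the BGP variant actually used here:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.40.14.32-10.40.14.63
EOF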
Download and extract the extensions bundle file. You should end up with a tkg-extensions-v1.2.0+vmware.1 directory whose contents should look like:
authentication
bom
cert-manager
common
extensions
ingress
logging
monitoring
registry
We’ll need to install cert-manager, the TMC extension manager and the Kapp controller. While cert-manager has been around in TKG since the 1.0 version, the TMC extension manager and the Kapp controller are new components in 1.2. Don’t let the TMC-branded extension manager name make you think that we’re installing TMC here…TKG and TMC simply both use the same extension manager. The TMC extension manager is a new tool for managing extensions/add-ons in our TKG clusters, and the Kapp controller is part of the previously mentioned Carvel open-source tools, providing an easy means of controlling which apps are running in our cluster. You can read more about the Kapp controller at https://github.com/k14s/kapp-controller.
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/cert-manager/
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/tmc-extension-manager.yaml
namespace/vmware-system-tmc created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/agents.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensions.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionresourceowners.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionintegrations.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionconfigs.intents.tmc.cloud.vmware.com created
serviceaccount/extension-manager created
clusterrole.rbac.authorization.k8s.io/extension-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/extension-manager-rolebinding created
service/extension-manager-service created
deployment.apps/extension-manager created
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/kapp-controller.yaml
serviceaccount/kapp-controller-sa created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/apps.kappctrl.k14s.io created
deployment.apps/kapp-controller created
clusterrole.rbac.authorization.k8s.io/kapp-controller-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/kapp-controller-cluster-role-binding created
We can validate that cert-manager, the extension manager and the Kapp controller are working by checking the pods that are present in the cert-manager and vmware-system-tmc namespaces:
kubectl -n cert-manager get po
NAME READY STATUS RESTARTS AGE
cert-manager-69877b5f94-k8bk8 1/1 Running 0 155m
cert-manager-cainjector-577b45fb7c-h8zcr 1/1 Running 2 14h
cert-manager-webhook-55c5cd4dcb-nxb8q 1/1 Running 0 14h
kubectl -n vmware-system-tmc get po
NAME READY STATUS RESTARTS AGE
extension-manager-67645f55d8-d2vws 1/1 Running 0 14h
kapp-controller-cd55bbd6b-v87cg 1/1 Running 0 14h
With the building blocks for our extensions installed, we’re ready to actually get started. If you take a look in the tkg-extensions-v1.2.0+vmware.1/extensions folder, you’ll see that there are a number of items that we could configure:
extensions/logging/fluent-bit/fluent-bit-extension.yaml
extensions/monitoring/grafana/grafana-extension.yaml
extensions/monitoring/prometheus/prometheus-extension.yaml
extensions/registry/harbor/harbor-extension.yaml
extensions/authentication/gangway/gangway-extension.yaml
extensions/authentication/dex/dex-extension.yaml
extensions/ingress/contour/contour-extension.yaml
I’m most excited to use this new extension mechanism to install Harbor. You can see from my earlier post, How to install Harbor 2.0 on a Photon OS VM, that installing Harbor can be an involved process. There are definitely easier ways to do it but we now have a means that is much more tightly integrated with TKG itself.
Harbor in TKG 1.2 requires an ingress for access, so we need to deploy an ingress controller first. Luckily, TKG 1.2 ships with Contour and it’s part of the new extension framework.
The first thing we’ll do is to create the tanzu-system-ingress namespace and related role objects needed before we actually deploy Contour. This is all contained in a single yaml file so most of the work is done for us already.
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/ingress/contour/namespace-role.yaml
namespace/tanzu-system-ingress created
serviceaccount/contour-extension-sa created
role.rbac.authorization.k8s.io/contour-extension-role created
rolebinding.rbac.authorization.k8s.io/contour-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/contour-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/contour-extension-cluster-rolebinding created
There is a sample file for fine-tuning the Contour configuration at tkg-extensions-v1.2.0+vmware.1/extensions/ingress/contour/vsphere/contour-data-values.yaml.example. You can see that this file is essentially empty:
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
contour:
  image:
    repository: registry.tkg.vmware.run
envoy:
  image:
    repository: registry.tkg.vmware.run
We’ll create a file similar to this one but with some extra information in it…primarily so that our Envoy pods are exposed via a LoadBalancer service instead of a NodePort service:
cat > contour-data-values.yaml <<EOF
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
contour:
  image:
    repository: registry.tkg.vmware.run
envoy:
  image:
    repository: registry.tkg.vmware.run
  service:
    type: LoadBalancer
EOF
Now we can create a secret in the tanzu-system-ingress namespace based on the data in this yaml file:
kubectl create secret generic contour-data-values --from-file=values.yaml=contour-data-values.yaml -n tanzu-system-ingress
secret/contour-data-values created
We can validate that the newly created secret has our custom values in it:
kubectl -n tanzu-system-ingress get secrets contour-data-values -o 'go-template={{ index .data "values.yaml"}}' | base64 -d
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
contour:
  image:
    repository: registry.tkg.vmware.run
envoy:
  image:
    repository: registry.tkg.vmware.run
  service:
    type: LoadBalancer
Now we can apply the Contour extension. You can examine the contents of the yaml file that we’ll be using but we won’t be making any changes to it:
apiVersion: clusters.tmc.cloud.vmware.com/v1alpha1
kind: Extension
metadata:
  name: contour
  namespace: tanzu-system-ingress
  annotations:
    tmc.cloud.vmware.com/managed: "false"
spec:
  description: contour
  version: "v1.8.1_vmware.1"
  name: contour
  namespace: tanzu-system-ingress
  deploymentStrategy:
    type: KUBERNETES_NATIVE
  objects: |
    apiVersion: kappctrl.k14s.io/v1alpha1
    kind: App
    metadata:
      name: contour
      annotations:
        tmc.cloud.vmware.com/orphan-resource: "true"
    spec:
      syncPeriod: 5m
      serviceAccountName: contour-extension-sa
      fetch:
        - image:
            url: registry.tkg.vmware.run/tkg-extensions-templates:v1.2.0_vmware.1
      template:
        - ytt:
            ignoreUnknownComments: true
            paths:
              - tkg-extensions/common
              - tkg-extensions/ingress/contour
            inline:
              pathsFrom:
                - secretRef:
                    name: contour-data-values
      deploy:
        - kapp:
            rawOptions: ["--wait-timeout=5m"]
You should take note from this yaml file that what we’re deploying is an “extension”, an instance of a custom resource definition (CRD) that was created when we deployed the TMC extension manager.
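If you’re curious, you can list those CRDs to confirm they’re in place:
kubectl get crds | grep tmc.cloud.vmware.com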
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/ingress/contour/contour-extension.yaml
extension.clusters.tmc.cloud.vmware.com/contour configured
After a few seconds and maybe up to a minute later, you should see numerous resources in the tanzu-system-ingress namespace:
kubectl -n tanzu-system-ingress get all
NAME READY STATUS RESTARTS AGE
pod/contour-794785995b-85slg 0/1 ErrImagePull 0 14s
pod/contour-794785995b-brw76 0/1 ImagePullBackOff 0 25s
pod/envoy-7v56m 0/2 Init:ErrImagePull 0 3s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/contour ClusterIP 100.70.42.53 <none> 8001/TCP 2m46s
service/envoy LoadBalancer 100.66.70.168 10.40.14.32 80:31544/TCP,443:30602/TCP 2m43s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/envoy 1 1 0 1 0 <none> 2m46s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/contour 0/2 2 0 2m43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/contour-794785995b 2 2 0 2m43s
You can see that the envoy service is a LoadBalancer and has the IP address of 10.40.14.32. We’ll be using that later when accessing Harbor.
We can also query for extensions in our cluster and see that the contour extension is installed:
kubectl -n tanzu-system-ingress get extensions
NAME STATE HEALTH VERSION
contour 3
The last check we can do is against the contour “app”, an instance of another CRD (this one installed with the Kapp controller):
kubectl -n tanzu-system-ingress get app contour
NAME DESCRIPTION SINCE-DEPLOY AGE
contour Reconcile succeeded 18s 23m
Running the same command with -o yaml added, we can see even more detail (output is somewhat truncated):
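For reference, the full command is:
kubectl -n tanzu-system-ingress get app contour -o yaml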
inspect:
exitCode: 0
stdout: |-
Target cluster 'https://100.64.0.1:443'
Resources in app 'contour-ctrl'
Namespace Name Kind Owner Conds. Rs Ri Age
(cluster) contour ClusterRole kapp - ok - 28m
^ contour ClusterRoleBinding kapp - ok - 28m
^ httpproxies.projectcontour.io CustomResourceDefinition kapp 2/2 t ok - 28m
^ tanzu-system-ingress Namespace kapp - ok - 58m
^ tlscertificatedelegations.projectcontour.io CustomResourceDefinition kapp 2/2 t ok - 28m
tanzu-system-ingress contour Deployment kapp 2/2 t ok - 28m
^ contour Endpoints cluster - ok - 28m
^ contour Service kapp - ok - 28m
^ contour ServiceAccount kapp - ok - 28m
^ contour-77b7d7f546 ReplicaSet cluster - ok - 22m
^ contour-77b7d7f546-px47w Pod cluster 4/4 t ok - 22m
^ contour-77b7d7f546-zjdpc Pod cluster 4/4 t ok - 22m
^ contour-794785995b ReplicaSet cluster - ok - 28m
^ contour-ca Certificate kapp 1/1 t ok - 28m
^ contour-ca-issuer Issuer kapp 1/1 t ok - 28m
^ contour-ca-t8prc CertificateRequest cluster 1/1 t ok - 28m
^ contour-cert Certificate kapp 1/1 t ok - 28m
^ contour-cert-8l72d CertificateRequest cluster 1/1 t ok - 28m
^ contour-selfsigned-ca-issuer Issuer kapp 1/1 t ok - 28m
^ contour-ver-1 ConfigMap kapp - ok - 28m
^ envoy DaemonSet kapp - ok - 28m
^ envoy Endpoints cluster - ok - 28m
^ envoy Service kapp - ok - 28m
^ envoy ServiceAccount kapp - ok - 28m
^ envoy-6587f4b7f9 ControllerRevision cluster - ok - 22m
^ envoy-87b769bb7 ControllerRevision cluster - ok - 28m
^ envoy-cert Certificate kapp 1/1 t ok - 28m
^ envoy-cert-p4bdc CertificateRequest cluster 1/1 t ok - 28m
^ envoy-pmmc9 Pod cluster 4/4 t ok - 22m
Rs: Reconcile state
Ri: Reconcile information
29 resources
Succeeded
With Contour out of the way, we can now move on to deploying Harbor. You’ll notice that the process is fairly similar, as both make use of the new extension framework.
As with Contour, the first step is to create a new namespace and related role objects. For Harbor, the namespace is tanzu-system-registry.
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/registry/harbor/namespace-role.yaml
namespace/tanzu-system-registry created
serviceaccount/harbor-extension-sa created
role.rbac.authorization.k8s.io/harbor-extension-role created
rolebinding.rbac.authorization.k8s.io/harbor-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/harbor-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/harbor-extension-cluster-rolebinding created
There is a harbor-data-values.yaml.example file that has a fair amount of information in it (unlike the contour-data-values.yaml.example file), but we don’t need to change anything in here and can use it as-is. I’d post the contents here but it’s a couple hundred lines long. We’ll make a copy of the example file and use it in conjunction with the generate-passwords.sh script.
cp tkg-extensions-v1.2.0+vmware.1/extensions/registry/harbor/harbor-data-values.yaml.example harbor-data-values.yaml
bash tkg-extensions-v1.2.0+vmware.1/extensions/registry/harbor/generate-passwords.sh harbor-data-values.yaml
Successfully generated random passwords and secrets in harbor-data-values.yaml
There are a number of fields that get set in here, but the two that we are most concerned with are the harborAdminPassword and hostname fields.
egrep "harborAdminPassword|hostname" harbor-data-values.yaml
hostname: core.harbor.domain
harborAdminPassword: QEwQf536w5SCI9vE
We’ll want to set the password to something a little easier to use and set the hostname as appropriate. For this lab, the hostname should be harbor.corp.tanzu and the password will be VMware1!.
sed -i 's/core.harbor.domain/harbor.corp.tanzu/' harbor-data-values.yaml
sed -i 's/^harborAdminPassword.*/harborAdminPassword: VMware1!/' harbor-data-values.yaml
This is as good a time as any to create a DNS record for harbor.corp.tanzu. It will need to map to 10.40.14.32, since we’ll be accessing Harbor via an ingress and we know that the LoadBalancer IP address for the envoy service is 10.40.14.32.
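If you don’t have a DNS server handy in your lab, a crude but quick workaround is a hosts-file entry on the machine you’ll be working from (the notary name is included since Harbor publishes that FQDN as well):
echo "10.40.14.32 harbor.corp.tanzu notary.harbor.corp.tanzu" | sudo tee -a /etc/hosts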
Now we get to actually deploying Harbor. First, we’ll use the harbor-data-values.yaml file to create a secret with all of the configuration information that Harbor will need.
kubectl -n tanzu-system-registry create secret generic harbor-data-values --from-file=harbor-data-values.yaml
secret/harbor-data-values created
You can inspect the secret that was created in the same fashion as the contour-data-values secret, but there is a lot more output (output is severely truncated):
kubectl -n tanzu-system-registry get secrets harbor-data-values -o 'go-template={{ index .data "harbor-data-values.yaml"}}' | base64 -d
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
# Docker images setting
image:
  repository: projects-stg.registry.vmware.com/harbor
  tag: v2.0.2_vmware.1
  pullPolicy: IfNotPresent
# The namespace to install Harbor
namespace: tanzu-system-registry
# The FQDN for accessing Harbor admin UI and Registry service.
hostname: harbor.corp.tanzu
# The network port of the Envoy service in Contour or other Ingress Controller.
port:
  https: 443
# [Optional] The certificate for the ingress if you want to use your own TLS certificate.
# We will issue the certificate by cert-manager when it's empty.
tlsCertificate:
  # [Required] the certificate
  tls.crt:
  # [Required] the private key
  tls.key:
  # [Optional] the certificate of CA, this enables the download
  # link on portal to download the certificate of CA
  ca.crt:
# Use contour http proxy instead of the ingress when it's true
enableContourHttpProxy: true
# [Required] The initial password of Harbor admin.
harborAdminPassword: VMware1!
And now we can install the Harbor extension itself. We don’t need to make any changes to the supplied yaml file but you can take a look at it to better understand what will be deployed.
apiVersion: clusters.tmc.cloud.vmware.com/v1alpha1
kind: Extension
metadata:
  name: harbor
  namespace: tanzu-system-registry
  annotations:
    tmc.cloud.vmware.com/managed: "false"
spec:
  description: harbor
  version: "v2.0.2_vmware.1"
  name: harbor
  namespace: tanzu-system-registry
  deploymentStrategy:
    type: KUBERNETES_NATIVE
  objects: |
    apiVersion: kappctrl.k14s.io/v1alpha1
    kind: App
    metadata:
      name: harbor
      annotations:
        tmc.cloud.vmware.com/orphan-resource: "true"
    spec:
      syncPeriod: 5m
      serviceAccountName: harbor-extension-sa
      fetch:
        - image:
            url: registry.tkg.vmware.run/tkg-extensions-templates:v1.2.0_vmware.1
      template:
        - ytt:
            ignoreUnknownComments: true
            paths:
              - tkg-extensions/common
              - tkg-extensions/registry/harbor
            inline:
              pathsFrom:
                - secretRef:
                    name: harbor-data-values
      deploy:
        - kapp:
            rawOptions: ["--wait-timeout=5m"]
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/registry/harbor/harbor-extension.yaml
extension.clusters.tmc.cloud.vmware.com/harbor created
There will be loads of resources getting created in the tanzu-system-registry namespace:
kubectl -n tanzu-system-registry get all
NAME READY STATUS RESTARTS AGE
pod/harbor-clair-55d7f77cbb-h2kbx 2/2 Running 4 5m41s
pod/harbor-core-65c8645b68-gzcpx 1/1 Running 2 5m41s
pod/harbor-database-0 1/1 Running 0 5m40s
pod/harbor-jobservice-659db5485-x9mmf 1/1 Running 1 5m40s
pod/harbor-notary-server-7b89b45986-87pgx 1/1 Running 3 5m40s
pod/harbor-notary-signer-58b78f6886-6m2nt 1/1 Running 3 5m40s
pod/harbor-portal-7d54fd646d-vr8gf 1/1 Running 0 5m40s
pod/harbor-redis-0 1/1 Running 0 5m39s
pod/harbor-registry-785b5fc7f7-s8czt 2/2 Running 0 5m39s
pod/harbor-trivy-0 1/1 Running 0 5m39s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/harbor-clair ClusterIP 100.69.234.89 <none> 8443/TCP 5m41s
service/harbor-core ClusterIP 100.64.168.168 <none> 443/TCP 5m40s
service/harbor-database ClusterIP 100.69.2.22 <none> 5432/TCP 5m40s
service/harbor-jobservice ClusterIP 100.71.158.49 <none> 443/TCP 5m40s
service/harbor-notary-server ClusterIP 100.68.194.185 <none> 4443/TCP 5m40s
service/harbor-notary-signer ClusterIP 100.66.228.111 <none> 7899/TCP 5m40s
service/harbor-portal ClusterIP 100.68.155.170 <none> 443/TCP 5m39s
service/harbor-redis ClusterIP 100.69.227.115 <none> 6379/TCP 5m39s
service/harbor-registry ClusterIP 100.64.91.241 <none> 5443/TCP,8443/TCP 5m38s
service/harbor-trivy ClusterIP 100.68.99.181 <none> 8443/TCP 5m43s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/harbor-clair 1/1 1 1 5m41s
deployment.apps/harbor-core 1/1 1 1 5m41s
deployment.apps/harbor-jobservice 1/1 1 1 5m40s
deployment.apps/harbor-notary-server 1/1 1 1 5m40s
deployment.apps/harbor-notary-signer 1/1 1 1 5m40s
deployment.apps/harbor-portal 1/1 1 1 5m40s
deployment.apps/harbor-registry 1/1 1 1 5m40s
NAME DESIRED CURRENT READY AGE
replicaset.apps/harbor-clair-55d7f77cbb 1 1 1 5m41s
replicaset.apps/harbor-core-65c8645b68 1 1 1 5m41s
replicaset.apps/harbor-jobservice-659db5485 1 1 1 5m40s
replicaset.apps/harbor-notary-server-7b89b45986 1 1 1 5m40s
replicaset.apps/harbor-notary-signer-58b78f6886 1 1 1 5m40s
replicaset.apps/harbor-portal-7d54fd646d 1 1 1 5m40s
replicaset.apps/harbor-registry-785b5fc7f7 1 1 1 5m39s
NAME READY AGE
statefulset.apps/harbor-database 1/1 5m40s
statefulset.apps/harbor-redis 1/1 5m40s
statefulset.apps/harbor-trivy 1/1 5m39s
There are also a few persistentvolumeclaims and persistentvolumes that are created to support both the Harbor database and image store.
kubectl -n tanzu-system-registry get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-8ac807bc-ab13-43e6-ac0d-743cbed818ce 1Gi RWO Delete Bound tanzu-system-registry/data-harbor-redis-0 standard 22m
persistentvolume/pvc-9dcfc990-a8b8-4b0a-b3cf-a2c71260bb46 10Gi RWO Delete Bound tanzu-system-registry/harbor-registry standard 23m
persistentvolume/pvc-c2d501ec-cd89-4faf-adce-51e5bf463a5c 5Gi RWO Delete Bound tanzu-system-registry/data-harbor-trivy-0 standard 22m
persistentvolume/pvc-e6a941dd-6b60-4192-9595-6cba5ab49bf7 1Gi RWO Delete Bound tanzu-system-registry/harbor-jobservice standard 23m
persistentvolume/pvc-ee83206e-4e52-4751-ae46-149137f8bd57 1Gi RWO Delete Bound tanzu-system-registry/database-data-harbor-database-0 standard 22m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/data-harbor-redis-0 Bound pvc-8ac807bc-ab13-43e6-ac0d-743cbed818ce 1Gi RWO standard 23m
persistentvolumeclaim/data-harbor-trivy-0 Bound pvc-c2d501ec-cd89-4faf-adce-51e5bf463a5c 5Gi RWO standard 23m
persistentvolumeclaim/database-data-harbor-database-0 Bound pvc-ee83206e-4e52-4751-ae46-149137f8bd57 1Gi RWO standard 23m
persistentvolumeclaim/harbor-jobservice Bound pvc-e6a941dd-6b60-4192-9595-6cba5ab49bf7 1Gi RWO standard 23m
persistentvolumeclaim/harbor-registry Bound pvc-9dcfc990-a8b8-4b0a-b3cf-a2c71260bb46 10Gi RWO standard 23m
As with Contour, we can look deeper into the deployed extension and app to get a better understanding of what has been deployed and its current state.
kubectl -n tanzu-system-registry get extensions
NAME STATE HEALTH VERSION
harbor 3
kubectl -n tanzu-system-registry get app harbor
NAME DESCRIPTION SINCE-DEPLOY AGE
harbor Reconcile succeeded 4m30s 25m
kubectl -n tanzu-system-registry get app harbor -o yaml
inspect:
exitCode: 0
stdout: |-
Target cluster 'https://100.64.0.1:443'
Resources in app 'harbor-ctrl'
Namespace Name Kind Owner Conds. Rs Ri Age
(cluster) tanzu-system-registry Namespace kapp - ok - 1h
tanzu-system-registry data-harbor-redis-0 PersistentVolumeClaim cluster - ok - 21m
^ data-harbor-trivy-0 PersistentVolumeClaim cluster - ok - 21m
^ database-data-harbor-database-0 PersistentVolumeClaim cluster - ok - 21m
^ harbor-ca Certificate kapp 1/1 t ok - 21m
^ harbor-ca-issuer Issuer kapp 1/1 t ok - 21m
^ harbor-clair Deployment kapp 2/2 t ok - 21m
^ harbor-clair Endpoints cluster - ok - 21m
^ harbor-clair Service kapp - ok - 21m
^ harbor-clair-55d7f77cbb ReplicaSet cluster - ok - 21m
^ harbor-clair-55d7f77cbb-h2kbx Pod cluster 4/4 t ok - 21m
^ harbor-clair-internal-cert Certificate kapp 1/1 t ok - 21m
^ harbor-clair-ver-1 Secret kapp - ok - 21m
^ harbor-core Deployment kapp 2/2 t ok - 21m
^ harbor-core Endpoints cluster - ok - 21m
^ harbor-core Service kapp - ok - 21m
^ harbor-core-65c8645b68 ReplicaSet cluster - ok - 21m
^ harbor-core-65c8645b68-gzcpx Pod cluster 4/4 t ok - 21m
^ harbor-core-internal-cert Certificate kapp 1/1 t ok - 21m
^ harbor-core-ver-1 ConfigMap kapp - ok - 21m
^ harbor-core-ver-1 Secret kapp - ok - 21m
^ harbor-database Endpoints cluster - ok - 21m
^ harbor-database Service kapp - ok - 21m
^ harbor-database StatefulSet kapp - ok - 21m
^ harbor-database-0 Pod cluster 4/4 t ok - 21m
^ harbor-database-6d86d45b5b ControllerRevision cluster - ok - 21m
^ harbor-database-ver-1 Secret kapp - ok - 21m
^ harbor-httpproxy HTTPProxy kapp - ok - 21m
^ harbor-httpproxy-notary HTTPProxy kapp - ok - 21m
^ harbor-jobservice ConfigMap kapp - ok - 21m
^ harbor-jobservice Deployment kapp 2/2 t ok - 21m
^ harbor-jobservice Endpoints cluster - ok - 21m
^ harbor-jobservice PersistentVolumeClaim kapp - ok - 21m
^ harbor-jobservice Service kapp - ok - 21m
^ harbor-jobservice-659db5485 ReplicaSet cluster - ok - 21m
^ harbor-jobservice-659db5485-x9mmf Pod cluster 4/4 t ok - 21m
^ harbor-jobservice-env-ver-1 ConfigMap kapp - ok - 21m
^ harbor-jobservice-internal-cert Certificate kapp 1/1 t ok - 21m
^ harbor-jobservice-ver-1 Secret kapp - ok - 21m
^ harbor-notary-server Deployment kapp 2/2 t ok - 21m
^ harbor-notary-server Endpoints cluster - ok - 21m
^ harbor-notary-server Service kapp - ok - 21m
^ harbor-notary-server-7b89b45986 ReplicaSet cluster - ok - 21m
^ harbor-notary-server-7b89b45986-87pgx Pod cluster 4/4 t ok - 21m
^ harbor-notary-server-ver-1 Secret kapp - ok - 21m
^ harbor-notary-signer Deployment kapp 2/2 t ok - 21m
^ harbor-notary-signer Endpoints cluster - ok - 21m
^ harbor-notary-signer Service kapp - ok - 21m
^ harbor-notary-signer-58b78f6886 ReplicaSet cluster - ok - 21m
^ harbor-notary-signer-58b78f6886-6m2nt Pod cluster 4/4 t ok - 21m
^ harbor-notary-signer-cert Certificate kapp 1/1 t ok - 21m
^ harbor-portal ConfigMap kapp - ok - 21m
^ harbor-portal Deployment kapp 2/2 t ok - 21m
^ harbor-portal Endpoints cluster - ok - 21m
^ harbor-portal Service kapp - ok - 21m
^ harbor-portal-7d54fd646d ReplicaSet cluster - ok - 21m
^ harbor-portal-7d54fd646d-vr8gf Pod cluster 4/4 t ok - 21m
^ harbor-portal-internal-cert Certificate kapp 1/1 t ok - 21m
^ harbor-redis Endpoints cluster - ok - 21m
^ harbor-redis Service kapp - ok - 21m
^ harbor-redis StatefulSet kapp - ok - 21m
^ harbor-redis-0 Pod cluster 4/4 t ok - 21m
^ harbor-redis-bc4b794cd ControllerRevision cluster - ok - 21m
^ harbor-registry Deployment kapp 2/2 t ok - 21m
^ harbor-registry Endpoints cluster - ok - 21m
^ harbor-registry PersistentVolumeClaim kapp - ok - 21m
^ harbor-registry Service kapp - ok - 21m
^ harbor-registry-785b5fc7f7 ReplicaSet cluster - ok - 21m
^ harbor-registry-785b5fc7f7-s8czt Pod cluster 4/4 t ok - 21m
^ harbor-registry-internal-cert Certificate kapp 1/1 t ok - 21m
^ harbor-registry-ver-1 ConfigMap kapp - ok - 21m
^ harbor-registry-ver-1 Secret kapp - ok - 21m
^ harbor-self-signed-ca-issuer Issuer kapp 1/1 t ok - 21m
^ harbor-tls-cert Certificate kapp 1/1 t ok - 21m
^ harbor-token-service-cert Certificate kapp 1/1 t ok - 21m
^ harbor-trivy Endpoints cluster - ok - 21m
^ harbor-trivy Service kapp - ok - 21m
^ harbor-trivy StatefulSet kapp - ok - 21m
^ harbor-trivy-0 Pod cluster 4/4 t ok - 21m
^ harbor-trivy-596bc6c47c ControllerRevision cluster - ok - 21m
^ harbor-trivy-internal-cert Certificate kapp 1/1 t ok - 21m
^ harbor-trivy-ver-1 Secret kapp - ok - 21m
Rs: Reconcile state
Ri: Reconcile information
82 resources
Succeeded
One thing that threw me was that I was expecting to see an ingress resource created that would provide access into Harbor via Contour/Envoy…but no ingresses were created, even though I could get to harbor.corp.tanzu. What I found instead was a Contour-specific resource called an httpproxy:
kubectl -n tanzu-system-registry get httpproxy
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
harbor-httpproxy harbor.corp.tanzu harbor-tls valid valid HTTPProxy
harbor-httpproxy-notary notary.harbor.corp.tanzu harbor-tls valid valid HTTPProxy
We can take a closer look at the harbor-httpproxy resource and confirm that it is set up to provide ingress for Harbor (output is truncated).
kubectl -n tanzu-system-registry describe httpproxy harbor-httpproxy
Spec:
Routes:
Conditions:
Prefix: /
Services:
Name: harbor-portal
Port: 443
Conditions:
Prefix: /api/
Services:
Name: harbor-core
Port: 443
Conditions:
Prefix: /service/
Services:
Name: harbor-core
Port: 443
Conditions:
Prefix: /v2/
Services:
Name: harbor-core
Port: 443
Conditions:
Prefix: /chartrepo/
Services:
Name: harbor-core
Port: 443
Conditions:
Prefix: /c/
Services:
Name: harbor-core
Port: 443
Virtualhost:
Fqdn: harbor.corp.tanzu
Tls:
Secret Name: harbor-tls
Status:
Current Status: valid
Description: valid HTTPProxy
Load Balancer:
Ingress:
Ip: 10.40.14.32
We’ll need to get a copy of the Harbor CA certificate so that we can make secure connections to it. There are a number of ways of getting this certificate; this example queries the harbor-ca-key-pair secret that is associated with the harbor-ca certificate. You can see the process behind getting this certificate in the following example:
kubectl -n tanzu-system-registry get certificate
NAME READY SECRET AGE
harbor-ca True harbor-ca-key-pair 36m
harbor-clair-internal-cert True harbor-clair-internal-tls 36m
harbor-core-internal-cert True harbor-core-internal-tls 36m
harbor-jobservice-internal-cert True harbor-jobservice-internal-tls 36m
harbor-notary-signer-cert True harbor-notary-signer 36m
harbor-portal-internal-cert True harbor-portal-internal-tls 36m
harbor-registry-internal-cert True harbor-registry-internal-tls 36m
harbor-tls-cert True harbor-tls 36m
harbor-token-service-cert True harbor-token-service 36m
harbor-trivy-internal-cert True harbor-trivy-internal-tls 36m
kubectl -n tanzu-system-registry get secret harbor-ca-key-pair -o 'go-template={{ index .data "ca.crt" }}' | base64 -d > ca.crt
With the Harbor CA certificate handy, we can put it in place on the local Linux system so that we can use the new Harbor registry.
sudo mkdir -p /etc/docker/certs.d/harbor.corp.tanzu
sudo mv ca.crt /etc/docker/certs.d/harbor.corp.tanzu/
sudo systemctl restart docker
Before we can access Harbor, we need to install a new component called the TKG Connectivity API into the management cluster and configure an FQDN and virtual IP (VIP) address for Harbor.
Download and extract the connectivity bundle file. You should end up with a manifests directory whose contents should look like:
manifests/tanzu-registry/
manifests/tanzu-registry/certs.yaml
manifests/tanzu-registry/configmap.yaml
manifests/tanzu-registry/deployment.yaml
manifests/tanzu-registry/mutating-webhook-configuration.yaml
manifests/tanzu-registry/values.yaml
manifests/tkg-connectivity-operator/
manifests/tkg-connectivity-operator/deployment.yaml
manifests/tkg-connectivity-operator/values.yaml
The file that we’re most concerned with is the manifests/tanzu-registry/values.yaml file.
#@data/values
---
image: registry.tkg.vmware.run/tkg-connectivity/tanzu-registry-webhook:v1.2.0_vmware.2
imagePullPolicy: IfNotPresent
registry:
  # [REQUIRED] If "true", configures every Machine to enable connectivity to Harbor, do nothing otherwise.
  enabled:
  # [REQUIRED] If "true", override DNS for Harbor's FQDN by injecting entries into /etc/hosts of every machine, do nothing otherwise.
  dnsOverride:
  # [REQUIRED] FQDN used to connect Harbor
  fqdn:
  # [REQUIRED] the VIP used to connect to Harbor.
  vip:
  # [REQUIRED] If "true", iptable rules will be added to proxy connections to the VIP to the harbor cluster.
  # If "false", the IP address specified the 'vip' field should be a routable address in the network.
  bootstrapProxy:
  # [REQUIRED] The root CA containerd should trust in every cluster.
  rootCA:
We’ll obviously need to update this with the information relevant to our deployment. Since I have a resolvable FQDN for harbor.corp.tanzu, I can set dnsOverride to "false". I can also set the bootstrapProxy value to "false" since no proxy is needed. The rootCA value will be taken from the ca.crt file that was created earlier. The following is what this file looks like after editing:
#@data/values
---
image: registry.tkg.vmware.run/tkg-connectivity/tanzu-registry-webhook:v1.2.0_vmware.2
imagePullPolicy: IfNotPresent
registry:
  # [REQUIRED] If "true", configures every Machine to enable connectivity to Harbor, do nothing otherwise.
  enabled: "true"
  # [REQUIRED] If "true", override DNS for Harbor's FQDN by injecting entries into /etc/hosts of every machine, do nothing otherwise.
  dnsOverride: "false"
  # [REQUIRED] FQDN used to connect Harbor
  fqdn: "harbor.corp.tanzu"
  # [REQUIRED] the VIP used to connect to Harbor.
  vip: "10.40.14.0"
  # [REQUIRED] If "true", iptable rules will be added to proxy connections to the VIP to the harbor cluster.
  # If "false", the IP address specified the 'vip' field should be a routable address in the network.
  bootstrapProxy: "false"
  # [REQUIRED] The root CA containerd should trust in every cluster.
  rootCA: |
    -----BEGIN CERTIFICATE-----
    MIIDOzCCAiOgAwIBAgIQM8Oe9AWsu3SG5d7nRdXO2zANBgkqhkiG9w0BAQsFADAt
    MRcwFQYDVQQKEw5Qcm9qZWN0IEhhcmJvcjESMBAGA1UEAxMJSGFyYm9yIENBMB4X
    DTIwMTAxNTE3NTIzM1oXDTMwMTAxMzE3NTIzM1owLTEXMBUGA1UEChMOUHJvamVj
    dCBIYXJib3IxEjAQBgNVBAMTCUhhcmJvciBDQTCCASIwDQYJKoZIhvcNAQEBBQAD
    ggEPADCCAQoCggEBAMg9d0SR2Ej4++m11rZ/evOIHou9Yhe/7W/XNGbYk9WIfmV4
    zMrIC2OIhNkZ1nik6MmtL73Jzy75XiNpcblucO+IKNKG9+f5N1cAc9CLlIhl9uwW
    zDY42eVDfp9o5b0iXJuAOFcqI7pVc82hqHI+oOfogCbjKYE1aF6Jbr/9jK1wdABi
    oc1kco2bxwvZ8MIbD/NaIZKiGu7Aw+4qo0cdH0bJdB+Dh2JjeimJJl4HKcQp1hbX
    mZWPyLfZ9w/oH5QwGQNIgVuQLVr2hulxkOAa5fGWQlGYPTiXaI6F9VbhGuKWIjBl
    fpSEUiaWMbVmBBwOVQHnoKZDSC7Eh8DXqoeFMAECAwEAAaNXMFUwDgYDVR0PAQH/
    BAQDAgIEMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8E
    BTADAQH/MBMGA1UdEQQMMAqCCGhhcmJvcmNhMA0GCSqGSIb3DQEBCwUAA4IBAQB5
    av6bQXDnu/t9Pzrd/nhlbW+ObWQct/du/eT6RtZKZIy6a4muO7pK8x2P6NiajkIY
    vbZ2BqpQ9EZm1c9Y0s9SyGzAzoQSfAldr185s5O+siaC/R2Ec+VAM/5T77K2rcRx
    iyavZgUCXUj3meYl/JFI4XsGBha+lTCO4dWQJ4l5r8MSxL7krSOXH93GAVXJRZbs
    948wdWVQMCJR/EVbJxyozDhLJg6L9kYEhNUbO5W2OpSphmmGFMA3AGSBh1qJOYvH
    9zxlmVz6I9N514aWqlycLyautI9Tq+RzEnhZcyQyfU+mGGhslO0Dqt2wEY1NIQco
    Jex1H1mf3MTjcNTKQ9y6
    -----END CERTIFICATE-----
With this file updated appropriately, we can install the customized TKG Connectivity API on the management cluster (be sure your kubectl context is pointed back at the management cluster before applying these manifests):
ytt --ignore-unknown-comments -f manifests/tanzu-registry | kubectl apply -f -
namespace/tanzu-system-registry created
issuer.cert-manager.io/tanzu-registry-webhook created
certificate.cert-manager.io/tanzu-registry-webhook-certs created
service/tanzu-registry-webhook created
serviceaccount/tanzu-registry-webhook created
clusterrolebinding.rbac.authorization.k8s.io/tanzu-registry-webhook created
clusterrole.rbac.authorization.k8s.io/tanzu-registry-webhook created
deployment.apps/tanzu-registry-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/tanzu-registry-webhook created
configmap/tanzu-registry-configuration created
ytt -f manifests/tkg-connectivity-operator | kubectl apply -f -
namespace/tanzu-system-connectivity created
serviceaccount/tkg-connectivity-operator created
deployment.apps/tkg-connectivity-operator created
clusterrolebinding.rbac.authorization.k8s.io/tkg-connectivity-operator created
clusterrole.rbac.authorization.k8s.io/tkg-connectivity-operator created
configmap/tkg-connectivity-docker-image-config created
The most visible result of this exercise is a configmap in the newly-created tanzu-system-registry namespace called tanzu-registry-configuration.
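With your kubectl context set to the management cluster, you can view its contents with a describe:
kubectl -n tanzu-system-registry describe configmap tanzu-registry-configuration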
Name: tanzu-registry-configuration
Namespace: tanzu-system-registry
Labels: <none>
Annotations: <none>
Data
====
registry.client.enabled:
----
true
registry.client.fqdn:
----
harbor.corp.tanzu
registry.client.vip:
----
10.40.14.0
registry.client.bootstrapProxy:
----
false
registry.client.ca:
----
-----BEGIN CERTIFICATE-----
MIIDOzCCAiOgAwIBAgIQM8Oe9AWsu3SG5d7nRdXO2zANBgkqhkiG9w0BAQsFADAt
MRcwFQYDVQQKEw5Qcm9qZWN0IEhhcmJvcjESMBAGA1UEAxMJSGFyYm9yIENBMB4X
DTIwMTAxNTE3NTIzM1oXDTMwMTAxMzE3NTIzM1owLTEXMBUGA1UEChMOUHJvamVj
dCBIYXJib3IxEjAQBgNVBAMTCUhhcmJvciBDQTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAMg9d0SR2Ej4++m11rZ/evOIHou9Yhe/7W/XNGbYk9WIfmV4
zMrIC2OIhNkZ1nik6MmtL73Jzy75XiNpcblucO+IKNKG9+f5N1cAc9CLlIhl9uwW
zDY42eVDfp9o5b0iXJuAOFcqI7pVc82hqHI+oOfogCbjKYE1aF6Jbr/9jK1wdABi
oc1kco2bxwvZ8MIbD/NaIZKiGu7Aw+4qo0cdH0bJdB+Dh2JjeimJJl4HKcQp1hbX
mZWPyLfZ9w/oH5QwGQNIgVuQLVr2hulxkOAa5fGWQlGYPTiXaI6F9VbhGuKWIjBl
fpSEUiaWMbVmBBwOVQHnoKZDSC7Eh8DXqoeFMAECAwEAAaNXMFUwDgYDVR0PAQH/
BAQDAgIEMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8E
BTADAQH/MBMGA1UdEQQMMAqCCGhhcmJvcmNhMA0GCSqGSIb3DQEBCwUAA4IBAQB5
av6bQXDnu/t9Pzrd/nhlbW+ObWQct/du/eT6RtZKZIy6a4muO7pK8x2P6NiajkIY
vbZ2BqpQ9EZm1c9Y0s9SyGzAzoQSfAldr185s5O+siaC/R2Ec+VAM/5T77K2rcRx
iyavZgUCXUj3meYl/JFI4XsGBha+lTCO4dWQJ4l5r8MSxL7krSOXH93GAVXJRZbs
948wdWVQMCJR/EVbJxyozDhLJg6L9kYEhNUbO5W2OpSphmmGFMA3AGSBh1qJOYvH
9zxlmVz6I9N514aWqlycLyautI9Tq+RzEnhZcyQyfU+mGGhslO0Dqt2wEY1NIQco
Jex1H1mf3MTjcNTKQ9y6
-----END CERTIFICATE-----
registry.client.dnsOverride:
----
false
Events: <none>
This information is used to modify the containerd configuration on the nodes in the workload cluster such that the Harbor certificate is trusted and images can be pulled from it. If you SSH to one of the nodes, you can see this first-hand.
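For example, TKG’s Photon-based nodes are provisioned with a capv user and the SSH public key supplied at deployment time, so something like the following should work (substitute one of your own worker node IP addresses):
ssh capv@<worker-node-ip>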
ls /etc/containerd
config.toml
harbor_addon.toml
harbor.crt
# Use config version 2 to enable new configuration fields.
# Config file is parsed as version 1 by default.
version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.tkg.vmware.run/pause:3.2"
    [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.corp.tanzu".tls]
      ca_file = "/etc/containerd/harbor.crt"
-----BEGIN CERTIFICATE-----
MIIDOzCCAiOgAwIBAgIQM8Oe9AWsu3SG5d7nRdXO2zANBgkqhkiG9w0BAQsFADAt
MRcwFQYDVQQKEw5Qcm9qZWN0IEhhcmJvcjESMBAGA1UEAxMJSGFyYm9yIENBMB4X
DTIwMTAxNTE3NTIzM1oXDTMwMTAxMzE3NTIzM1owLTEXMBUGA1UEChMOUHJvamVj
dCBIYXJib3IxEjAQBgNVBAMTCUhhcmJvciBDQTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAMg9d0SR2Ej4++m11rZ/evOIHou9Yhe/7W/XNGbYk9WIfmV4
zMrIC2OIhNkZ1nik6MmtL73Jzy75XiNpcblucO+IKNKG9+f5N1cAc9CLlIhl9uwW
zDY42eVDfp9o5b0iXJuAOFcqI7pVc82hqHI+oOfogCbjKYE1aF6Jbr/9jK1wdABi
oc1kco2bxwvZ8MIbD/NaIZKiGu7Aw+4qo0cdH0bJdB+Dh2JjeimJJl4HKcQp1hbX
mZWPyLfZ9w/oH5QwGQNIgVuQLVr2hulxkOAa5fGWQlGYPTiXaI6F9VbhGuKWIjBl
fpSEUiaWMbVmBBwOVQHnoKZDSC7Eh8DXqoeFMAECAwEAAaNXMFUwDgYDVR0PAQH/
BAQDAgIEMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8E
BTADAQH/MBMGA1UdEQQMMAqCCGhhcmJvcmNhMA0GCSqGSIb3DQEBCwUAA4IBAQB5
av6bQXDnu/t9Pzrd/nhlbW+ObWQct/du/eT6RtZKZIy6a4muO7pK8x2P6NiajkIY
vbZ2BqpQ9EZm1c9Y0s9SyGzAzoQSfAldr185s5O+siaC/R2Ec+VAM/5T77K2rcRx
iyavZgUCXUj3meYl/JFI4XsGBha+lTCO4dWQJ4l5r8MSxL7krSOXH93GAVXJRZbs
948wdWVQMCJR/EVbJxyozDhLJg6L9kYEhNUbO5W2OpSphmmGFMA3AGSBh1qJOYvH
9zxlmVz6I9N514aWqlycLyautI9Tq+RzEnhZcyQyfU+mGGhslO0Dqt2wEY1NIQco
Jex1H1mf3MTjcNTKQ9y6
-----END CERTIFICATE-----
Now we can log into Harbor and test pushing an image.
docker login harbor.corp.tanzu -u admin -p VMware1!
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/ubuntu/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
df8698476c65: Pull complete
Digest: sha256:d366a4665ab44f0648d7a00ae3fae139d55e32f9712c67accd604bb55df9d05a
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest
docker tag busybox harbor.corp.tanzu/library/busybox
docker push harbor.corp.tanzu/library/busybox
The push refers to repository [harbor.corp.tanzu/library/busybox]
be8b8b42328a: Pushed
latest: digest: sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002 size: 527
You should also add the Harbor CA certificate to your trusted root certificate authority so that you don’t get prompted about a potentially insecure connection when you browse to the Harbor UI.
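On an Ubuntu/Debian jumpbox (an assumption; other operating systems have their own mechanisms), that might look like:
sudo cp /etc/docker/certs.d/harbor.corp.tanzu/ca.crt /usr/local/share/ca-certificates/harbor-ca.crt
sudo update-ca-certificates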

You can log in as admin with VMware1! as the password. Once logged in, if you drill down into the Library project you’ll see the busybox image that was just pushed.
You should be able to create a pod using the busybox image in the Harbor registry now. Save the following manifest as harbor-busybox.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: harbor.corp.tanzu/library/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
kubectl apply -f harbor-busybox.yaml
pod/busybox created
kubectl get events
78s Normal Scheduled pod/busybox Successfully assigned default/busybox to tkg-wld-md-0-7c74bfd7ff-6f492
74s Normal Pulling pod/busybox Pulling image "harbor.corp.tanzu/library/busybox"
70s Normal Pulled pod/busybox Successfully pulled image "harbor.corp.tanzu/library/busybox" in 3.893880792s
66s Normal Created pod/busybox Created container busybox
65s Normal Started pod/busybox Started container busybox
Bonus! If you really want to dig into what’s happening with Contour/Envoy, you can configure a port-forward on the Envoy pod so that you can access the Envoy administration interface:
ENVOY_POD=$(kubectl -n tanzu-system-ingress get pod -l app=envoy -o name | head -1)
kubectl -n tanzu-system-ingress port-forward $ENVOY_POD 9001
Forwarding from 127.0.0.1:9001 -> 9001
And you can then open a browser on the same system and navigate to http://127.0.0.1:9001.
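The certificate information shown below comes from the admin interface’s /certs endpoint, which you can also pull directly from the command line while the port-forward is running:
curl -s http://127.0.0.1:9001/certs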
If we dig into the certs, we can see that the Harbor certificate(s) has been installed:
{
"ca_cert": [],
"cert_chain": [
{
"path": "\u003cinline\u003e",
"serial_number": "93119ed885a70f9d6cb07bedbae26b7f",
"subject_alt_names": [
{
"dns": "harbor.corp.tanzu"
},
{
"dns": "notary.harbor.corp.tanzu"
}
],
"days_until_expiration": "3649",
"valid_from": "2020-09-28T17:47:22Z",
"expiration_time": "2030-09-26T17:47:22Z"
}
]
},
{
"ca_cert": [],
"cert_chain": []
},
{
"ca_cert": [],
"cert_chain": []
},
{
"ca_cert": [],
"cert_chain": [
{
"path": "\u003cinline\u003e",
"serial_number": "93119ed885a70f9d6cb07bedbae26b7f",
"subject_alt_names": [
{
"dns": "harbor.corp.tanzu"
},
{
"dns": "notary.harbor.corp.tanzu"
}
],
"days_until_expiration": "3649",
"valid_from": "2020-09-28T17:47:22Z",
"expiration_time": "2030-09-26T17:47:22Z"
}
Another fairly slick thing you can do is to get a visualization of the traffic flows through Contour. This is achieved via configuring a port-forward on the Contour pod and then saving Contour’s Directed Acyclic Graph (DAG) data into a png file.
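Note that the dot utility comes from Graphviz; if your workstation doesn’t already have it (assuming an Ubuntu/Debian system here), install it first:
sudo apt-get install -y graphviz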
CONTOUR_POD=$(kubectl -n tanzu-system-ingress get pod -l app=contour -o name | head -1)
kubectl -n tanzu-system-ingress port-forward $CONTOUR_POD 6060
Forwarding from 127.0.0.1:6060 -> 6060
curl localhost:6060/debug/dag | dot -T png > contour-dag.png
The resultant png file should look similar to the following:
I did a post a while back entitled How to use Contour to provide ingress for Prometheus, Grafana and AlertManager in a TKG cluster where I was using the kube-prometheus project in conjunction with Contour to get Prometheus, Grafana and AlertManager up and running in a TKG cluster. You can see that this process is simpler now with the extension framework.
The install is broken into two parts, with Prometheus and AlertManager first and Grafana second.
As with Contour and Harbor, the first thing to do is create the namespace and related role objects. For Prometheus, Grafana and AlertManager, that namespace name is tanzu-system-monitoring.
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/monitoring/prometheus/namespace-role.yaml
namespace/tanzu-system-monitoring created
serviceaccount/prometheus-extension-sa created
role.rbac.authorization.k8s.io/prometheus-extension-role created
rolebinding.rbac.authorization.k8s.io/prometheus-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/prometheus-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-extension-cluster-rolebinding created
There is a sample file for fine-tuning the Prometheus and AlertManager configuration at tkg-extensions-v1.2.0+vmware.1/extensions/monitoring/prometheus/vsphere/prometheus-data-values.yaml.example. You can see that this file is essentially empty:
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
monitoring:
  prometheus_server:
    image:
      repository: registry.tkg.vmware.run/prometheus
  alertmanager:
    image:
      repository: registry.tkg.vmware.run/prometheus
  kube_state_metrics:
    image:
      repository: registry.tkg.vmware.run/prometheus
  node_exporter:
    image:
      repository: registry.tkg.vmware.run/prometheus
  pushgateway:
    image:
      repository: registry.tkg.vmware.run/prometheus
  cadvisor:
    image:
      repository: registry.tkg.vmware.run/prometheus
  prometheus_server_configmap_reload:
    image:
      repository: registry.tkg.vmware.run/prometheus
  prometheus_server_init_container:
    image:
      repository: registry.tkg.vmware.run/prometheus
We’ll create a file similar to this one but with some extra information in it…primarily so that an ingress is created for Prometheus and AlertManager via our previously deployed Contour installation.
cat > prometheus-data-values.yaml <<EOF
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
monitoring:
  ingress:
    enabled: true
    virtual_host_fqdn: "prometheus.corp.tanzu"
    prometheus_prefix: "/"
    alertmanager_prefix: "/alertmanager/"
  prometheus_server:
    image:
      repository: registry.tkg.vmware.run/prometheus
  alertmanager:
    image:
      repository: registry.tkg.vmware.run/prometheus
  kube_state_metrics:
    image:
      repository: registry.tkg.vmware.run/prometheus
  node_exporter:
    image:
      repository: registry.tkg.vmware.run/prometheus
  pushgateway:
    image:
      repository: registry.tkg.vmware.run/prometheus
  cadvisor:
    image:
      repository: registry.tkg.vmware.run/prometheus
  prometheus_server_configmap_reload:
    image:
      repository: registry.tkg.vmware.run/prometheus
  prometheus_server_init_container:
    image:
      repository: registry.tkg.vmware.run/prometheus
EOF
You can see from this example that I’m specifying a hostname of prometheus.corp.tanzu and noting that the root of this address will direct to Prometheus while appending /alertmanager/ to the URL will direct to AlertManager.
We’ll use the prometheus-data-values.yaml file to create a secret with all of the configuration information that Prometheus and AlertManager will need.
kubectl create secret generic prometheus-data-values --from-file=values.yaml=prometheus-data-values.yaml -n tanzu-system-monitoring
secret/prometheus-data-values created
You can inspect the secret that was created in the same fashion as the contour-data-values and harbor-data-values secrets:
kubectl -n tanzu-system-monitoring get secrets prometheus-data-values -o 'go-template={{ index .data "values.yaml"}}' |base64 -d
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
monitoring:
  ingress:
    enabled: true
    virtual_host_fqdn: "prometheus.corp.tanzu"
    prometheus_prefix: "/"
    alertmanager_prefix: "/alertmanager/"
  prometheus_server:
    image:
      repository: registry.tkg.vmware.run/prometheus
  alertmanager:
    image:
      repository: registry.tkg.vmware.run/prometheus
  kube_state_metrics:
    image:
      repository: registry.tkg.vmware.run/prometheus
  node_exporter:
    image:
      repository: registry.tkg.vmware.run/prometheus
  pushgateway:
    image:
      repository: registry.tkg.vmware.run/prometheus
  cadvisor:
    image:
      repository: registry.tkg.vmware.run/prometheus
  prometheus_server_configmap_reload:
    image:
      repository: registry.tkg.vmware.run/prometheus
  prometheus_server_init_container:
    image:
      repository: registry.tkg.vmware.run/prometheus
And now we can install the Prometheus and AlertManager extension. We don’t need to make any changes to the supplied yaml file but you can take a look at it to better understand what will be deployed.
# prometheus k14s objects managed by extension manager
---
apiVersion: clusters.tmc.cloud.vmware.com/v1alpha1
kind: Extension
metadata:
  name: prometheus
  namespace: tanzu-system-monitoring
  annotations:
    tmc.cloud.vmware.com/managed: "false"
spec:
  description: prometheus
  version: "v2.17.1_vmware.1"
  name: prometheus
  namespace: tanzu-system-monitoring
  deploymentStrategy:
    type: KUBERNETES_NATIVE
  objects: |
    apiVersion: kappctrl.k14s.io/v1alpha1
    kind: App
    metadata:
      name: prometheus
      annotations:
        tmc.cloud.vmware.com/orphan-resource: "true"
    spec:
      syncPeriod: 5m
      serviceAccountName: prometheus-extension-sa
      fetch:
        - image:
            url: registry.tkg.vmware.run/tkg-extensions-templates:v1.2.0_vmware.1
      template:
        - ytt:
            ignoreUnknownComments: true
            paths:
              - tkg-extensions/common
              - tkg-extensions/monitoring/prometheus
            inline:
              pathsFrom:
                - secretRef:
                    name: prometheus-data-values
      deploy:
        - kapp:
            rawOptions: ["--wait-timeout=5m"]
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/monitoring/prometheus/prometheus-extension.yaml
extension.clusters.tmc.cloud.vmware.com/prometheus created
There will be loads of resources getting created in the tanzu-system-monitoring namespace:
kubectl -n tanzu-system-monitoring get all
NAME READY STATUS RESTARTS AGE
pod/prometheus-alertmanager-7697c69b54-qnw5f 2/2 Running 0 78s
pod/prometheus-cadvisor-26gc7 1/1 Running 0 78s
pod/prometheus-kube-state-metrics-776cd658b4-4gndk 1/1 Running 0 79s
pod/prometheus-node-exporter-mbp2k 1/1 Running 0 79s
pod/prometheus-pushgateway-6c69fdcb9d-tb42s 1/1 Running 0 78s
pod/prometheus-server-7c4b69bbc6-npfjp 2/2 Running 0 79s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-alertmanager ClusterIP 100.70.240.63 <none> 80/TCP 81s
service/prometheus-kube-state-metrics ClusterIP None <none> 80/TCP,81/TCP 80s
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 80s
service/prometheus-pushgateway ClusterIP 100.69.153.228 <none> 9091/TCP 78s
service/prometheus-server ClusterIP 100.66.3.228 <none> 80/TCP 80s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-cadvisor 1 1 1 1 1 <none> 79s
daemonset.apps/prometheus-node-exporter 1 1 1 1 1 <none> 80s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-alertmanager 1/1 1 1 81s
deployment.apps/prometheus-kube-state-metrics 1/1 1 1 80s
deployment.apps/prometheus-pushgateway 1/1 1 1 79s
deployment.apps/prometheus-server 1/1 1 1 81s
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-alertmanager-84bc57785b 1 1 1 80s
replicaset.apps/prometheus-kube-state-metrics-776cd658b4 1 1 1 80s
replicaset.apps/prometheus-pushgateway-6c69fdcb9d 1 1 1 78s
replicaset.apps/prometheus-server-859b64755d 1 1 1 80s
There are also a few persistentvolumeclaims and persistentvolumes that are created to support Prometheus and AlertManager.
kubectl -n tanzu-system-monitoring get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-5144b72f-1201-44ba-986a-8e338f44480e 2Gi RWO Delete Bound tanzu-system-monitoring/prometheus-alertmanager standard 3m45s
persistentvolume/pvc-60da95e1-aa57-40e2-a4ec-8603c9f8f115 8Gi RWO Delete Bound tanzu-system-monitoring/prometheus-server standard 3m48s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/prometheus-alertmanager Bound pvc-5144b72f-1201-44ba-986a-8e338f44480e 2Gi RWO standard 3m53s
persistentvolumeclaim/prometheus-server Bound pvc-60da95e1-aa57-40e2-a4ec-8603c9f8f115 8Gi RWO standard 3m52s
As with Harbor, we should see a new httpproxy resource created.
kubectl -n tanzu-system-monitoring get httpproxy
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
prometheus-httpproxy prometheus.corp.tanzu prometheus-tls valid valid HTTPProxy
We can take a closer look at the prometheus-httpproxy resource and confirm that it is set up to provide ingress for both Prometheus and AlertManager (output is truncated).
kubectl -n tanzu-system-monitoring describe httpproxy prometheus-httpproxy
Spec:
  Routes:
    Conditions:
      Prefix:  /
    Path Rewrite Policy:
      Replace Prefix:
        Prefix:       /
        Replacement:  /
    Services:
      Name:  prometheus-server
      Port:  80
    Conditions:
      Prefix:  /alertmanager/
    Path Rewrite Policy:
      Replace Prefix:
        Prefix:       /alertmanager/
        Replacement:  /
    Services:
      Name:  prometheus-alertmanager
      Port:  80
  Virtualhost:
    Fqdn:  prometheus.corp.tanzu
    Tls:
      Secret Name:  prometheus-tls
Status:
  Current Status:  valid
  Description:     valid HTTPProxy
  Load Balancer:
    Ingress:
      Ip:  10.40.14.32
After adding a DNS record pointing prometheus.corp.tanzu to 10.40.14.32, we should be able to access Prometheus and AlertManager. However, we'll get a certificate error and I want to avoid that. We can download the certificate that was generated and then import it into the trusted root certificate authority store.
kubectl -n tanzu-system-monitoring get secrets prometheus-tls -o 'go-template={{ index .data "ca.crt"}}' |base64 -d
-----BEGIN CERTIFICATE-----
MIIDTzCCAjegAwIBAgIQD9l8NB8wngFtiekXdV6sCjANBgkqhkiG9w0BAQsFADA1
MRswGQYDVQQKExJQcm9qZWN0IFByb21ldGhldXMxFjAUBgNVBAMTDVByb21ldGhl
dXMgQ0EwHhcNMjAxMDA1MjAyMDI1WhcNMzAxMDAzMjAyMDI1WjA1MRswGQYDVQQK
ExJQcm9qZWN0IFByb21ldGhldXMxFjAUBgNVBAMTDVByb21ldGhldXMgQ0EwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCs0Mjt1RjOE8tvUWne5u6XjFmu
jNO61ft3Or0zubpCbYPN9pxrL54Suq3RYd+OhSgm0jlvy36tr3TmsMhf0gzG1xN5
m2zlMYqgoH9B2G+XlL67jIIHYQdsua5zBkMLf+0AbkCl2uwINKrLl8kVhcbmFsjU
qiMveYNhI6dYWuiHb5XCVqckG7n6+j9IkDV0gp6fb2tMHDxRbAZxtApX1q4XUGMQ
WfYJ9W+ZhqNFxuBZhRKEZFPgIlYqBqKStXWu0c6HH9Rsy+vNUgyfRjEvsB0IHzEQ
FyrOPiB+5yZ7NhYJykRoBx18oBCk/lIaZdIFPl7w1pCQLXmbuvAjXve6mgUhAgMB
AAGjWzBZMA4GA1UdDwEB/wQEAwICBDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYB
BQUHAwIwDwYDVR0TAQH/BAUwAwEB/zAXBgNVHREEEDAOggxwcm9tZXRoZXVzY2Ew
DQYJKoZIhvcNAQELBQADggEBAHSyhQ/PMB71BafIug2l4X29WoO4oPI3BzhuSEa3
DCcN5hRCv3qbztLm5yOAN91E+sEztI7o6WW6wwG2YjCWlirKPt0B9CIBgSnJIQJl
D1nWz2MFHL+TLTS6BP57ZEXZ4pu+9nL67RRiMlK1268LOlKb0VWDPvY9gKvFNNl9
PtnV0vCkZXs0eqZUHqgnNVNAC663kTakeRIcsSlDsDyNsmyRB1geDDL2yg0j9/HL
v6FsQ9E1FeOzfLiSb6VjmDH3Aq9zgOUCXAjjFawTEDh5D3Y5PiV79EJ7QltFwx+D
sxXj+iNnZjf47f6RoPEQVMJVMjWpIoqCFsA24hzlZTshZCA=
-----END CERTIFICATE-----
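If you'd like to verify connectivity from the command line before importing the CA into your trust store, you can save the certificate to a file and point curl at it. This is just a quick sketch: prometheus-ca.crt is a local file name I've chosen, and /-/healthy is Prometheus' standard health endpoint.
kubectl -n tanzu-system-monitoring get secrets prometheus-tls -o 'go-template={{ index .data "ca.crt"}}' |base64 -d > prometheus-ca.crt
curl --cacert prometheus-ca.crt https://prometheus.corp.tanzu/-/healthy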

Installing Grafana is going to be nearly identical with the exception of the files being located in a different folder.
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/monitoring/grafana/namespace-role.yaml
namespace/tanzu-system-monitoring unchanged
serviceaccount/grafana-extension-sa created
role.rbac.authorization.k8s.io/grafana-extension-role created
rolebinding.rbac.authorization.k8s.io/grafana-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/grafana-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/grafana-extension-cluster-rolebinding created
Again, there is a sample file for fine-tuning the Grafana configuration at tkg-extensions-v1.2.0+vmware.1/extensions/monitoring/grafana/vsphere/grafana-data-values.yaml.example.
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
monitoring:
  grafana:
    image:
      repository: "registry.tkg.vmware.run/grafana"
    secret:
      admin_password: <ADMIN_PASSWORD>
  grafana_init_container:
    image:
      repository: "registry.tkg.vmware.run/grafana"
  grafana_sc_dashboard:
    image:
      repository: "registry.tkg.vmware.run/grafana"
We’ll create a file similar to this one but with some extra information in it, primarily so that an Ingress can be created for Grafana via our previously deployed Contour application. We’ll also need to base64 encode our desired admin password (VMware1! in this example).
echo -n "VMware1!" | base64
Vk13YXJlMSE=
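If you want to double-check the encoded value, it decodes straight back (the -n on the echo matters, since a trailing newline would otherwise end up in the encoded value):
echo -n "Vk13YXJlMSE=" | base64 -d
VMware1!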
cat > grafana-data-values.yaml <<EOF
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
monitoring:
  grafana:
    image:
      repository: "registry.tkg.vmware.run/grafana"
    ingress:
      enabled: true
      virtual_host_fqdn: "grafana.corp.tanzu"
      prefix: "/"
    secret:
      admin_password: "Vk13YXJlMSE="
  grafana_init_container:
    image:
      repository: "registry.tkg.vmware.run/grafana"
  grafana_sc_dashboard:
    image:
      repository: "registry.tkg.vmware.run/grafana"
EOF
You can see from this example that I’m specifying a hostname of grafana.corp.tanzu. We’ll use the grafana-data-values.yaml file to create a secret with all of the configuration information that Grafana will need.
kubectl create secret generic grafana-data-values --from-file=values.yaml=grafana-data-values.yaml -n tanzu-system-monitoring
secret/grafana-data-values created
You can inspect the secret that was created in the same fashion that was done for the contour-data-values and harbor-data-values secrets:
kubectl -n tanzu-system-monitoring get secrets grafana-data-values -o 'go-template={{ index .data "values.yaml"}}' |base64 -d
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
monitoring:
  grafana:
    image:
      repository: "registry.tkg.vmware.run/grafana"
    ingress:
      enabled: true
      virtual_host_fqdn: "grafana.corp.tanzu"
      prefix: "/"
    secret:
      admin_password: "Vk13YXJlMSE="
  grafana_init_container:
    image:
      repository: "registry.tkg.vmware.run/grafana"
  grafana_sc_dashboard:
    image:
      repository: "registry.tkg.vmware.run/grafana"
And now we can install the Grafana extension. We don’t need to make any changes to the supplied yaml file but you can take a look at it to better understand what will be deployed.
# grafana k14s objects managed by extension manager
---
apiVersion: clusters.tmc.cloud.vmware.com/v1alpha1
kind: Extension
metadata:
  name: grafana
  namespace: tanzu-system-monitoring
  annotations:
    tmc.cloud.vmware.com/managed: "false"
spec:
  description: grafana
  version: "v7.0.3_vmware.1"
  name: grafana
  namespace: tanzu-system-monitoring
  deploymentStrategy:
    type: KUBERNETES_NATIVE
  objects: |
    apiVersion: kappctrl.k14s.io/v1alpha1
    kind: App
    metadata:
      name: grafana
      annotations:
        tmc.cloud.vmware.com/orphan-resource: "true"
    spec:
      syncPeriod: 5m
      serviceAccountName: grafana-extension-sa
      fetch:
        - image:
            url: registry.tkg.vmware.run/tkg-extensions-templates:v1.2.0_vmware.1
      template:
        - ytt:
            ignoreUnknownComments: true
            paths:
              - tkg-extensions/common
              - tkg-extensions/monitoring/grafana
            inline:
              pathsFrom:
                - secretRef:
                    name: grafana-data-values
      deploy:
        - kapp:
            rawOptions: ["--wait-timeout=5m"]
kubectl apply -f tkg-extensions-v1.2.0+vmware.1/extensions/monitoring/grafana/grafana-extension.yaml
extension.clusters.tmc.cloud.vmware.com/grafana created
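As with Prometheus, you can confirm that kapp-controller has finished reconciling before digging into the individual resources; listing the App resources in the namespace should eventually show both extensions with a description along the lines of "Reconcile succeeded":
kubectl -n tanzu-system-monitoring get app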
The Grafana resources are also created in the tanzu-system-monitoring namespace:
kubectl -n tanzu-system-monitoring get all --selector=app.kubernetes.io/name=grafana
NAME READY STATUS RESTARTS AGE
pod/grafana-65bdbc5ff8-hgq2w 2/2 Running 0 4m17s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana ClusterIP 100.69.127.167 <none> 80/TCP 5m49s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1/1 1 1 5m49s
NAME DESIRED CURRENT READY AGE
replicaset.apps/grafana-65bdbc5ff8 1 1 1 4m17s
A persistentvolumeclaim and persistentvolume are also created to support Grafana.
kubectl -n tanzu-system-monitoring get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/grafana-pvc Bound pvc-eb4a227c-ec9a-4b63-ad2b-a072f5cb3fd3 2Gi RWO standard 20m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-eb4a227c-ec9a-4b63-ad2b-a072f5cb3fd3 2Gi RWO Delete Bound tanzu-system-monitoring/grafana-pvc standard 20m
As with Prometheus, we should see a new httpproxy resource created.
kubectl -n tanzu-system-monitoring get httpproxy --selector=app=grafana
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
grafana-httpproxy grafana.corp.tanzu grafana-tls valid valid HTTPProxy
Just like with Prometheus, after adding a DNS record pointing grafana.corp.tanzu to 10.40.14.32, we should be able to access Grafana. However, we’ll get a certificate error and I want to avoid that. We can download the certificate that was generated and then import it into the trusted root certificate authority store.
kubectl -n tanzu-system-monitoring get secrets grafana-tls -o 'go-template={{ index .data "ca.crt"}}' |base64 -d
-----BEGIN CERTIFICATE-----
MIIDQTCCAimgAwIBAgIRAJ/7Daeik1BYtlIJmSQm6igwDQYJKoZIhvcNAQELBQAw
LzEYMBYGA1UEChMPUHJvamVjdCBHcmFmYW5hMRMwEQYDVQQDEwpHcmFmYW5hIENB
MB4XDTIwMTAwNTIxMDIxNloXDTMwMTAwMzIxMDIxNlowLzEYMBYGA1UEChMPUHJv
amVjdCBHcmFmYW5hMRMwEQYDVQQDEwpHcmFmYW5hIENBMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAttcG48vPoeeq9rTOyS2HR20akY8P9di/zPujCmLk
GS8fdDk91pEobrOjfqIGFBvn0xOKbBdZZmz9Ula0PclBUuA2eojP4MDXC4KteQie
92qmI3xDvS3dmmdTztwP6lSigAKZsyyH3MHljfRlkioeDuFqzdPcmeh1PbOHZ0d4
jyGhS7dzTDhksr2qbiZ45eIfEzah2/Mx5ucq4sHmj/rRmxuHJmXYzGWommra5Y2Z
++ZKYpPDUntwXWtGgPDBfmtXFImdOww6ApT8dwxSrt+Gk1Y0lzamjcA5bL2HxWgM
V7RfxxM196NgE0Von/lvYY2GNSQScwYQ8sliMukmzNphjwIDAQABo1gwVjAOBgNV
HQ8BAf8EBAMCAgQwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMA8GA1Ud
EwEB/wQFMAMBAf8wFAYDVR0RBA0wC4IJZ3JhZmFuYWNhMA0GCSqGSIb3DQEBCwUA
A4IBAQBsCkl3XmZ/5qajB2oHC9bGkjgUM+bg1zWxBlB3X4I10+SXhkA0CBCvhzCK
SXK1XaJBKZwW8N9p3KThVgdeZT3vagIt33QMgU80wmmdbo0xXwbBnHzPKNirhK4J
XbWYNCQlA/9LMKo2Zt8B2d4ZYMrg6A48vW+uNkWLTdj5CoTCxXzvXkG72wcydAPy
YEOC0soRCe2FRfiDOrVK4n9c6wMJDdsPsI2AEjVbJ/oXECYOap1F1gd1A5xGYKpb
o0I2yzrta4qjTSbrjEGSUc29U4SZPHMCMK3x/1QJBmTSFPRd7pMyjMhRZRT5neXV
SMgWPMyqfwgdH1MyvykBdTghZ4pA
-----END CERTIFICATE-----
You can log in here as the admin user with the password specified in the grafana-data-values secret (VMware1!).
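If you'd rather confirm the credentials from the command line, Grafana's HTTP API accepts basic authentication. This is a quick sketch, reusing the same CA-extraction approach as before (grafana-ca.crt is just a local file name I've chosen, and /api/org is a standard Grafana API endpoint that requires authentication):
kubectl -n tanzu-system-monitoring get secrets grafana-tls -o 'go-template={{ index .data "ca.crt"}}' |base64 -d > grafana-ca.crt
curl --cacert grafana-ca.crt -u admin:'VMware1!' https://grafana.corp.tanzu/api/org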

I’ll go into deploying Dex and Gangway (since those are now official extensions) in a future blog post so you’ll be able to see how the process for configuring these applications has changed from the 1.1.x version of TKG.