How to configure external-dns with Microsoft DNS in TKG 1.3 (plus Harbor and Contour)

External-DNS is an open source project that is newly included in TKG 1.3. External-DNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers. TKG 1.3 uses external-DNS to assist with service discovery, as it will automatically create DNS records for httpproxy resources created via Contour. AWS (Route53), Azure, and RFC2136 (BIND) are currently supported, but I’m going to focus on RFC2136 since this is what is needed to work with Microsoft DNS.

Configuring external-DNS to work with Microsoft DNS is documented on GitHub but since we need to deploy it via an extension/app, there are a few tweaks we need to make to the documented procedure.

Check DNS settings

The first thing I found that I needed to do was to check my DNS settings. There are two settings that you need to be sure are configured appropriately or it’s not going to work.

The first is the Zone Transfers setting. By default, this is not enabled at all so I set it to allow transfers To any server.

The second setting is Dynamic Updates. Currently, external-DNS only supports insecure updates to Microsoft DNS so I had to change this setting to Nonsecure and secure.
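
If you would rather script these two changes than click through the DNS Manager UI, they can also be made with the DnsServer PowerShell module on the DNS server (a sketch, with corp.tanzu standing in for your zone name):

Set-DnsServerPrimaryZone -Name corp.tanzu -SecureSecondaries TransferAnyServer
Set-DnsServerPrimaryZone -Name corp.tanzu -DynamicUpdate NonsecureAndSecure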

Deploy the extension manager, kapp-controller and cert-manager to a workload cluster

This is nearly identical to the process I documented in an earlier post, Working with TKG Extensions and Shared Services in TKG 1.2, so I won’t go into too much detail here. You obviously need to have already deployed a TKG 1.3 management cluster and a workload cluster, and switched your context to the workload cluster. I have a writeup on deploying TKG 1.3 at Installing Tanzu Kubernetes Grid 1.3 on vSphere with NSX Advanced Load Balancer if you need to start at the beginning.
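
As a reminder, switching your kubectl context to the workload cluster looks something like the following (the context name shown is hypothetical; substitute your own cluster name):

kubectl config use-context my-workload-cluster-admin@my-workload-cluster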

Deploy the extension manager:

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/tmc-extension-manager.yaml

namespace/vmware-system-tmc created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/agents.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionconfigs.intents.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionintegrations.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionresourceowners.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensions.clusters.tmc.cloud.vmware.com created
serviceaccount/extension-manager created
clusterrole.rbac.authorization.k8s.io/extension-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/extension-manager-rolebinding created
service/extension-manager-service created
deployment.apps/extension-manager created

Deploy the kapp controller:

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/kapp-controller.yaml

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/tkg-system configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/kapp-controller-sa configured
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/apps.kappctrl.k14s.io configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/kapp-controller configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/kapp-controller-cluster-role configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/kapp-controller-cluster-role-binding configured

Deploy cert-manager:

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/cert-manager/

namespace/cert-manager created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

Configure and deploy external-DNS

Similar to other extensions in earlier versions of TKG, the first thing to do is to create the namespace and roles.

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/service-discovery/external-dns/namespace-role.yaml

namespace/tanzu-system-service-discovery created
serviceaccount/external-dns-extension-sa created
role.rbac.authorization.k8s.io/external-dns-extension-role created
rolebinding.rbac.authorization.k8s.io/tanzu-system-service-discovery-rolebinding created
clusterrole.rbac.authorization.k8s.io/external-dns-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-extension-cluster-rolebinding created

Next we need to create a copy of the default data-values file so that we can customize it. There are a few different example files present for this extension, but I’m working with the external-dns-data-values-rfc2136-with-contour.yaml.example file.

cp tkg-extensions-v1.3.0+vmware.1/extensions/service-discovery/external-dns/external-dns-data-values-rfc2136-with-contour.yaml.example external-dns-data-values-rfc2136-with-contour.yaml

#@data/values
#@overlay/match-child-defaults missing_ok=True
---
externalDns:
  image:
    repository: projects.registry.vmware.com/tkg
  deployment:
    #@overlay/replace
    args:
    - --txt-owner-id=k8s
    - --provider=rfc2136
    - --rfc2136-host=192.168.0.1 #! IP of RFC2136 compatible dns server
    - --rfc2136-port=53
    - --rfc2136-zone=my-zone.example.org #! zone where services are deployed
    - --rfc2136-tsig-secret=REPLACE_ME_WITH_TSIG_SECRET #! TSIG key authorized to update the DNS server
    - --rfc2136-tsig-secret-alg=hmac-sha256
    - --rfc2136-tsig-keyname=externaldns-key
    - --rfc2136-tsig-axfr
    - --source=service
    - --source=contour-httpproxy #! export contour HTTPProxy objs
    - --domain-filter=my-zone.example.org #! zone where services are deployed

I’ll be making a number of changes to this file to allow external-dns to communicate with my Microsoft DNS implementation:

  • setting the rfc2136-host value to controlcenter.corp.tanzu, the FQDN of my AD/DNS server
  • setting the rfc2136-zone value to corp.tanzu, the DNS zone name
  • adding the rfc2136-insecure flag to allow external-dns to communicate without authentication
  • removing the rfc2136-tsig-secret, rfc2136-tsig-secret-alg and rfc2136-tsig-keyname flags, as they won’t be needed
  • setting the domain-filter value to corp.tanzu, the DNS zone name

The updated file looks like the following:

#@data/values
#@overlay/match-child-defaults missing_ok=True
---
externalDns:
  image:
    repository: projects.registry.vmware.com/tkg
  deployment:
    #@overlay/replace
    args:
    - --txt-owner-id=k8s
    - --provider=rfc2136
    - --rfc2136-host=controlcenter.corp.tanzu
    - --rfc2136-port=53
    - --rfc2136-zone=corp.tanzu
    - --rfc2136-insecure
    - --rfc2136-tsig-axfr
    - --source=service
    - --source=contour-httpproxy
    - --domain-filter=corp.tanzu

One thing to note here is that one of the source values is contour-httpproxy; this means that external-dns is going to monitor httpproxy resources. You could configure a different source if you’re not using Contour.
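
For example, if you were using a standard Kubernetes Ingress controller instead of Contour, the source args might look like the following (a sketch; see the external-dns project documentation for all supported sources):

    - --source=service
    - --source=ingress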

With these changes in place, we can use this file to create a secret that will be used for the external-dns configuration.

kubectl create secret generic external-dns-data-values --from-file=values.yaml=external-dns-data-values-rfc2136-with-contour.yaml -n tanzu-system-service-discovery

secret/external-dns-data-values created
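
If you need to adjust these data values later, one approach is to regenerate the secret in place with a client-side dry run rather than deleting and recreating it (a sketch of the pattern):

kubectl create secret generic external-dns-data-values --from-file=values.yaml=external-dns-data-values-rfc2136-with-contour.yaml -n tanzu-system-service-discovery --dry-run=client -o yaml | kubectl apply -f -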

And now we can deploy the external-dns extension, which will in turn create all other needed components.

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/service-discovery/external-dns/external-dns-extension.yaml

extension.clusters.tmc.cloud.vmware.com/external-dns created

We can check on the status of this extension from a high level by inspecting the external-dns app.

kubectl -n tanzu-system-service-discovery get app external-dns -o yaml

  inspect:
    exitCode: 0
    stdout: |-
      Target cluster 'https://100.64.0.1:443'
      07:26:34PM: debug: Resources: Ignoring group version: schema.GroupVersionResource{Group:"stats.antrea.tanzu.vmware.com", Version:"v1alpha1", Resource:"antreanetworkpolicystats"}
      07:26:34PM: debug: Resources: Ignoring group version: schema.GroupVersionResource{Group:"stats.antrea.tanzu.vmware.com", Version:"v1alpha1", Resource:"networkpolicystats"}
      Resources in app 'external-dns-ctrl'
      Namespace                       Name                            Kind                Owner    Conds.  Rs  Ri  Age
      (cluster)                       external-dns                    ClusterRole         kapp     -       ok  -   11s
      ^                               external-dns-viewer             ClusterRoleBinding  kapp     -       ok  -   11s
      ^                               tanzu-system-service-discovery  Namespace           kapp     -       ok  -   2m
      tanzu-system-service-discovery  external-dns                    Deployment          kapp     2/2 t   ok  -   11s
      ^                               external-dns                    ServiceAccount      kapp     -       ok  -   11s
      ^                               external-dns-5f7f698b8d         ReplicaSet          cluster  -       ok  -   11s
      ^                               external-dns-5f7f698b8d-q52bn   Pod                 cluster  4/4 t   ok  -   10s
      Rs: Reconcile state
      Ri: Reconcile information
      7 resources
      Succeeded
    updatedAt: "2021-03-30T19:26:34Z"
  observedGeneration: 3
  template:
    exitCode: 0
    updatedAt: "2021-03-30T19:26:34Z"

What we’re looking for in this output is that all of the resources are deployed and healthy and that the reconciliation state is Succeeded.
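
If you just want the high-level status rather than the full YAML, you can also get the app resource on its own and check its description (output columns may vary slightly by kapp-controller version):

kubectl -n tanzu-system-service-discovery get app external-dns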

There will be a single pod running in the tanzu-system-service-discovery namespace.

kubectl -n tanzu-system-service-discovery get po

NAME                            READY   STATUS    RESTARTS   AGE
external-dns-5fc76fcb7b-kc8g2   1/1     Running   1          76m

You should see messages similar to the following if you check the logs for this pod:
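
To tail the logs without looking up the generated pod name, you can reference the deployment instead (the pod name suffix will differ in your environment):

kubectl -n tanzu-system-service-discovery logs deployment/external-dns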

time="2021-03-31T17:50:18Z" level=info msg="Instantiating new Kubernetes client"
time="2021-03-31T17:50:18Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2021-03-31T17:50:18Z" level=info msg="Created Kubernetes client https://100.64.0.1:443"
time="2021-03-31T17:50:19Z" level=info msg="Created Dynamic Kubernetes client https://100.64.0.1:443"
time="2021-03-31T17:50:20Z" level=info msg="Configured RFC2136 with zone 'corp.tanzu.' and nameserver '192.168.110.10:53'"

And that’s pretty much it. We can’t see that external-dns has actually done anything since there are no httpproxy resources yet, so let’s get to creating some.

Deploy Contour

Deploying Contour is very similar to the process I noted in Working with TKG Extensions and Shared Services in TKG 1.2.

The first thing we need to do is create the namespace and roles.

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/ingress/contour/namespace-role.yaml

namespace/tanzu-system-ingress created
serviceaccount/contour-extension-sa created
role.rbac.authorization.k8s.io/contour-extension-role created
rolebinding.rbac.authorization.k8s.io/contour-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/contour-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/contour-extension-cluster-rolebinding created

A welcome addition in 1.3 is a pre-configured data-values file for use with a load balancer. Previously, you had to add the following to get this to work:

  service:
    type: LoadBalancer

Make a copy of this new data-values file:

cp tkg-extensions-v1.3.0+vmware.1/extensions/ingress/contour/vsphere/contour-data-values-lb.yaml.example contour-data-values-lb.yaml

Since we don’t need to make any changes to it, we can just create the necessary secret from this file.

kubectl create secret generic contour-data-values --from-file=values.yaml=contour-data-values-lb.yaml -n tanzu-system-ingress

secret/contour-data-values created

And then the extension itself can be created.

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/ingress/contour/contour-extension.yaml

extension.clusters.tmc.cloud.vmware.com/contour created

Since we requested a service of type LoadBalancer, NSX ALB should provision one for us.

kubectl -n tanzu-system-ingress get svc

NAME      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
contour   ClusterIP      100.64.75.181                    8001/TCP                     2h
envoy     LoadBalancer   100.65.219.204   192.168.220.2   80:30849/TCP,443:32450/TCP   2h

Very shortly after creating the Contour extension, you should see your NSX ALB Service Engines (SEs) get created (unless they already existed), since a service of type LoadBalancer is being provisioned and SEs are needed to process its traffic. If you check out the status of this Virtual Service in the NSX ALB UI, you’ll see that it’s all red.

This is okay though, since there is really nothing listening at the other end of this service yet; that won’t change until an httpproxy resource is deployed.
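
A quick way to confirm this is to check for httpproxy resources across all namespaces; at this point, kubectl should report that none are found:

kubectl get httpproxy -A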

Deploy Harbor

As with the other extensions, create the namespace and roles first.

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/registry/harbor/namespace-role.yaml

namespace/tanzu-system-registry created
serviceaccount/harbor-extension-sa created
role.rbac.authorization.k8s.io/harbor-extension-role created
rolebinding.rbac.authorization.k8s.io/harbor-extension-rolebinding created
clusterrole.rbac.authorization.k8s.io/harbor-extension-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/harbor-extension-cluster-rolebinding created

And make a copy of the data-values file.

cp tkg-extensions-v1.3.0+vmware.1/extensions/registry/harbor/harbor-data-values.yaml.example harbor-data-values.yaml

You can use the same generate-passwords.sh script noted in my post Working with TKG Extensions and Shared Services in TKG 1.2 to autopopulate the passwords in this file, and then manually update the harborAdminPassword value as well as the hostname value. The hostname value is the key piece for external-dns, as it has to be in the DNS zone that external-dns is monitoring. In this example, the hostname is harbor.corp.tanzu, which is in the corp.tanzu DNS zone, so it should be good to go.
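
For reference, the script ships with the extensions bundle and takes the data-values file as its argument (assuming the copy of the file made above is in your working directory):

bash tkg-extensions-v1.3.0+vmware.1/extensions/registry/harbor/generate-passwords.sh harbor-data-values.yaml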

I’m making one additional change from what I did in the past and using my own wildcard certificate that is already trusted throughout my environment. With this in place, the relevant portion of the file looks like the following:

hostname: harbor.corp.tanzu
# The network port of the Envoy service in Contour or other Ingress Controller.
port:
  https: 443
# [Optional] The certificate for the ingress if you want to use your own TLS certificate.
# We will issue the certificate by cert-manager when it's empty.
tlsCertificate:
  # [Required] the certificate
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    MIIHiDCCBXCgAwIBAgITHQAAAAkDm8eswM8dBgAAAAAACTANBgkqhkiG9w0BAQsF
    ADBIMRUwEwYKCZImiZPyLGQBGRYFdGFuenUxFDASBgoJkiaJk/IsZAEZFgRjb3Jw
    MRkwFwYDVQQDExBDT05UUk9MQ0VOVEVSLUNBMB4XDTIxMDIwOTE2MjUyNFoXDTIz
    MDIwOTE2MjUyNFowcjELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWEx
    EjAQBgNVBAcTCVBhbG8gQWx0bzEPMA0GA1UEChMGVk13YXJlMRIwEAYDVQQLEwlU
    S0dJIDEuMTAxFTATBgNVBAMMDCouY29ycC50YW56dTCCAiIwDQYJKoZIhvcNAQEB
    BQADggIPADCCAgoCggIBAL53OJVD9TUtRpLWHqLr9OpUqF8HZOMG/L8/t8QeDzAE
    z2jIGC+ImCS2QUtf+t6NFU+1pDrcjZgfJE4KWoowQlCfwKPQr4YFSyoMN44RcmOD
    lSTfYEcjrLWWn+XyU1AUDilhoceTfdZei/1Q3mXxUZLmkrqVOjucjhpOr2gmlD55
    FEYeJBplBySsdcg9x0ey1+d/Ly7F2v4IWr91hDyNIJleUBbpF/atjhAazrRM9NLz
    H9lp7FE/EEskN1ZzChpQGdcamUEcIlr4ROTw2Jsc9zL9AEw8JoxjYlH7oIEHPVN9
    uwa7Ni3Yq9VWWFjZfhNXZQaz8aSQLpUHAgrTFPDkJNcebMFjnNR5exjTcffCV2I1
    F0pgh4A5KvGHjn2j13mtxa8W7wGtBssNFmN/q1rKnGLjwwMRI1g78KWS1zuA3Hc8
    H67YpoOV4LA31uYsdaTQvc16Qb81DKaiYAvuI4B/+f6OCEAvNIX3C/Ee3XXZB5j8
    JAtpTtBasbxAFplntvljfjlcgbsdJ+lKMUInX4xfv0J0dFTOi+xZ2BJhNhXukONV
    Po9jWNhPh1JyvhjvOWm8Mn24KcShpmiKdxKWzsEA9S5cN7RVtkMUOvcWrf8zQaKM
    +LmZ6/EBId2eLB94OxLB0n1VeFXsxNe42xUEGxieY/4LCG7laTvRdRWi4aCwK/VJ
    AgMBAAGjggI/MIICOzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH
    AwEwFwYDVR0RBBAwDoIMKi5jb3JwLnRhbnp1MB0GA1UdDgQWBBSb5YTxUuhj65vF
    ZwneQHFNhGJa+jAfBgNVHSMEGDAWgBTFAxSvY64Q5adhm8IYecHBAUuobzCB0wYD
    VR0fBIHLMIHIMIHFoIHCoIG/hoG8bGRhcDovLy9DTj1DT05UUk9MQ0VOVEVSLUNB
    LENOPWNvbnRyb2xjZW50ZXIsQ049Q0RQLENOPVB1YmxpYyUyMEtleSUyMFNlcnZp
    Y2VzLENOPVNlcnZpY2VzLENOPUNvbmZpZ3VyYXRpb24sREM9Y29ycCxEQz10YW56
    dT9jZXJ0aWZpY2F0ZVJldm9jYXRpb25MaXN0P2Jhc2U/b2JqZWN0Q2xhc3M9Y1JM
    RGlzdHJpYnV0aW9uUG9pbnQwgcEGCCsGAQUFBwEBBIG0MIGxMIGuBggrBgEFBQcw
    AoaBoWxkYXA6Ly8vQ049Q09OVFJPTENFTlRFUi1DQSxDTj1BSUEsQ049UHVibGlj
    JTIwS2V5JTIwU2VydmljZXMsQ049U2VydmljZXMsQ049Q29uZmlndXJhdGlvbixE
    Qz1jb3JwLERDPXRhbnp1P2NBQ2VydGlmaWNhdGU/YmFzZT9vYmplY3RDbGFzcz1j
    ZXJ0aWZpY2F0aW9uQXV0aG9yaXR5MCEGCSsGAQQBgjcUAgQUHhIAVwBlAGIAUwBl
    AHIAdgBlAHIwDQYJKoZIhvcNAQELBQADggIBAAJq4Ix7Kd+Nz9ksBsdLbYOITux3
    CznnBSALkUAu5aL5PfJM2ww0Z54aOO1PH74jxc/1GQ5MM+xdd12JRklwB76MLXST
    8gWrWp229rCDA57qR5NgPY44rRM935WnnMoopQjdJTBveYvzFs8202E6yf4REdsx
    RVr7T9fhPz/hkR3tblTdinKeMM1QLN4C2NUjeqXSciham6KpwfPvcB4Ifhfb0PP7
    aQ6xbeEyGCc7y2Hj/DP52o64shGvEj4nM72xQhHT/1huXUuX3b1FH1+c8luZsker
    s2hrbUwJiMaOP4dY1NhhLsJJMDHr9RZSEgNVl7XHtpMM0Qp4nYL4Xz6W85phqTgF
    n8yt+NOeYEt7zuA9kK1/RSTTErXXpNfwTiJWQg3GqYlQ+mfwmjbAaCZ8r802ueNI
    hXZjvRtg/uHyl/GYp/WVemygw1XUAUIosUOEY7v+rvvPurN9K0qgcD5zTl/bsV/y
    5EFc+Q0KzSIV5CLfejwVJs40QdupWffXHOYqm49zT8ejffEExUBxXH/b4rooumkc
    hpsrx5hbo/XJvS7ZbXCH/k8kDq8+9o4QEVjqYyVwA/F3+/Mv2ywGLwKY5B+WvJQt
    LrxsDU58LYfVcwKSuryS5Rv9Kh0tZcFH2zpzQJDgMoZqPqZHFxhiV+w4KAD7WQxd
    R22CcKK+kduUjv0X
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQC+dziVQ/U1LUaS
    1h6i6/TqVKhfB2TjBvy/P7fEHg8wBM9oyBgviJgktkFLX/rejRVPtaQ63I2YHyRO
    ClqKMEJQn8Cj0K+GBUsqDDeOEXJjg5Uk32BHI6y1lp/l8lNQFA4pYaHHk33WXov9
    UN5l8VGS5pK6lTo7nI4aTq9oJpQ+eRRGHiQaZQckrHXIPcdHstfnfy8uxdr+CFq/
    dYQ8jSCZXlAW6Rf2rY4QGs60TPTS8x/ZaexRPxBLJDdWcwoaUBnXGplBHCJa+ETk
    8NibHPcy/QBMPCaMY2JR+6CBBz1TfbsGuzYt2KvVVlhY2X4TV2UGs/GkkC6VBwIK
    0xTw5CTXHmzBY5zUeXsY03H3wldiNRdKYIeAOSrxh459o9d5rcWvFu8BrQbLDRZj
    f6taypxi48MDESNYO/Clktc7gNx3PB+u2KaDleCwN9bmLHWk0L3NekG/NQymomAL
    7iOAf/n+jghALzSF9wvxHt112QeY/CQLaU7QWrG8QBaZZ7b5Y345XIG7HSfpSjFC
    J1+MX79CdHRUzovsWdgSYTYV7pDjVT6PY1jYT4dScr4Y7zlpvDJ9uCnEoaZoincS
    ls7BAPUuXDe0VbZDFDr3Fq3/M0GijPi5mevxASHdniwfeDsSwdJ9VXhV7MTXuNsV
    BBsYnmP+Cwhu5Wk70XUVouGgsCv1SQIDAQABAoICAQCE843Ay943j3Iq/2IFUfX1
    OMELDItE2lTFX0H0mRL67vCk8L/JNm0Ve09awRXKEetlZ6LLH7eLD3n1K88FlShF
    RS5ga0SKpdlQ8ZQ6DD2v72LFiVOYdPOTEiBtj9jOFiHIiwk12ePGJttLKQ8FVA0g
    IOkdaxtqDx82h+RzLDLg5P3c8B89eXYiCGxzKYSYrON/Cc2ytZPnLYfDC9IRvmWa
    CTaYt37txzpaTYwqWWmwctuxlPnLwNyrxw0FwGm18mIHP97ojy4AGDtnICPjKrX3
    lpmFnZs+9gTku2PPjXEmfaZ2zWnFWPChi5NB+hfCgofXxPYRbD/H8UtgqPV+LZL0
    jP6SV1DTbSVl3AIrFtCwWGzpTYLN9kxqRNIlU3hHZorErXQo0GebOqwKEjP81TFQ
    oBcBeRoWTkzZAl4oqS2FywXPEuUP21ChjK0jjW/VN267A23cS4IBhR7Cn825eg7/
    ZRcW82/i9EfyxFXi1dTRBP98gEj2ls6coLmbPIZHNqG73DMxk9cI3LfjGGu3rbfR
    TbqaNzsznYluiHlVWzjk3JKLvCBonGSD6ZhJ5OZkak8+Vl+rzI+9rWBJTBZihlwy
    +QeHZGq9htTAQ5F7KO9Tn7D79e7wIwZf2DHMweRGX9dUfkOaHCwwcSjVH/yziXtR
    aJ85akugMq/BciUBrffl/QKCAQEA8mmyuAbVecVRpcRVvH6auEAkL0yLp8rfAnm0
    ToD/yQE2GOPSB4uucamPBd33wNQA5UDUc0GmqqOAN6ne2fT3ZtzebRobdpvuYUu1
    XWlCvribR1zLgsglF2gTlOwOO1exZOlwax8a93xeBN3cQBXTisfXoMPbE80DTEqi
    +CPNnMBXKP6JOabDaWD+XvfNEe7PlQcgL7BKiujj98Ui/J/ebnITgpksGDNyaUIG
    62RwZeQOka6i6SMuCaD+Da3LjFfg9MAyOC1hsfst2puTkkJE+tJHRHOMSj6bZuW1
    R6rwe6SqrnhaBdDdUzZmONOVaJBCrQ9YE3ZiECH3cS3OQvDDDwKCAQEAySQmb+R7
    Ouk9IOh1HAZkLMWAPZ6FsZ916iMPnwcv7UADocRCzz3QtQOcI7lv242rrhLhVG/H
    fZMZh9aAL0Wm8i1/mrpAIYyMiNQJvWs4aY2bb6rlIL9mR0iThbVZXfyeRLvfyMpy
    O6PWYt8WF9dWLE7v3bW5Maqtqc+OND+1TGWTd0eZSSQgnR1VeFVNZSFG88vpmJDR
    73POVbMEKpxYfe7heZ0dApcb/IA1a3Zqqz0cZ4uqu1dWehwtb40dYmaqswbY6ke8
    3HKGQSBmlxWUF7Nn9Zg79u5YVW2jLOoMgUv3dDGAOIHlC97soA6NtoH/VhzY635+
    8+sX3wktvgXiJwKCAQEAhDCzbrr7So4ZegXYoxN/F56SnOBm/7cXaWgotO6PjXMF
    pwkFDWxUUlMeVRq38gUp/9ocgEV6t261iqUtizmUeBlVibVE6KcblR8N5cRyy0Is
    Gvw1VjoCUANHOlyHXkDx0Y+i6CdsMy00r/60DpZYZ0OXCGoFW4TemYnR2PLdOu+A
    GDDFcBTKVvq3e94xi+foduIN4TOHUryxI/nynEQprZyzmvIgI4pah5+j2lVJHacB
    ctwCppOylTmfkKIHb560Y4MzX4MP1VidpqpUDNvqdcSZbHB+PjZp0/DLrCtBPIuN
    L9sdbDJ7ntb5Y1+uB/kzAuBtLR/PVfDP2H4cDlDwbQKCAQBaRz51REDHLT6BkbRW
    csvtiGvJvGfXVHIRN9FgGFK7ktrOdY9jAyS0yjz/j9CT459lzxWR12Xbh/WSkYUR
    Mpr+4cr/QI9eP34oP7traD92qNdWJIcYzq9yWTHVdpL461SCFy0XKz5gZGXqFKUO
    6FjGJFvm0BSiJTAzInR6IQoXkxPAGsPDH1MAEdV14BuPw4LcE+7xyjZf2kOHFYVO
    NsRFKb3L3ufRbM9j4ouXgxvXZeNk2jw0P7wRrKn8AoNo0hnVpsIfTTmIXGLDwm4p
    a8b/aEfF5KEtcMb2+PGfTCF2uwkC/uDE/BA45sKgCEg03V4kYWg/MpR6mE8rjSwZ
    uPxLAoIBAQCPo0w3rHZMdBmaM9TCWfgCvMNuwhlWfPjQnYgpiPrVqnue1RdMLthY
    eETTu+yxL+IiA5u46iJmM60p1nPw2IXWODJ634AbWZgNmZifqWr4Hm8DXCzrYBNp
    cwB/NhcyJGX1eTZdooiwzdqukU5TCDRnPxNPv+TVUFcIsdmPHSJlZgkXiKh0JMbN
    l8JjE2rjKUQc61kIoll+MNDoW5uCakeb5K0SRuxPHpGC1+6hzW3zQNa+kGd85b5y
    zkrdsBEJ8/YXb9g+lId2Qpaj1MO6lgJTwLUkKTsDZ9hBindmBVAlAnVQzRxKeald
    I2b/u5gfwdfn/3z+JNpdcdc1A4cX7Qdi
    -----END PRIVATE KEY-----
  # [Optional] the certificate of CA, this enables the download
  # link on portal to download the certificate of CA
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    MIIFazCCA1OgAwIBAgIQMfZy08muvIVKdZVDz7/rYzANBgkqhkiG9w0BAQsFADBI
    MRUwEwYKCZImiZPyLGQBGRYFdGFuenUxFDASBgoJkiaJk/IsZAEZFgRjb3JwMRkw
    FwYDVQQDExBDT05UUk9MQ0VOVEVSLUNBMB4XDTIwMDgxOTE3MjA0NFoXDTMwMDgx
    OTE3MzAzNVowSDEVMBMGCgmSJomT8ixkARkWBXRhbnp1MRQwEgYKCZImiZPyLGQB
    GRYEY29ycDEZMBcGA1UEAxMQQ09OVFJPTENFTlRFUi1DQTCCAiIwDQYJKoZIhvcN
    AQEBBQADggIPADCCAgoCggIBALKIdX7643PzvtVXlqNIwDuNq+rhcHF0fjR414j+
    1IGQUuXrykjhSDthPP+8BGN7mBgHT8AjAS1b95xc8B0S2Fhln3AoRE9z03GtfsBu
    FSBRUVwAifX6oXu97WzffhqPtxZfLJXbhOomjlkX6iffAs2TOLUx2Oj4w2vybhzj
    lcA70ai+0Sl6axSo3lMZ4KkuZ2WgfEcaDjjj33/pV3/bnFK+7ydPttc2Tek5xsI8
    XNMirIVxUiUT4YLy4WLiS200JUfbp1ZnMvnbQ8Jv1QnZl9W7WmBPcgxR4AAub0K4
    vZLXu6MXiboTlzkMB/YthCkTNlJcKkhHf60YR/T6Sx1T2nupyBa4deo5UGPzhRiJ
    pN37uqqAdK1qMDpCjARjS6U7Lf9JKjfiriLzLeyAjP8kaN4TdHSZd0pcQoZSxexQ
    9n+4E4MQm4EJ4DrVZCilsyL2BdETcHXKPc7q+Db4XM7jPKNG5GP1EMV4Xohv58yZ
    /rRfmK64gar8AMnOKT2AP681qdZs7lljONcXUALzlX5TqIchYT0DVQmFLYoMBeZz
    0l21QjbK0YWnPza6Yi/N4m6rFbEB4WXiqhYSkxzrMvocVUgd4AAP1vfHNnFEsnUR
    nSsiglFH/xlyO3cBFrmoZAxbA2091XHWhB4c0mQEI3hOqAB8UoFGBrQpmQ+LesoC
    1LZ9AgMBAAGjUTBPMAsGA1UdDwQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud
    DgQWBBTFAxSvY64Q5adhm8IYecHBAUuobzAQBgkrBgEEAYI3FQEEAwIBADANBgkq
    hkiG9w0BAQsFAAOCAgEAjg/v4mIP7gBVCw4pemtGn3PStDh/aB9vbWyjAyxSNaaH
    H0nID5q5wow9ueBiDfjTPnhbf3P768HG8oL/+9C+Vm/0liFBd+0/DaayKpANFMLB
    BV+s2adWRhQucLQfXPwum8RybWv82wkRkWCCdOBaAvAMuTgk08SwJIyQfVgpk3nY
    0OwjFwSAadvevf+LoD/9L8R9NEt/n4WJe+LtEamo9EVb+l+cYqyxyubAVY0Y6BM2
    GXqAh3FEW2aQMpwouh/5S7w5oSMYN6miY1ojki8gPm0+4+CILPWh/fr2q0O/bPtb
    Tr++nPMmZ8ov9epNGIuqhtk5ja2/JuY+RW46IRc8QpF1EyUae02E6U2Vacs7Gge2
    CeSINkoLFFmiKBfIn/HAchlme9aL6DlJ9wAreBDH3E8kH7gRDWbSK2/QD0Hqac+E
    geGHwpg/8OtBOHUMnM7eLOXBJFcJosWf0XnEgS4ubgaHgqDEu8p8PE7rpCxtUNur
    t+x2xONI/rBWgdbp51lPr7o819zPJCvYZq1Pp1st8fb3RlUSWvbQMPFtGAyaBy+G
    0RgZ9WPtyEYgnHAb5/Dq46sne9/QnPwwGpjv1s1oE3ZFQjhvnGis8+dqRxk3YZAk
    yiDghW7antzYL9S1CC8sVgVOwFJwfFXpdiir35mQlySG301V4FsRV+Z0cFp4Ni0=
    -----END CERTIFICATE-----
# Use contour http proxy instead of the ingress when it's true
enableContourHttpProxy: true

# [Required] The initial password of Harbor admin.
harborAdminPassword: VMware1!

And now we can create the needed secret from this file.

kubectl create secret generic harbor-data-values --from-file=values.yaml=harbor-data-values.yaml -n tanzu-system-registry

secret/harbor-data-values created

And finally we can deploy the Harbor extension.

kubectl apply -f tkg-extensions-v1.3.0+vmware.1/extensions/registry/harbor/harbor-extension.yaml

extension.clusters.tmc.cloud.vmware.com/harbor created
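
Harbor is a much larger deployment than the other extensions, so it can take several minutes for everything to reconcile. You can watch the pods come up while you wait:

kubectl -n tanzu-system-registry get po -w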

With Harbor deployed, we should now see an httpproxy resource in the tanzu-system-registry namespace.

kubectl -n tanzu-system-registry get httpproxy

NAME                      FQDN                       TLS SECRET   STATUS   STATUS DESCRIPTION
harbor-httpproxy          harbor.corp.tanzu          harbor-tls   valid    Valid HTTPProxy
harbor-httpproxy-notary   notary.harbor.corp.tanzu   harbor-tls   valid    Valid HTTPProxy

If we check out the logs for the external-dns pod, we’ll see that it has created the harbor.corp.tanzu and notary.harbor.corp.tanzu records.

kubectl -n tanzu-system-service-discovery logs external-dns-59d47f9588-whvnp

time="2021-03-30T22:54:43Z" level=info msg="Adding RR: harbor.corp.tanzu 0 A 192.168.220.2"
time="2021-03-30T22:54:43Z" level=info msg="Adding RR: notary.harbor.corp.tanzu 0 A 192.168.220.2"
time="2021-03-30T22:54:43Z" level=info msg="Adding RR: harbor.corp.tanzu 0 TXT \"heritage=external-dns,external-dns/owner=k8s,external-dns/resource=HTTPProxy/tanzu-system-registry/harbor-httpproxy\""
time="2021-03-30T22:54:43Z" level=info msg="Adding RR: notary.harbor.corp.tanzu 0 TXT \"heritage=external-dns,external-dns/owner=k8s,external-dns/resource=HTTPProxy/tanzu-system-registry/harbor-httpproxy-notary\""

And back in the DNS Manager application, we can see that the new records were successfully created.

As you would expect, harbor.corp.tanzu now resolves to 192.168.220.2, the IP address that NSX ALB assigned to Contour’s envoy service.

dig harbor.corp.tanzu

; <<>> DiG 9.16.6-Ubuntu <<>> harbor.corp.tanzu
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35418
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;harbor.corp.tanzu.             IN      A

;; ANSWER SECTION:
harbor.corp.tanzu.      0       IN      A       192.168.220.2

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Wed Mar 31 12:54:00 MDT 2021
;; MSG SIZE  rcvd: 62

Harbor is accessible via its FQDN and no certificate warning is thrown.
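
A quick command-line check validates both the DNS record and the certificate (assuming the CA that issued the wildcard certificate is in your local trust store):

curl -sI https://harbor.corp.tanzu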

Since there is now something listening at the other end of our Virtual Service, the status is no longer red in the NSX ALB UI.

What I wanted to do

While this is great, and certainly saves you the time of having to manually create DNS records, I was hoping to be able to pull this off with secure dynamic updates. There are instructions for this, but it turns out that the methods have not been implemented yet (based on a recent issue). I was able to put together how this would look once the feature is available in external-dns.

A configmap is needed to provide the Kerberos configuration that external-dns will use to establish secure communication with Microsoft DNS. The only customized values in it are the realm/domain name (corp.tanzu) and the AD/DNS server name (controlcenter.corp.tanzu, used for both the kdc and admin_server values).

apiVersion: v1
kind: ConfigMap
metadata:
  name: krb5.conf
data:
  krb5.conf: |
    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    dns_lookup_realm = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    rdns = false
    pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
    default_ccache_name = KEYRING:persistent:%{uid}

    default_realm = CORP.TANZU

    [realms]
    CORP.TANZU = {
      kdc = controlcenter.corp.tanzu
      admin_server = controlcenter.corp.tanzu
    }

    [domain_realm]
    corp.tanzu = CORP.TANZU
    .corp.tanzu = CORP.TANZU

The data-values file will look a little bit different. I’ve removed the rfc2136-insecure flag and added rfc2136-gss-tsig (telling external-dns to use the GSS-TSIG authentication mechanism that works with Microsoft DNS), rfc2136-kerberos-realm (the Kerberos realm, which matches the AD domain name), rfc2136-kerberos-username (an administrative user for the DNS zone) and rfc2136-kerberos-password (the password for that user). You’ll also notice that volumeMounts and volumes sections are present so that the external-dns pod can make use of the previously created configmap.

#@data/values
#@overlay/match-child-defaults missing_ok=True
---
externalDns:
  image:
    repository: projects.registry.vmware.com/tkg
  deployment:
    #@overlay/replace
    args:
    - --txt-owner-id=k8s
    - --provider=rfc2136
    - --rfc2136-host=controlcenter.corp.tanzu
    - --rfc2136-port=53
    - --rfc2136-zone=corp.tanzu
    - --rfc2136-gss-tsig
    - --rfc2136-kerberos-realm=corp.tanzu
    - --rfc2136-kerberos-username=administrator
    - --rfc2136-kerberos-password=VMware1!
    - --rfc2136-tsig-axfr
    - --source=service
    - --source=contour-httpproxy
    - --domain-filter=corp.tanzu
    volumeMounts:
    #@overlay/append
    - mountPath: /etc/krb5.conf
      name: kerberos-config-volume
      subPath: krb5.conf
    volumes:
    #@overlay/append
    - configMap:
        defaultMode: 420
        name: krb5.conf
      name: kerberos-config-volume
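
If the gss-tsig support were in place, wiring this up would follow the same pattern as before: create the configmap in the external-dns namespace and then recreate the external-dns-data-values secret from the updated file as shown earlier (krb5-configmap.yaml here is a hypothetical file containing the configmap above):

kubectl -n tanzu-system-service-discovery apply -f krb5-configmap.yaml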

Hopefully we’ll see external-dns updated in the future so that this functionality can be used.

UPDATE: This is now working in TKG 1.4, as it uses the 0.8.0 version of external-dns. You can read about it in the External DNS section of my newer post, Upgrading from TKG 1.3 to 1.4 (including extensions) on vSphere.
