Installing Harbor Image Registry to a vSphere 8 with Tanzu Supervisor Cluster

I’ve done a number of posts related to the Harbor image registry in the past but never one on enabling the Image Registry (Harbor) service on a vSphere with Tanzu supervisor cluster. I’ve done it on older versions of vSphere with Tanzu but decided to document the process in my recent vSphere 8 with Tanzu deployment.

You have to be using NSX-backed networking and have a supervisor created before you can enable the Image Registry service. You can read more about getting these out of the way in my previous two posts, vSphere 8 with Tanzu and NSX Installation and Supervisor Cluster Namespaces and Workloads.

Installing Harbor on vSphere 8 with Tanzu is one of the easier tasks to accomplish. You’ll find the option to enable “Image Registry” (Harbor) at Workload Management > Supervisors > your supervisor cluster (svc1 in this example) > Image Registry.

The only thing you can specify is which storage policy to use.
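Behind the scenes, the selected policy is surfaced to Kubernetes as a StorageClass (it shows up as k8s-policy in my lab), and Harbor’s persistent volume claims are provisioned against it. If you’re logged in to the supervisor with kubectl, you can list the available classes:

kubectl get storageclass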

As soon as you click the OK button, numerous tasks get kicked off in vCenter (this is only a sample):

And in a short amount of time you will see a new namespace created with several harbor-related VMs/pods (vSphere pods).
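If you’re logged in to the supervisor with kubectl, you can spot the new namespace from the command line as well (the numeric suffix is generated per environment):

kubectl get ns | grep vmware-system-registry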

After several minutes, the Image Registry page should show that Harbor has been deployed and is in a running state.

Also on this page is the URL where you can get to the Harbor UI (this IP address is taken from the ingress CIDR and allocated as a load balancer in NSX).

You can see from the URL bar in the screenshot that this is a trusted connection (lock icon). This is because my vCenter Server is a subordinate certificate authority to the Microsoft CA in my environment. The CA is trusted by all components in my lab, so the certificate generated for Harbor is trusted too. If you’re using a more default installation and relying on self-signed certificates, this will not be a trusted connection.
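If you want to verify the chain from the command line, something like the following (a quick sketch; substitute your own Harbor IP) will print the certificate’s issuer and subject:

openssl s_client -connect 10.40.14.68:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject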

You should be able to log in to Harbor as any user who has privileges on the supervisor cluster.

There is already a project created for the test namespace that was created earlier.

You can examine the objects created in the vmware-system-registry-192569521 namespace:

kubectl -n vmware-system-registry-192569521 get po,pv,pvc,svc -o wide
NAME                                                      READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
pod/harbor-192569521-harbor-core-545c5bd57d-6z8xj         1/1     Running   0          94m   10.244.0.38   esx-04a.corp.vmw   <none>           <none>
pod/harbor-192569521-harbor-database-0                    1/1     Running   0          94m   10.244.0.34   esx-04a.corp.vmw   <none>           <none>
pod/harbor-192569521-harbor-jobservice-6ddbc699f8-ln9jr   1/1     Running   0          94m   10.244.0.35   esx-04a.corp.vmw   <none>           <none>
pod/harbor-192569521-harbor-nginx-5965ddb478-f8q66        1/1     Running   0          94m   10.244.0.37   esx-04a.corp.vmw   <none>           <none>
pod/harbor-192569521-harbor-portal-59f6754f7b-mwgk2       1/1     Running   0          94m   10.244.0.36   esx-04a.corp.vmw   <none>           <none>
pod/harbor-192569521-harbor-redis-0                       1/1     Running   0          94m   10.244.0.39   esx-04a.corp.vmw   <none>           <none>
pod/harbor-192569521-harbor-registry-7cdf658cdd-7kwfx     2/2     Running   0          94m   10.244.0.40   esx-04a.corp.vmw   <none>           <none>
 
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                               STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/pvc-02b9f748-a235-4589-805c-1bc9c85ab4e0   6Gi        RWO            Delete           Bound    vmware-system-registry-192569521/data-harbor-192569521-harbor-redis-0               k8s-policy              94m   Filesystem
persistentvolume/pvc-1950707f-92ad-4281-b254-be53f1651298   180Gi      RWO            Delete           Bound    vmware-system-registry-192569521/harbor-192569521-harbor-registry                   k8s-policy              94m   Filesystem
persistentvolume/pvc-2b3ea0a5-bcf0-44ab-97f3-55f8734552a7   8Gi        RWO            Delete           Bound    vmware-system-registry-192569521/database-data-harbor-192569521-harbor-database-0   k8s-policy              94m   Filesystem
persistentvolume/pvc-a0082f4f-c2cd-4c96-a9ab-6ae28f3a30cd   6Gi        RWO            Delete           Bound    vmware-system-registry-192569521/harbor-192569521-harbor-jobservice                 k8s-policy              94m   Filesystem
 
NAME                                                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
persistentvolumeclaim/data-harbor-192569521-harbor-redis-0               Bound    pvc-02b9f748-a235-4589-805c-1bc9c85ab4e0   6Gi        RWO            k8s-policy     94m   Filesystem
persistentvolumeclaim/database-data-harbor-192569521-harbor-database-0   Bound    pvc-2b3ea0a5-bcf0-44ab-97f3-55f8734552a7   8Gi        RWO            k8s-policy     94m   Filesystem
persistentvolumeclaim/harbor-192569521-harbor-jobservice                 Bound    pvc-a0082f4f-c2cd-4c96-a9ab-6ae28f3a30cd   6Gi        RWO            k8s-policy     94m   Filesystem
persistentvolumeclaim/harbor-192569521-harbor-registry                   Bound    pvc-1950707f-92ad-4281-b254-be53f1651298   180Gi      RWO            k8s-policy     94m   Filesystem
 
NAME                                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE   SELECTOR
service/harbor-192569521                     LoadBalancer   10.96.1.158   10.40.14.68   443:31677/TCP       97m   app=harbor,component=nginx
service/harbor-192569521-harbor-core         ClusterIP      10.96.0.31    <none>        80/TCP              94m   app=harbor,component=core,release=harbor-192569521
service/harbor-192569521-harbor-database     ClusterIP      10.96.1.59    <none>        5432/TCP            94m   app=harbor,component=database,release=harbor-192569521
service/harbor-192569521-harbor-jobservice   ClusterIP      10.96.1.228   <none>        80/TCP              94m   app=harbor,component=jobservice,release=harbor-192569521
service/harbor-192569521-harbor-portal       ClusterIP      10.96.0.208   <none>        80/TCP              94m   app=harbor,component=portal,release=harbor-192569521
service/harbor-192569521-harbor-redis        ClusterIP      10.96.1.227   <none>        6379/TCP            94m   app=harbor,component=redis,release=harbor-192569521
service/harbor-192569521-harbor-registry     ClusterIP      10.96.1.43    <none>        5000/TCP,8080/TCP   94m   app=harbor,component=registry,release=harbor-192569521

In NSX, a new load balancer is created to supply the external IP address for Harbor.

Looking at the associated virtual server you can see the 10.40.14.68 IP address noted on the Image Registry page and in the LoadBalancer service via kubectl get svc.

The associated server pool shows the IP address of the harbor-nginx pod.
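You can cross-check that pod IP from kubectl by filtering on the same labels the LoadBalancer service selects on (app=harbor,component=nginx, as seen in the service output above):

kubectl -n vmware-system-registry-192569521 get po -l app=harbor,component=nginx -o wide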

When Workload Management was initially deployed, you should have opened a browser to the IP address (or FQDN) of the API endpoint for the cluster (10.40.14.66, wcp.corp.vmw in this example) so that you could download the kubectl vSphere plugin. If you scroll farther down that same page, you will find information about downloading a docker credential helper.

Use the Select Operating System dropdown to set your target OS and then download the appropriate docker credential helper. You should have a file named vsphere-docker-credential-helper.zip, which has the docker-credential-vsphere executable file in it. Copy this to the system where you will be running docker commands (and make it executable if on Linux).
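On Linux, that amounts to something like the following (a minimal sketch; /usr/local/bin is just one convenient location already on most PATHs):

unzip vsphere-docker-credential-helper.zip
chmod +x docker-credential-vsphere
sudo mv docker-credential-vsphere /usr/local/bin/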

You can use the docker credential helper file to securely log in to the recently deployed Harbor instance.

kubectl config use-context test
Switched to context "test".
 
docker-credential-vsphere login 10.40.14.68 -u vmwadmin@corp.vmw
Password:
INFO[0010] Fetched username and password
INFO[0011] Fetched auth token
INFO[0011] Saved auth token

As noted earlier, the certificate created for Harbor is signed by my internal CA and trusted throughout my lab. If you’re using self-signed certificates, accessing docker at the command line will probably fail with an error similar to “certificate signed by unknown authority”. To work around this, you’ll need to import your certificate so that the docker CLI will trust it. This can easily be remedied in two steps:

mkdir -p /etc/docker/certs.d/10.40.14.68
cp <ca.crt> /etc/docker/certs.d/10.40.14.68/ca.crt

(Replace 10.40.14.68 with the IP of your Harbor installation and <ca.crt> with the path to your CA certificate file. You will likely need root privileges to write under /etc/docker.)

To demonstrate the functionality of this image registry, we can redeploy the nginx instance used in my previous post after reconfiguring it to pull the needed container images from Harbor. We’ll need to pull the appropriate nginx and ubuntu images first, tag them for the test project, and then push them up to Harbor.

docker pull nginx:1.17.6
1.17.6: Pulling from library/nginx
8ec398bc0356: Pull complete
465560073b6f: Pull complete
f473f9fd0a8c: Pull complete
Digest: sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2
Status: Downloaded newer image for nginx:1.17.6
docker.io/library/nginx:1.17.6
 
docker pull ubuntu:bionic
bionic: Pulling from library/ubuntu
0c5227665c11: Pull complete
Digest: sha256:8aa9c2798215f99544d1ce7439ea9c3a6dfd82de607da1cec3a8a2fae005931b
Status: Downloaded newer image for ubuntu:bionic
docker.io/library/ubuntu:bionic

docker tag nginx:1.17.6 10.40.14.68/test/nginx:1.17.6
docker push 10.40.14.68/test/nginx:1.17.6
The push refers to repository [10.40.14.68/test/nginx]
75248c0d5438: Pushed
49434cc20e95: Pushed
556c5fb0d91b: Pushed
1.17.6: digest: sha256:36b77d8bb27ffca25c7f6f53cadd059aca2747d46fb6ef34064e31727325784e size: 948
 
docker tag ubuntu:bionic 10.40.14.68/test/ubuntu:bionic
docker push 10.40.14.68/test/ubuntu:bionic
The push refers to repository [10.40.14.68/test/ubuntu]
b7e0fa7bfe7f: Pushed
bionic: digest: sha256:08017e3a80639cb457e92c5a39c6523e1d052bd7ead3ece10e83fbfc3001eb7b size: 529

You can check the test project in Harbor to see that nginx and ubuntu are now present.

To be able to pull from Harbor, you will need to reference an image pull secret in the pod spec. Whenever a new project is created in Harbor, image pull and push secrets are created automatically in the associated namespace.

kubectl -n test get secrets
NAME                             TYPE                             DATA   AGE
test-default-image-pull-secret   kubernetes.io/dockerconfigjson   1      21m
test-default-image-push-secret   kubernetes.io/dockerconfigjson   1      21m
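These are standard dockerconfigjson secrets. If you’re curious what one contains, you can decode it (the key name starts with a dot, hence the backslash escape in the jsonpath):

kubectl -n test get secret test-default-image-pull-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d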

The YAML definition (with-pv.yaml) for the nginx deployment will need to be modified: the image locations should be updated to point at Harbor, and the image pull secret needs to be specified. The following is the relevant section of with-pv.yaml after the changes have been made.

spec:
  volumes:
    - name: nginx-logs
      persistentVolumeClaim:
        claimName: nginx-logs
  containers:
  - image: 10.40.14.68/test/nginx:1.17.6
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
      - mountPath: "/var/log/nginx"
        name: nginx-logs
        readOnly: false
  - image: 10.40.14.68/test/ubuntu:bionic
    name: fsfreeze
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: "/var/log/nginx"
        name: nginx-logs
        readOnly: false
    command:
      - "/bin/bash"
      - "-c"
      - "sleep infinity"
  imagePullSecrets:
  - name: test-default-image-pull-secret

Note: You can see the entire, original with-pv.yaml file in my previous post, Supervisor Cluster Namespaces and Workloads.

The nginx deployment can now be created.

kubectl apply -f with-pv.yaml
persistentvolumeclaim/nginx-logs created
deployment.apps/nginx-deployment created
service/my-nginx created

Similar to what was done previously, you can check to see that the key objects have been created.

kubectl get po,pv,pvc,svc -o wide
NAME                                    READY   STATUS    RESTARTS   AGE    IP            NODE               NOMINATED NODE   READINESS GATES
pod/nginx-deployment-555bf4ff65-wxg98   2/2     Running   0          2m2s   10.244.0.34   esx-03a.corp.vmw   <none>           <none>
 
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                               STORAGECLASS   REASON   AGE     VOLUMEMODE
persistentvolume/pvc-3ed696eb-6038-42ef-a32a-206fa413773b   50Mi       RWO            Delete           Bound    test/nginx-logs                                                                     k8s-policy              2m1s    Filesystem
persistentvolume/pvc-7902011d-b238-461a-a0b2-7e2af4e82e21   6Gi        RWO            Delete           Bound    vmware-system-registry-669319617/data-harbor-669319617-harbor-redis-0               k8s-policy              7h52m   Filesystem
persistentvolume/pvc-b2f415f7-61fb-47f2-ab3e-fd5ac2d2ba8d   8Gi        RWO            Delete           Bound    vmware-system-registry-669319617/database-data-harbor-669319617-harbor-database-0   k8s-policy              7h52m   Filesystem
persistentvolume/pvc-cdfdaa1d-17d9-4b45-978f-9cc2e7fa0ea0   6Gi        RWO            Delete           Bound    vmware-system-registry-669319617/harbor-669319617-harbor-jobservice                 k8s-policy              7h52m   Filesystem
persistentvolume/pvc-ff1bfc40-fd2e-4e2f-a7a5-5e0f71e08273   180Gi      RWO            Delete           Bound    vmware-system-registry-669319617/harbor-669319617-harbor-registry                   k8s-policy              7h52m   Filesystem
 
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE    VOLUMEMODE
persistentvolumeclaim/nginx-logs   Bound    pvc-3ed696eb-6038-42ef-a32a-206fa413773b   50Mi       RWO            k8s-policy     2m2s   Filesystem
 
NAME               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/my-nginx   LoadBalancer   10.96.0.133   10.40.14.69   80:32296/TCP   2m2s   app=nginx

And taking a closer look at the actual deployment, you can see where the container images came from.

kubectl get deployment nginx-deployment -o jsonpath="{..image}"
10.40.14.68/test/nginx:1.17.6
10.40.14.68/test/ubuntu:bionic

Back in the Harbor UI, you can see that the pull count has increased from 0 now that these images have been used.
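If you’d rather check from the command line, the Harbor REST API reports the same counters. This is a sketch that assumes the embedded Harbor exposes the standard v2.0 API on the same address (substitute your own credentials for the placeholder); each repository in the response includes a pull_count field:

curl -sk -u 'vmwadmin@corp.vmw:<password>' https://10.40.14.68/api/v2.0/projects/test/repositories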

When you no longer need to push to or pull from Harbor, you can log out.

docker-credential-vsphere logout 10.40.14.68
INFO[0000] Deleted auth token

You don’t have to use the docker credential helper to access Harbor. You can use a simple docker login, but it’s a much less secure way of using the registry.

docker login 10.40.14.68 -u vmwadmin@corp.vmw
Password:
WARNING! Your password will be stored unencrypted in /home/ubuntu/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
 
Login Succeeded

You can see from the output of docker login that the credentials are stored unencrypted and a credential helper is recommended (good thing we have one).

If you inspect the noted /home/ubuntu/.docker/config.json file, you can easily recover the credentials used to log in to Harbor.

cat /home/ubuntu/.docker/config.json
{
        "auths": {
                "10.40.14.68": {
                        "auth": "dm13YWRtaW5AY29ycC52bXc6Vk13YXJlMSE="
                }
        }
}
 
echo -n "dm13YWRtaW5AY29ycC52bXc6Vk13YXJlMSE=" | base64 -d
vmwadmin@corp.vmw:VMware1!

One last thing to point out is that every time you create a new namespace under Workload Management, an associated project gets created in Harbor.

And conversely, when a namespace is deleted, the associated Harbor project is also deleted.
