TKGI 1.9 with Windows workers

Windows workloads on Kubernetes have been supported as a beta feature in Tanzu Kubernetes Grid Integrated Edition for some time, but with the release of TKGI 1.9 it’s finally a GA feature. It is only supported on vSphere with NSX-T networking, though, so keep that in mind before heading down this road. I’ll walk you through some of the process of getting this kind of deployment up and running.

Configure NSX-T Resources

I’m starting out with a vSphere 7.0 environment with NSX-T 3.0.2 deployed already. I have an edge cluster and a host cluster already configured in NSX-T so there is only a little bit of prep work to do there.

Bear in mind that this is a very simplistic configuration and you may want to diverge from it significantly in a production environment. This is fairly minimal but good for a proof-of-concept type of installation, or a simple lab setup.

In the NSX-T Manager, navigate to System, Settings, User Interface Settings.

Click the Edit link, set Toggle Visibility to Visible to All Users and set Default Mode to Manager. We need to do this because TKGI still only uses the Manager API, so any objects that it will use have to be created in that interface.

Click the Save button.

Navigate to Networking, IP Management, IP Address Pools, IP Pools. Click the Add button. Create an IP pool that will be used for VIP/Load Balanced addresses.

Click the Add button.

Navigate to Networking, IP Management, IP Address Pools, IP Blocks. Click the Add button. Designate a subnet of IP addresses to be used for Kubernetes nodes.

Click the Add button.

On the IP Blocks page, click the Add button again to create another IP Block. Specify a subnet to be used for Kubernetes pods.

Click the Add button.
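If you prefer to script or double-check this part, the same objects can be created through the Manager API that TKGI itself uses. This is only a rough sketch, assuming the nsxmanager.corp.tanzu hostname and admin credentials that appear later in this post; the display names match the ones used later, but the CIDRs and VIP range below are placeholders you would replace with your own values.

# Hypothetical example: create the node IP block (CIDR is a placeholder)
curl -k -X POST -u 'admin:VMware1!VMware1!' -H 'Content-Type: application/json' \
  https://nsxmanager.corp.tanzu/api/v1/pools/ip-blocks \
  -d '{"display_name": "ip-block-nodes-deployments", "cidr": "172.15.0.0/16"}'

# Hypothetical example: create the VIP/load balancer IP pool (range is a placeholder)
curl -k -X POST -u 'admin:VMware1!VMware1!' -H 'Content-Type: application/json' \
  https://nsxmanager.corp.tanzu/api/v1/pools/ip-pools \
  -d '{"display_name": "ip-pool-vips", "subnets": [{"cidr": "10.40.14.0/24", "allocation_ranges": [{"start": "10.40.14.32", "end": "10.40.14.63"}]}]}'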

Navigate to Networking, Connectivity, Logical Switches, Switches and click the Add button. Create a new logical switch that will function as the uplink for the Tier-0 Router (I already have a VLAN-backed transport zone created named vlan-tz, which uses VLAN 210).

Click the Add button.

On the Switches page, click the Add button to create another Logical Switch. Create a new logical switch where the TKGI control plane components will reside (I already have an overlay transport zone named overlay-tz created).

Click the Add button.

Navigate to Networking, Connectivity, Tier-0 Logical Routers and click the Add button. Create a new Tier-0 logical router.

Click the Add button.

Select the t0-tkgi Tier-0 Logical Router. Navigate to Configuration, Router ports and click the Add button. Create a new router port to provide upstream connectivity.

Click the Add button under Subnets. Specify an IP address and mask for this interface. I’m working on the 192.168.210.0/24 network in this example.

Click the Add button.

While on the t0-tkgi page, navigate to Routing, BGP and click the Edit button. You can skip this and just create a static route if you don’t want to (or can’t) configure BGP in your environment.

Click the Save button.

Click the Add button under Neighbors. Configure your BGP peer as appropriate.

Click on the Local Address tab.

Click the Add button.

While on the t0-tkgi page, navigate to Routing, Route Redistribution and click the Edit button.

Click the Save button.

Click the Add button and enter the appropriate types of routes to be advertised.

Click the Add button.

Navigate to the Networking, Connectivity, Tier-1 Logical Routers and click the Add button. Create a new Tier-1 logical router to be used by the TKGI management plane components.

Click the Add button.

Select the t1-tkgi-mgmt Logical Router. Navigate to Configuration, Router Ports and click the Add button. Create a new port to provide upstream connectivity.

Click the Add button under Subnets. Configure the router interface as appropriate.

Click the Add button.

On the t1-tkgi-mgmt page, navigate to Routing, Route Advertisement and click the Edit button.

Click the Save button.

Since this is a NAT’d deployment, several NAT rules need to be configured to provide connectivity into the components that will reside on the ls-tkgi-mgmt network. Navigate to Networking, Network Services, NAT. Ensure that the t0-tkgi Logical Router is selected and then click the Add button. Configure a DNAT rule.

Click the Add button.

Repeat this for all other systems that need a NAT rule. In my environment, this is 172.31.0.2-172.31.0.6 (mapping to 10.40.14.2-10.40.14.6). This will allow access to the BOSH, Opsman, TKGI DB, TKGI API and Harbor VMs.

An SNAT rule is also needed for outbound traffic from the 172.31.0.0/24 network.

Click the Add button.
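For reference, the DNAT and SNAT rules can also be created against the Manager API rather than the UI. A minimal sketch, assuming the admin credentials used elsewhere in this post; <t0-router-id> is the Tier-0 router ID (retrieved via the API later in this post) and <routable-snat-ip> is a placeholder for a routable address on your uplink network.

# Hypothetical example: DNAT rule exposing Opsman (172.31.0.3) on 10.40.14.3
curl -k -X POST -u 'admin:VMware1!VMware1!' -H 'Content-Type: application/json' \
  https://nsxmanager.corp.tanzu/api/v1/logical-routers/<t0-router-id>/nat/rules \
  -d '{"action": "DNAT", "match_destination_network": "10.40.14.3", "translated_network": "172.31.0.3"}'

# Hypothetical example: SNAT rule for outbound traffic from the management network
curl -k -X POST -u 'admin:VMware1!VMware1!' -H 'Content-Type: application/json' \
  https://nsxmanager.corp.tanzu/api/v1/logical-routers/<t0-router-id>/nat/rules \
  -d '{"action": "SNAT", "match_source_network": "172.31.0.0/24", "translated_network": "<routable-snat-ip>"}'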

With all of this done on the NSX-T side, you should see an NSX-T backed portgroup in the Networking view of the vSphere Client.

Now we can get to work on deploying the components needed for TKGI. We’ll start with VMware Tanzu Operations Manager, or Opsman for short.

Deploying Tanzu Operations Manager

You can download the latest Opsman ova from Pivnet. At the time of this writing, that version is 2.10.2. Deploying the ova to vSphere is fairly straightforward if you’ve ever deployed an ova before. I’ll be deploying to the ls-tkgi-mgmt portgroup and using the 172.31.0.3 IP address (which has a NAT rule translating it to 10.40.14.3 and a DNS record mapping opsman.corp.tanzu to 10.40.14.3).

The Public SSH Key value that you provide will be the only means of logging in to the Opsman VM if you ever need to do so. Be sure to save the private key corresponding to this public key somewhere safe.
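If you’d rather script the deployment than click through the wizard, ovftool can do it. This is only a sketch: the OVA file name, datastore, cluster path, gateway, DNS/NTP values and the source network name ("Network 1") are all assumptions or placeholders, and the --prop names are simply the ones recent Opsman OVAs have exposed, so probe the OVA first (ovftool <ova file>) to confirm them before relying on this.

ovftool --acceptAllEulas \
  --name=opsman --datastore=<datastore> \
  --net:"Network 1"=ls-tkgi-mgmt \
  --prop:ip0=172.31.0.3 --prop:netmask0=255.255.255.0 --prop:gateway=172.31.0.1 \
  --prop:DNS=<dns-server> --prop:ntp_servers=<ntp-server> \
  --prop:custom_hostname=opsman.corp.tanzu \
  --prop:public_ssh_key="$(cat ~/.ssh/id_rsa.pub)" \
  ops-manager-vsphere-2.10.2.ova \
  'vi://administrator@vsphere.local@vcsa-01a.corp.tanzu/RegionA01/host/<cluster>'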

When the deployment is done, you can power on the Opsman VM and then after a few minutes you can access it via a web browser at its NAT’d IP address or FQDN if you specified a DNS record.

Click on the Internal Authentication button and then complete the information as appropriate.

Click the Setup Authentication button.

Login as the user that was just created.

If you’ve used Opsman before you’ll know that the orange bar at the bottom of the BOSH tile means that we need to configure it. Go ahead and click on the BOSH tile to get started.

On the vCenter Config page there are a number of items that have to be configured. Complete them as appropriate for your environment.

You’ll need to get a copy of the certificate in use by NSX Manager to paste into the NSX CA Cert field.
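One quick way to grab it, assuming the nsxmanager.corp.tanzu hostname used throughout this post:

openssl s_client -connect nsxmanager.corp.tanzu:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > nsx-manager.crt

Paste the contents of nsx-manager.crt into the NSX CA Cert field.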

Click the Save button.

Click on the Director Config link and complete the form as appropriate. You can see from the example below that I’m not changing very much…I’ve set the NTP server address, enabled the resurrector plugin, post-deploy scripts and the recreation of BOSH-deployed VMs.

Click the Save button.

Click on the Create Availability Zones tab. Click the Add button and create zones as appropriate for your environment.

Click the Save button.

Click on the Create Networks link and then click the Add Network button. Once again, create networks as appropriate. In my example, I’m reserving the IP addresses used by the gateway, the Opsman VM and a Linux VM that I have deployed.

Click the Save button.

Click on the Assign AZs and Networks link and configure the settings as appropriate.

Click the Save button.

The rest of the settings can be configured per your preferences. I usually configure the Include Tanzu Ops Manager Root CA in Trusted Certs setting on the Security link and reduce the size of the VMs on the Resource Config link.

We need to do a couple of things before deploying the TKGI tile to Opsman. The first will be to create a Windows stemcell and the second will be to configure a principal identity in NSX-T.

Create a Windows stemcell

You can’t download a Windows stemcell from Pivnet but you can download the stembuild utility and create your own Windows stemcell. You’ll also need the Local Group Policy Object Utility from Microsoft and an ISO for a Windows Server 2019 Server Core installation, build 17763. You can find complete documentation for this process at Creating a Windows Stemcell for vSphere Using Stembuild but I’ll just do a quick walkthrough here.

You’ll want to upload your Windows ISO file to a datastore that will be accessible to a base VM we’ll be creating. With this in place, you can begin creating the VM that will become our stemcell.
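If you have govc handy, the ISO upload can be done from the command line instead of the datastore browser. A sketch only, assuming govc is installed and that the datastore name and ISO file name below are placeholders for your own:

export GOVC_URL=vcsa-01a.corp.tanzu GOVC_USERNAME=administrator@vsphere.local \
       GOVC_PASSWORD='VMware1!' GOVC_INSECURE=true
# Upload the local ISO to an iso/ folder on the target datastore (names are placeholders)
govc datastore.upload -ds=<datastore-name> ./<windows-server-2019>.iso iso/windows-server-2019.iso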

Be sure to set the compatibility to ESXi 6.0 and later.

Click the Finish button and then power on the VM. Launch the Web Console or Remote Console for this VM and walk through the installation process. We’re only going to do a Standard installation and mostly click Next through the wizard.

This is the point where you’ll be waiting for a bit. When Windows is installed, you will be prompted to set a password for the Administrator user.

Now we need to install VMware Tools. Back in the vSphere Client, right-click the VM, hover over the Guest OS menu and then select Install VMware Tools. Click the Mount button on the confirmation dialogue screen. Back in the console, type d:\setup64.exe and then minimize the command prompt window. You can accept all of the default settings during the installation wizard.

The VM will reboot when the installation is finished. Log back in as the administrator user when it comes back up.

Run the sconfig utility to configure a static IP address.

Choose Option 4 in sconfig to return to the main menu and then choose Option 5 to configure updates.

Select Option 6 to start downloading and installing updates.

The system will reboot when the updates are done installing. Log in again and repeat this process until there are no further updates found to install.

Make sure that there is no CD-ROM device attached to the VM back in the vSphere Client. Shutdown the Guest OS on the VM and make a clone of it (to a new VM).
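If you prefer the command line, the clone can also be done with govc. A quick sketch; the source VM name here (windows-2019-base) is hypothetical, and the clone takes the windows-stemcell name used in the stembuild commands below:

# Clone the prepped base VM to a new VM that stembuild will operate on (source/target names are examples)
govc vm.clone -vm windows-2019-base -folder /RegionA01/vm windows-stemcell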

When the clone operation is done, we’re ready to move on to actually building the stemcell. Make sure the LGPO.zip file downloaded earlier is in the current directory. You’ll notice that I’m passing in the optional -vcenter-ca-certs flag just to be sure we have no issues with authentication later. If you choose to do this, you’ll need to download the certificate from your vCenter Server first.

stembuild construct -vm-ip '192.168.110.101' -vm-username administrator -vm-password 'VMware1!' -vcenter-url vcsa-01a.corp.tanzu -vcenter-username administrator@vsphere.local -vcenter-password 'VMware1!' -vm-inventory-path '/RegionA01/vm/windows-stemcell' -vcenter-ca-certs 'vcsa-01a.crt'

You’ll get pages of output from this and the system will reboot. The process as a whole can take up to an hour to complete. The VM should be in a powered-off state when the stembuild command exits. When done, you can move on to the second step of the stembuild process…packaging the stemcell into a format that can be imported into Opsman.

stembuild package -vcenter-url vcsa-01a.corp.tanzu -vcenter-username administrator@vsphere.local -vcenter-password 'VMware1!' -vm-inventory-path '/RegionA01/vm/windows-stemcell' -vcenter-ca-certs vcsa-01a.crt

Once this starts, you’ll see an Export OVF template task in the vSphere Client. It might look like nothing’s happening since this task doesn’t show any progress, but don’t worry…it’s exporting a large vmdk file in the background. Eventually, you should see a file similar to bosh-stemcell-2019.26-vsphere-esxi-windows.2019-go_agent.tgz in the current directory.

Create a Principal Identity in NSX-T

The Principal Identity is what TKGI will use to communicate with NSX-T. Authentication is certificate-based, so we need to create a certificate for this purpose.

From a system with openssl installed, run a command similar to the following to create a new certificate and private key:

openssl req -newkey rsa:4096 -extensions usr_cert -nodes -keyout tkgi-nsx-t-superuser.key -x509 -days 365 -out tkgi-nsx-t-superuser.crt -subj "/CN=tkgi-nsx-t-superuser" -sha256

Generating a RSA private key
.......................................++++
.............................................++++
writing new private key to 'tkgi-nsx-t-superuser.key'
-----

You should have tkgi-nsx-t-superuser.crt and tkgi-nsx-t-superuser.key files present in your current directory.
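You can sanity-check the generated certificate before registering it:

openssl x509 -in tkgi-nsx-t-superuser.crt -noout -subject -enddate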

In the NSX-T Manager UI, navigate to System, Settings, Users and Roles. Click the Add dropdown menu and then select Principal Identity with Role. Fill out the information as appropriate and paste the contents of the tkgi-nsx-t-superuser.crt file in the Certificate PEM field. The Role should be Enterprise Admin and the Node Id value is arbitrary.

Click the Save button.
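If you’d rather register the principal identity via the API than the UI, NSX-T exposes an endpoint for that as well. A rough sketch, assuming the admin credentials used earlier in this post; the node_id value is arbitrary, and the awk line just flattens the certificate into a single JSON-friendly string.

# Convert the PEM file into a single line with literal \n sequences for the JSON body
PI_CERT=$(awk '{printf "%s\\n", $0}' tkgi-nsx-t-superuser.crt)
curl -k -X POST -u 'admin:VMware1!VMware1!' -H 'Content-Type: application/json' \
  https://nsxmanager.corp.tanzu/api/v1/trust-management/principal-identities/with-certificate \
  -d '{"name": "tkgi-nsx-t-superuser", "node_id": "tkgi", "role": "enterprise_admin", "certificate_pem": "'"$PI_CERT"'"}'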

Configure the TKGI tile

Download the TKGI 1.9 tile from Pivnet. In Opsman, click on the Installation Dashboard link and then click on the Import a Product link.

Select the pivotal-container-service-1.9.0 tile file and wait for the import to finish.

Click on the + icon under Tanzu Kubernetes Grid Integrated Edition to add the tile.

Click on the TKGI tile to start the configuration process. You’ll be dropped onto the Assign AZs and Networks page where you can configure the appropriate availability zones.

Click the Save button.

Click on the TKGI API link. Click on the Generate RSA Certificate link (unless you already have a certificate) and enter an appropriate *.<domain suffix> value.

Click the Generate button. Configure the API Hostname and Worker VM Max in Flight settings as appropriate.

Click the Save button.

Click on Plan 11 and then click the radio button next to Active. Windows-based plans have to be 11, 12, or 13. Complete the rest of the fields as appropriate.

Click the Save button.

You can configure any other plans you’d like to before moving on.

Click on the Kubernetes Cloud Provider link and then click the radio button next to vSphere. Enter the appropriate vSphere information.

Click the Save button.

Click on the Networking link and then click the radio button next to NSX-T. You’ll need to have the Principal Identity certificate and private key handy as well as the NSX-T Manager certificate. We’ll also need to collect a few items that will help TKGI identify the various NSX-T objects that it will make use of. From a Linux system with access to the NSX-T manager, run commands similar to the following and save the output:

curl -k -X GET -u 'admin:VMware1!VMware1!' https://nsxmanager.corp.tanzu/api/v1/pools/ip-blocks | egrep "\"id\"|display_name"

    "id" : "f21cccff-01ce-4628-a334-4427e4e01aed",
    "display_name" : "ip-block-nodes-deployments",
    "id" : "c4ed16db-c1ba-4229-a0ed-4234dd49cade",
    "display_name" : "ip-block-pods-deployments",

curl -k -X GET -u 'admin:VMware1!VMware1!' 'https://nsxmanager.corp.tanzu/api/v1/logical-routers?router_type=TIER0' | grep '"id"'

    "id" : "a5577047-2a7d-4389-ac98-c049f903c05c",

curl -k -X GET -u 'admin:VMware1!VMware1!' https://nsxmanager.corp.tanzu/api/v1/pools/ip-pools | egrep "\"id\"|display_name"

    "display_name" : "Edge-TEP-IP-Pool",
    "id" : "99ab92b8-37f8-48c0-b9b4-0f294588452f",
    "display_name" : "Host-TEP-IP-Pool",
    "id" : "1d8e8606-0efc-4ea2-aae0-dd5c83f68138",
    "display_name" : "SI_Destination_IP_Pool",
    "id" : "965d253c-ebab-4dfa-b7a5-ba663137c749",
    "display_name" : "SI_Service_Chain_ID_IP_Pool",
    "id" : "e3943313-53e2-4a41-95e1-01061597a0dd",
    "display_name" : "SI_Source_IP_Pool",
    "id" : "60e0195a-9034-405d-846a-0fc1ef2f178c",
    "display_name" : "ip-pool-vips",

Click the Save button.

You have to choose an option on the CEIP and Telemetry link but this is really your choice. All other items should be ready to go. I typically reduce the size of the VMs on the Resource Config tab as well.

Click on the Installation Dashboard link…you’ll see that the TKGI tile still has the orange banner at the bottom and now has a Missing Stemcell link on it.

Click on this link and proceed to upload the Windows stemcell that was created earlier.

Click on the Import Stemcell button. Specify the proper stemcell file and wait for the import to complete. Apply the stemcell.

Configure the Harbor tile

The last step is to configure the Harbor tile. You can download the latest tile from Pivnet (2.0.3 as of this writing) and add it in the same fashion as was done for the TKGI tile (click the Import A Product button on the Installation Dashboard). When the import is done, click the + icon under VMware Harbor Registry to add the tile.

Click on the Harbor tile. You should be placed on the Assign AZs and Networks page. Configure the AZs and Network as appropriate.

Click the Save button.

Click on the General link. Set the Harbor hostname and a static IP address if desired.

Click the Save button.

Click on the Certificate link. Click on the Generate RSA Certificate link (or provide a certificate, key and CA cert if you already have them). Enter an appropriate name to include in the certificate.

Click the Generate button.

Click the Save button.

Click on the Credentials link and set the Harbor admin password.

Click the Save button.

As we saw with the BOSH and TKGI tiles, the rest of the items here are already configured but can be customized. When you’re ready to proceed, click on the Installation Dashboard link.

Click on the Review Pending Changes button. We can leave all items as they are and just click the Apply Changes button.

You’ll be able to follow the progress of the installation in the Opsman UI.

And when the installation is complete, you should be presented with the following screen:

You should also see several new VMs getting created in the vSphere Client. Anything starting with sc- is a stemcell while anything starting with vm- is one of the TKGI control plane VMs. All of the VMs should end up in the TKGI-MGMT Resource Pool since we’re only deploying control plane components right now.

These names aren’t terribly helpful. To quickly identify the VMs, you can run the bosh vms command from the Opsman VM. You can SSH to the Opsman VM as the ubuntu user using the private key corresponding to the public key that was specified during the Opsman deployment. You’ll first need to get the BOSH command line credentials from the Bosh Commandline Credentials item on the Credentials tab of the BOSH tile.

export BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=hTbiWwRq_bGacWsfKftwx6L42KEyVSkN BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate BOSH_ENVIRONMENT=172.31.0.2
bosh vms

Using environment '172.31.0.2' as client 'ops_manager'

Task 60
Task 59
Task 60 done

Task 59 done

Deployment 'harbor-container-registry-d5a7016c2de7aad01467'

Instance                                         Process State  AZ          IPs         VM CID                                   VM Type      Active  Stemcell
harbor-app/2bda48b0-31fe-453a-a790-45c571f8aee3  running        TKG-MGMT-1  172.31.0.6  vm-d93159ef-419b-4340-91b4-a370c5e22bc4  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.84

1 vms

Deployment 'pivotal-container-service-1fba907c4acf6a0d774a'

Instance                                                        Process State  AZ          IPs         VM CID                                   VM Type      Active  Stemcell
pivotal-container-service/6e2d4de5-c923-4875-ab46-09154c33fc3f  running        TKG-MGMT-1  172.31.0.5  vm-81fcf9f8-d59b-4d4d-8d4a-dee4525cd4d2  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.84
pks-db/f9295e85-b49e-4754-9ef5-cf880bf28d53                     running        TKG-MGMT-1  172.31.0.4  vm-5ac13356-73a4-42b6-b828-bc38f5ca61de  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.84

2 vms

Succeeded

While you don’t need to create DNS records for any of the components, you’ll likely find it beneficial to create them for at least the TKGI API VM and the Harbor VM. In this example tkgi.corp.tanzu will map to 10.40.14.5 (the NAT address for 172.31.0.5) and harbor.corp.tanzu will map to 10.40.14.6 (the NAT address for 172.31.0.6).

While on the Opsman VM you can create a TKGI admin user. This is done with the uaac command but you’ll need to get the secret value first from the Pks Uaa Management Admin Client credential on the TKGI tile.

We need to use this secret with the uaac command to log in, create a user and add it to the pks.clusters.admin group.

uaac target https://tkgi.corp.tanzu:8443 --ca-cert /var/tempest/workspaces/default/root_ca_certificate
uaac token client get admin -s pyz3urqWGncJCYcevLG_DJUKZcKlbIYL
uaac user add tkgiadmin --emails tkgiadmin@corp.tanzu (enter VMware1! for password)
uaac member add pks.clusters.admin tkgiadmin

Create a TKGI cluster and deploy a Windows workload

If you haven’t already done so, download the tkgi binary from Pivnet. With this CLI in place, you’re now ready to create a Windows-based cluster using the plan we created while configuring the TKGI tile.

tkgi login -a tkgi.corp.tanzu -u tkgiadmin -p VMware1! --skip-ssl-validation

API Endpoint: tkgi.corp.tanzu
User: tkgiadmin
Login successful.
tkgi create-cluster windows-cluster --external-hostname windows-cluster.corp.tanzu --plan windows-small

PKS Version:              1.9.0-build.32
Name:                     windows-cluster
K8s Version:              1.18.8
Plan Name:                windows-small
UUID:                     8e0be9e4-372b-4f62-8701-5e9a738b8a3b
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   windows-cluster.corp.tanzu
Kubernetes Master Port:   8443
Worker Nodes:             1
Kubernetes Master IP(s):  In Progress
Network Profile Name:
Kubernetes Profile Name:
Compute Profile Name:
Tags:

Use 'pks cluster windows-cluster' to monitor the state of your cluster

You can check on the cluster creation process periodically by running a command similar to tkgi cluster windows-cluster (you’ll see the same output that was generated from the tkgi create-cluster command until it’s finished). Once again, you’ll see more VMs getting created (three in this case) in the vSphere Client. From the Opsman VM, you can also run the bosh tasks and then bosh task <##> to see the details of the creation task.

bosh tasks

Using environment '172.31.0.2' as client 'ops_manager'

ID  State       Started At                    Finished At  User                                            Deployment                                             Description        Result
61  processing  Wed Sep 30 11:58:55 UTC 2020  -            pivotal-container-service-1fba907c4acf6a0d774a  service-instance_8e0be9e4-372b-4f62-8701-5e9a738b8a3b  create deployment  -

1 tasks

Succeeded
bosh task 61

Using environment '172.31.0.2' as client 'ops_manager'

Task 61

Task 61 | 11:58:56 | Deprecation: Global 'properties' are deprecated. Please define 'properties' at the job level.
Task 61 | 11:58:59 | Preparing deployment: Preparing deployment
Task 61 | 11:59:16 | Preparing deployment: Preparing deployment (00:00:17)
Task 61 | 11:59:16 | Preparing deployment: Rendering templates (00:00:11)
Task 61 | 11:59:28 | Preparing package compilation: Finding packages to compile (00:00:00)
Task 61 | 11:59:28 | Compiling packages: golang-1-windows/e16f75f552803f12865ce4a3573ff1e0f5faa9a929f53e1a0692ecd4148391bd
Task 61 | 11:59:28 | Compiling packages: golang-1-windows/b5d2d2bc253c507593adcbafd5bb8830474e8478dc0516152f10bf666ae76368
Task 61 | 11:59:28 | Compiling packages: openvswitch-windows/f6d58dfbbe3d8908d2ff78bbe95bd7f01b8b9101535383b7b25056ddec72675e
Task 61 | 11:59:28 | Compiling packages: nsx-cni-common-windows/888f3680abd495ef3a0ce95ee94487e1fffb0e13de5800a34c0e803acdb78ad3
tkgi cluster windows-cluster

PKS Version:              1.9.0-build.32
Name:                     windows-cluster
K8s Version:              1.18.8
Plan Name:                windows-small
UUID:                     8e0be9e4-372b-4f62-8701-5e9a738b8a3b
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   windows-cluster.corp.tanzu
Kubernetes Master Port:   8443
Worker Nodes:             2
Kubernetes Master IP(s):  10.40.14.34
Network Profile Name:
Kubernetes Profile Name:
Compute Profile Name:
Tags:

When the cluster creation process is done, you’ll want to create a DNS record for the cluster…windows-cluster.corp.tanzu in this example will map to 10.40.14.34, the Kubernetes Master IP address.

If you’re curious to see where that IP address came from, you can log back in to NSX-T Manager and navigate to Networking, Network Services, Load Balancing and you’ll see that there is a new Load Balancer there.

And if you click on the Virtual Servers tab, you can see the details of what is backing this new Load Balancer.
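The same information is available from the Manager API if you’d rather not click around. A quick sketch, again assuming the admin credentials used earlier in this post:

curl -k -X GET -u 'admin:VMware1!VMware1!' https://nsxmanager.corp.tanzu/api/v1/loadbalancer/virtual-servers | egrep '"display_name"|"ip_address"|"port"'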

We can now run the tkgi get-credentials command to populate the kubeconfig file.

tkgi get-credentials windows-cluster

Fetching credentials for cluster windows-cluster.
Context set for cluster windows-cluster.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>

And we should now have access to the cluster via kubectl.

kubectl get nodes -o wide

NAME                                   STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION       CONTAINER-RUNTIME
1e560b84-3db7-4775-a0b9-23ed930be406   Ready    <none>   16m   v1.18.8+vmware.1   172.15.0.3    172.15.0.3    Ubuntu 16.04.7 LTS             4.15.0-117-generic   docker://19.3.5
b291abfc-ea3b-4545-8709-c67466a8fb78   Ready    <none>   11m   v1.18.8            172.15.0.4    172.15.0.4    Windows Server 2019 Standard   10.0.17763.1457      docker://19.3.11

In TKGI, we will not see any control plane nodes listed, so these are just the two workers. A Linux worker is deployed alongside the Windows worker; this is expected and not configurable. The Linux worker provides some functionality that is not available on the Windows worker, but you should not plan on running Linux-based workloads on this node…create another cluster using a Linux-based plan for that.
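If you want to see what keeps ordinary pods off of the Windows node, you can check the node taints; the deployment manifest below includes a toleration for a taint with key windows, value 2019 and effect NoSchedule, which is what that toleration has to match. A plain kubectl query works for this:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\t"}{.spec.taints}{"\n"}{end}'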

Now we can deploy a Windows-based workload to the TKGI cluster. I have a couple of simple yaml files that will deploy a web server onto the Windows worker.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: win-webserver
  name: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
      name: win-webserver
    spec:
      containers:
      - name: windowswebserver
        image: stefanscherer/webserver-windows:0.4.0
        env:
        - name: PORT
          value: "80"
        ports:
        - name: http
          containerPort: 80
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
      - key: "windows"
        operator: "Equal"
        value: "2019"
        effect: "NoSchedule"

apiVersion: v1
kind: Service
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  ports:
    # the port that this service should serve on
  - port: 80
    targetPort: 80
    nodePort: 30317
  selector:
    app: win-webserver
  type: LoadBalancer
kubectl apply -f windows/

deployment.apps/win-webserver created
service/win-webserver created
kubectl get po,svc --selector=app=win-webserver

NAME                                 READY   STATUS    RESTARTS   AGE
pod/win-webserver-77f65548b5-dp5zr   1/1     Running   0          2m59s

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/win-webserver   LoadBalancer   10.100.200.172   10.40.14.43   80:30317/TCP   2m59s

And we can also check in NSX-T Manager to see that a new Virtual Server under the existing Load Balancer has been created to service this deployment.

Now that we know that the IP address of the LoadBalancer service is 10.40.14.43, we can point a browser to it and see what’s there.
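A quick curl works just as well as a browser for a first check:

curl -s http://10.40.14.43/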

While this is not too exciting, it proves some basic functionality. You can now work on getting something much more substantial up and running.
