Installing Tanzu Application Service on vSphere with NSX-T Networking

I’ve stepped away from many of the Tanzu products for the last few months but was recently asked by one of my coworkers who supports Tanzu Observability if we could get an instance of Tanzu Application Service (TAS) running in a vSphere environment with NSX-T networking. I had never used this particular product before, but a quick bit of research showed that it installs similarly to Tanzu Kubernetes Grid Integrated Edition (TKGI), so I felt it shouldn’t be too far outside of my comfort zone to get it up and running. Check out the results below.

If you’re not familiar with TAS, in a nutshell, it provides the ability to quickly and reliably spin up applications on the cloud of your choice. It was formerly called Pivotal Application Service (PAS) prior to VMware acquiring Pivotal and moving most of their products under the Tanzu portfolio. You can read more about TAS at https://tanzu.vmware.com/application-service.

vSphere

This installation was done in a nested vApp running under vCloud Director and started as a simple vSphere 7.0 U3 deployment with a few hundred GB of NFS storage. My first step was to create a few resource pools to house the various TAS components that would be deployed.

To make full use of TAS, there are four organizational categories to consider:

  • Infrastructure – bosh and Opsman
  • Deployment – TAS components
  • Services – marketplace and on-demand services
  • Isolation – isolated deployment workloads

With this in mind, I created four resource pools: TAS-Infrastructure, TAS-Deployment, TAS-Services, and TAS-Isolation.

NSX-T

The basic NSX-T configuration is the same as noted in previous posts I’ve done related to TKGI (see https://little-stuff.com/2020/09/30/tgki-1-9-with-windows-workers/#Configure_NSX-T_Resources for a detailed walkthrough of configuring NSX-T objects).

The main things that should be in place prior to starting are:

  • A T0 logical router (t0-tas in my setup)
  • T1 logical routers for each of the four organizational categories noted earlier (T1-Router-TAS-Infrastructure, T1-Router-TAS-Deployment, T1-Router-TAS-Services, T1-Router-TAS-Isolation in my setup)
  • Defined subnets for each of the four T1 routers (172.31.0.0/24, 172.31.1.0/24, 172.31.2.0/24, 172.31.3.0/24 respectively in my setup)
  • Logical switches for each of the four organizational categories noted earlier (ls-tas-infrastructure, ls-tas-deployment, ls-tas-services, ls-tas-isolation respectively in my setup) and an uplink logical switch (uplink-vlan-ls in my setup)
  • NAT rules for inbound communication to the bosh and Opsman VMs and outbound for anything on the infrastructure network (172.31.0.0/24)
  • An IP Block to be used for container networking
  • An External IP Pool used for NAT
  • A load balancer configuration to allow traffic to access the deployed applications

The load balancer was the main difference between the NSX-T setup for TAS vs. the same setup for TKGI, and it was not trivial to get it configured properly. My deployment network is 172.31.1.0/24 and my external network is 10.40.14.0/24. I decided to use 10.40.14.65 as the external, load-balanced address for accessing resources on the deployment network.
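Putting the pieces together, the intended traffic flow looks roughly like this (a sketch; the virtual server and pool names are the ones created in the steps that follow):

```text
client
  └─> 10.40.14.65 (tas-lb VIP on the external network)
        ├─ :80, :443         -> tas-web-vs / tas-https-vs -> tas-web-pool -> Gorouters
        ├─ :1024-1123, :5900 -> tas-tcp-vs                -> tas-tcp-pool -> TCP routers
        └─ :2222             -> tas-ssh-vs                -> tas-ssh-pool -> SSH proxy (Control job)
```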

Create Active Health Monitors (health checks) for use by the virtual server:

In the NSX-T Manager UI, navigate to Networking > Network Services > Load Balancing > Monitors > Active Health Monitors.

Create the health monitor for web load balancing:
Click +ADD.
Enter Monitor Properties:
Name: tas-web-monitor
Health Check Protocol: LbHttpMonitor
Monitoring Port: 8080
Click Next.
Enter Health Check Parameters:
HTTP Method: GET
HTTP Request URL: /health
HTTP Response Code: 200
Click Finish.
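If you prefer to script the NSX-T side, the same monitor can be created via the NSX-T Management API (a POST to /api/v1/loadbalancer/monitors). The following is a sketch of the request body using the LbHttpMonitor schema field names; verify the exact schema against your NSX-T version before relying on it:

```json
{
  "resource_type": "LbHttpMonitor",
  "display_name": "tas-web-monitor",
  "monitor_port": 8080,
  "request_method": "GET",
  "request_url": "/health",
  "response_status_codes": [200]
}
```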

Create the health monitor for TCP load balancing:
Click +ADD.
Enter Monitor Properties:
Name: tas-tcp-monitor
Health Check Protocol: LbHttpMonitor
Monitoring Port: 80
Click Next.
Enter Health Check Parameters:
HTTP Method: GET
HTTP Request URL: /health
HTTP Response Code: 200
Click Finish.

Create the health monitor for SSH load balancing:
Click +ADD.
Enter Monitor Properties:
Name: tas-ssh-monitor
Health Check Protocol: LbTcpMonitor
Monitoring Port: 2222
Click Next, then click Finish.

Create server pools (collections of VMs which handle traffic) for use by the virtual server:

In the NSX-T Manager UI, navigate to Networking > Network Services > Load Balancing > Server Pools.

Create the server pool for web load balancing:
Click +ADD to add a new pool.
Enter General Properties:
Name: tas-web-pool
Click Next.
Enter SNAT Translation:
Translation Mode: IP List
IP Address: 172.31.1.65-172.31.1.128
Click Next.
Enter Pool Members:
Membership Type: Static
Click Next.
Enter Health Monitors:
Active Health Monitor: tas-web-monitor
Click Finish.

Create the server pool for TCP load balancing:
Click +ADD to add new pool.
Enter General Properties:
Name: tas-tcp-pool
Click Next.
Enter SNAT Translation:
Translation Mode: Transparent
Click Next.
Enter Pool Members:
Membership Type: Static
Click Next.
Enter Health Monitors:
Active Health Monitor: tas-tcp-monitor
Click Finish.

Create the server pool for SSH load balancing:
Click +ADD to add new pool.
Enter General Properties:
Name: tas-ssh-pool
Click Next.
Enter SNAT Translation:
Translation Mode: Transparent
Click Next.
Enter Pool Members:
Membership Type: Static
Click Next.
Enter Health Monitors:
Active Health Monitor: tas-ssh-monitor
Click Finish.

Create virtual servers:

In the NSX-T Manager UI, navigate to Networking > Network Services > Load Balancing > Virtual Servers.

Create the virtual servers which forward HTTP and HTTPS traffic to apps in the foundation:
Click +ADD.
Enter General Properties:
Name: tas-web-vs
Application Type: Layer 4 (TCP)
Application Profile: nsx-default-lb-fast-tcp-profile
Click Next.
Enter Virtual Server Identifiers:
IP Address: 10.40.14.65
Port: 80
Enter Server Pool and Rules:
Default Server Pool: tas-web-pool
Click Next several times, then click Finish.
Repeat but name the virtual server tas-https-vs and set the port to 443.

Create the virtual server which forwards traffic to apps with custom ports to the foundation:
Click +ADD to add a new virtual server.
Enter General Properties:
Name: tas-tcp-vs
Application Type: Layer 4 (TCP)
Application Profile: nsx-default-lb-fast-tcp-profile
Click Next.
Enter Virtual Server Identifiers:
IP Address: 10.40.14.65
Port: 1024-1123,5900
Click Next.
Enter Server Pool and Rules:
Default Server Pool: tas-tcp-pool
Click Next, then click Finish.

Create the virtual server which forwards SSH traffic to the foundation:
Click +ADD to add a new virtual server.
Enter General Properties:
Name: tas-ssh-vs
Application Type: Layer 4 (TCP)
Application Profile: nsx-default-lb-fast-tcp-profile
Click Next.
Enter Virtual Server Identifiers:
IP Address: 10.40.14.65
Port: 2222
Click Next.
Enter Server Pool and Rules:
Default Server Pool: tas-ssh-pool
Click Next, then click Finish.

Create the load balancer:

In the NSX-T Manager UI, navigate to Networking > Network Services > Load Balancing > Load Balancers.

Click +ADD.
Enter the fields:
Name: tas-lb
Load Balancer Size: small
Click OK.
Select tas-lb.
Click Actions > Attach to a Virtual Server, and then select tas-web-vs. Repeat this procedure for the Virtual Servers tas-tcp-vs and tas-ssh-vs.
Click Actions > Attach to a Logical Router, and then select T1-Router-TAS-Deployment.

DNS

There are some unique DNS requirements that need to be in place for the deployed applications to be accessible.

  • Create three sub-domains under your primary domain named tas-system, tas-app and tas-ssh.
  • In each sub-domain, create a wildcard DNS record (*.tas-system.corp.vmw for example) that maps to the external IP address used when creating the load balancer virtual servers in NSX-T (10.40.14.65 in my setup).
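In BIND-style zone file terms, the records would look something like the following (a sketch; adapt to whatever DNS server you're running):

```text
; corp.vmw zone - wildcard records pointing at the NSX-T load balancer VIP
*.tas-system    IN  A   10.40.14.65
*.tas-app       IN  A   10.40.14.65
*.tas-ssh       IN  A   10.40.14.65
```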

TAS Certificate

A wildcard certificate must be created for the TAS components and there are a few different addresses (Subject Alternative Names, or SANs) that need to be included. The following is an example of the configuration file I used to create a CSR for my Microsoft Certificate Authority to generate a valid certificate:

[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = Palo Alto
organizationName = VMware
organizationalUnitName = VMware
commonName = corp.vmw
[ v3_req ]
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.corp.vmw
DNS.2 = *.tas-system.corp.vmw
DNS.3 = *.tas-app.corp.vmw
DNS.4 = *.tas-ssh.corp.vmw
DNS.5 = uaa.tas-system.corp.vmw
DNS.6 = login.tas-system.corp.vmw
DNS.7 = api.tas-system.corp.vmw

Be sure to save the certificate and key files as they will be used later.
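With that configuration saved to a file (tas-csr.cnf is an assumed name here), the private key and CSR can be generated in one openssl invocation, and the SANs verified before the CSR is submitted to the CA:

```shell
# Generate a 4096-bit private key and a CSR from the config shown above.
# Assumes the [ req ] configuration is saved as tas-csr.cnf in the current directory.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout tas.key -out tas.csr -config tas-csr.cnf

# Sanity-check that all of the SANs made it into the request:
openssl req -in tas.csr -noout -text | grep -A 2 'Subject Alternative Name'
```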

Opsman

Opsman (Tanzu Operations Manager) and bosh are largely deployed the same as noted in my previous post, https://little-stuff.com/2020/09/30/tgki-1-9-with-windows-workers/#Deploying_Tanzu_Operations_Manager. The main difference between the TAS and TKGI configurations is that there were four availability zones (AZs) and four networks created for TAS, one for each of the organizational categories noted previously (infrastructure, deployment, services, isolation). Each AZ uses the corresponding resource pool in vSphere and each network uses the corresponding logical switch from NSX-T. The following shows the deployment AZ and network as an example:

Ensure that the network settings are appropriately configured for each network category. This largely comes down to setting the appropriate subnet information for each network. In this example, the 172.31.1.0/24 network is used but it would differ for the other network types (172.31.0.0/24 for the infrastructure network, for example).
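As a summary, the per-network subnet settings might look like the following. This is a sketch: the gateway and reserved-range values here are illustrative (though consistent with the bosh director landing on 172.31.0.11 and the TAS VMs starting at 172.31.1.11 later in this post), so substitute your own:

```text
Network             CIDR            Gateway      Reserved IP Ranges
TAS-Infrastructure  172.31.0.0/24   172.31.0.1   172.31.0.1-172.31.0.10
TAS-Deployment      172.31.1.0/24   172.31.1.1   172.31.1.1-172.31.1.10
TAS-Services        172.31.2.0/24   172.31.2.1   172.31.2.1-172.31.2.10
TAS-Isolation       172.31.3.0/24   172.31.3.1   172.31.3.1-172.31.3.10
```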

On the Assign AZs and Networks page, the Singleton AZ and Network are both set to TAS-Infrastructure:

If you’re satisfied with the bosh configuration, you can navigate to the Review Pending Changes page and click the Apply Changes button to deploy the bosh VM.

NSX-T Tile

TAS will be using the NSX-T Container Plugin so we’ll need to install and configure it in Opsman. You can download the NSX-T tile from PivNet.

Click the Import a Product button in Opsman and choose the NSX-T tile file. When the tile is imported, click the + button next to it on the left side of the Opsman UI to move it over to the installation dashboard.

Click on the NSX-T tile and configure the NSX Manager page as appropriate to your environment.

Click the Save button and move on to the NCP page.

The PAS Foundation Name value is arbitrary but the other items are dependent on how you have configured NSX-T.

Click the Add button next to IP Blocks of Container Networks. If you have already created an IP Block in NSX-T, you only need to enter its name here and can leave the CIDR field blank.

Click the Add button next to IP Pools used to provide External (NAT) IP Addresses to Org Networks. As with the last step, you only need to enter the name if it’s already created in NSX-T.

You can deploy the NSX-T tile now (Review Pending Changes > Apply Changes) or wait until after the TAS tile is configured and knock them both out at the same time.

TAS Tile

As with the NSX-T tile, you can download the TAS tile from PivNet and load it into Opsman. You’ll see that there are three different download options here:

Tanzu Application Service is the primary TAS tile, but if you are in a resource constrained environment (like I am), you may want to try the Small Footprint TAS option. It deploys fewer VMs as several of the VMs pull double duty in their functionality. You will also want to download the cf CLI as it will be used for deploying applications later.

Once the TAS tile is imported, click the + button next to it on the left side of the Opsman UI to move it over to the installation dashboard.

Click on the TAS tile. You should be dropped into the Assign AZs and Networks page where you can select the appropriate options from what was configured on the bosh tile (TAS-Deployment in my setup).

On the Domains page, enter the system and apps DNS sub-domains created earlier (tas-system.corp.vmw and tas-app.corp.vmw in my setup).

On the Networking page, click the Add button next to Certificates and private keys for the Gorouter and HAProxy. The name is arbitrary but the Certificate and private key data should be the TAS certificate created earlier.

Set the TLS termination point to Gorouter.

Set HAProxy forwards all requests to the Gorouter over TLS to Disable.

Lastly, set Container network interface plugin to External (since we’re using NSX-T).

On the App Security Groups page, type X in the box as acknowledgement.

On the UAA page, paste the TAS certificate and key in the SAML service provider credentials section.

On the CredHub page, click the Add button next to Internal encryption provider keys and enter a Name (arbitrary) and key (20 characters long). Check the Primary box.
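The key itself just needs to be a random 20-character string. One convenient way to generate one (a convenience, not a TAS requirement): base64-encoding 15 random bytes yields exactly 20 characters.

```shell
# 15 random bytes base64-encode to exactly 20 characters (15/3 * 4 = 20)
openssl rand -base64 15
```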

On the Internal MySQL page, enter an email address for notifications in the Email address field.

You can disable any errands you might not need on the Errands page but this is purely your preference based on how you’ll be using TAS.

On the Resource Config page, you can adjust the sizing of any of the jobs to better suit your needs. On the same page you will need to make a change to the Router and Control jobs. Expand the Router job and paste json content similar to the following in the Logical Load Balancer field.

{
  "server_pools": [
    {
      "name": "tas-web-pool",
      "port": "80"
    },
    {
      "name": "tas-https-pool",
      "port": "443"
    }
  ]
}

This json corresponds to the names of the load balancer server pools in NSX-T (tas-web-pool and tas-https-pool). Be sure these entries match the pool names you actually created earlier; if you routed HTTPS traffic through a single pool rather than a separate tas-https-pool, reference that pool's name instead.

Expand the Control job and paste json content similar to the following in the Logical Load Balancer field.

{
  "server_pools": [
    {
      "name": "tas-ssh-pool",
      "port": "2222"
    }
  ]
}

This json corresponds to the name of the load balancer server pool created in NSX-T (tas-ssh-pool).

At this point the TAS tile should be configured. You can navigate back to the Installation Dashboard.

Click on the Review Pending Changes button and then on the Apply Changes button to deploy the TAS and NSX-T tiles.

You can follow the progress in the Opsman UI or via the bosh task command. You will see a lot of output over a long period of time.

You’ll see several VMs get deployed in the vSphere Client under the TAS-Deployment resource pool.

You can get a little bit of a better idea of what these VMs are via the bosh vms command.

bosh vms

Using environment '172.31.0.11' as client 'ops_manager'

Task 118. Done

Deployment 'cf-59b72eed9fa560870c04'

Instance                                        Process State  AZ              IPs          VM CID                                   VM Type      Active  Stemcell
blobstore/60d0f8b6-4d83-4fda-904b-6764a2544515  running        TAS-Deployment  172.31.1.12  vm-d32fee29-90e9-48ff-b181-e076b25a9934  medium       true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.251
compute/62181f19-5582-4900-9707-2984f0d940a3    running        TAS-Deployment  172.31.1.14  vm-d70b7061-24af-4208-8a51-7deb3ecc0c71  xlarge.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.251
control/b645ed45-1610-432f-b7db-fd81fa1a9480    running        TAS-Deployment  172.31.1.13  vm-f62c7d41-2aff-4117-8063-1bf6076ccb41  xlarge.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.251
database/058f8df4-ccfa-475d-9e55-3eed3144fe4e   running        TAS-Deployment  172.31.1.11  vm-b2c952d1-8200-4aa3-8cad-14b420d442f9  large.disk   true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.251
router/2a9890d6-a946-4680-9470-cda7c334b96c     running        TAS-Deployment  172.31.1.15  vm-4b954902-945a-413c-8be6-45916da52d72  micro.ram    true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.251

5 vms

Succeeded

In NSX-T you should see additional objects as well.

Two new T1 logical routers:

Two new logical switches:

Two new SNAT rules:

And when the process is done, you should be notified in the Opsman UI.

TAS UI

You can access the TAS UI in a web browser at login.<system sub-domain>.<domain name>. In my setup this is https://login.tas-system.corp.vmw.

To get the login credentials, head back to Opsman and click on the TAS tile. Navigate to Credentials and scroll down to the UAA Job section. Click on the Link to Credential link next to the Admin Credentials item.

You should see a page similar to the following:

{"credential":{"type":"simple_credentials","value":{"identity":"admin","password":"B7AXCycBkAJaGmQ0iC1tqsf25mBRV-o9"}}}

Use the noted password (B7AXCycBkAJaGmQ0iC1tqsf25mBRV-o9 in this example) along with the admin user name to log in to the TAS UI.
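If you'd rather pull the password out in a script, here's a quick sed sketch that extracts the password field from the credential JSON shown above (with the JSON pasted into a variable for illustration):

```shell
# Extract the password value from the simple_credentials JSON
CRED='{"credential":{"type":"simple_credentials","value":{"identity":"admin","password":"B7AXCycBkAJaGmQ0iC1tqsf25mBRV-o9"}}}'
echo "$CRED" | sed -n 's/.*"password":"\([^"]*\)".*/\1/p'
# prints: B7AXCycBkAJaGmQ0iC1tqsf25mBRV-o9
```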

If you see a page similar to the following, click on the Apps Manager icon to continue to the TAS UI.

You might have noticed that the URL changes to apps.tas-system.corp.vmw; this is where those wildcard DNS records come in handy.

Before deploying an app we need to create an org and a space as we don’t want to deploy to the default (system) org. Click the Create Org button.

Enter an org name and then click the Create Org button.

On the page for your new org, click the Create New Space button.

Enter a name for the space and click the Create button.

You can see from this page that there are no apps deployed yet.

Deploy an application to TAS

One of the components downloaded earlier was the cf CLI. Be sure to extract this on a system in your environment that will have access to TAS.

On the system where cf is extracted run a command similar to the following to login to TAS (the username and password are the same as were used to login to the TAS UI):

cf login -a api.tas-system.corp.vmw -u admin -p B7AXCycBkAJaGmQ0iC1tqsf25mBRV-o9

API endpoint: api.tas-system.corp.vmw


Authenticating...
OK

Select an org:
1. system
2. tas-org

Org (enter to skip): 2
Targeted org tas-org.

Targeted space tas-space.

API endpoint:   https://api.tas-system.corp.vmw
API version:    3.115.0
user:           admin
org:            tas-org
space:          tas-space

You can see in the output that I selected the tas-org org but you can skip this and select it manually in a separate step.

cf target -o tas-org -s tas-space

API endpoint:   https://api.tas-system.corp.vmw
API version:    3.115.0
user:           admin
org:            tas-org
space:          tas-space

If you’re not building your own app, there are several samples you can deploy that are available at https://github.com/cloudfoundry-samples/. I’m going to deploy the test-app app and start off with cloning it locally.

git clone https://github.com/cloudfoundry-samples/test-app.git

Cloning into 'test-app'...
remote: Enumerating objects: 305, done.
remote: Total 305 (delta 0), reused 0 (delta 0), pack-reused 305
Receiving objects: 100% (305/305), 85.88 KiB | 514.00 KiB/s, done.
Resolving deltas: 100% (116/116), done.

From here you will only need to cd into the newly created test-app directory and cf push the application.

cf push

Pushing app test-app to org tas-org / space tas-space as admin...
Applying manifest file /home/ubuntu/test-app/manifest.yml...

Updating with these attributes...
  ---
  applications:
+ - name: test-app
+   instances: 1
    memory: 256M
    random-route: true
Manifest applied
Packaging files to upload...
Uploading files...
 36.87 KiB / 36.87 KiB [====================================================================================] 100.00% 1s

Waiting for API to complete processing files...

Staging app and tracing logs...
   Downloading binary_buildpack...
   Downloading java_buildpack_offline...
   Downloading go_buildpack...
   Downloading nodejs_buildpack...
   Downloading python_buildpack...
   Downloaded nodejs_buildpack
   Downloading r_buildpack...
   Downloaded binary_buildpack (5.6M)
   Downloading staticfile_buildpack...
   Downloaded staticfile_buildpack
   Downloading php_buildpack...
   Downloading java_buildpack_offline failed
   Downloading ruby_buildpack...
   Cell 62181f19-5582-4900-9707-2984f0d940a3 destroying container for instance 005fa221-c39f-429e-ad9f-58d8569b4f83
   Cell 62181f19-5582-4900-9707-2984f0d940a3 successfully destroyed container for instance 005fa221-c39f-429e-ad9f-58d8569b4f83
   Downloaded ruby_buildpack
   Downloading dotnet_core_buildpack...
   Downloaded dotnet_core_buildpack (240.2M)
   Downloading nginx_buildpack...
   Downloaded php_buildpack (544.1M)
   Downloading binary_buildpack...
   Downloaded python_buildpack (571.7M)
   Downloaded go_buildpack (582.6M)
   Downloading staticfile_buildpack...
   Downloaded r_buildpack (948M)
   Downloading java_buildpack_offline...
   Downloading ruby_buildpack...
   Downloaded staticfile_buildpack
   Downloading nginx_buildpack...
   Downloaded binary_buildpack
   Downloading nodejs_buildpack...
   Downloaded nginx_buildpack (51.3M)
   Downloading go_buildpack...
   Downloaded ruby_buildpack
   Downloading r_buildpack...
   Downloaded nginx_buildpack
   Downloading python_buildpack...
   Downloaded nodejs_buildpack
   Downloading php_buildpack...
   Downloaded go_buildpack
   Downloading dotnet_core_buildpack...
   Downloaded dotnet_core_buildpack
   Downloaded python_buildpack
   Downloaded php_buildpack
   Downloaded r_buildpack
   Downloaded java_buildpack_offline (604.3M)
   Cell 62181f19-5582-4900-9707-2984f0d940a3 creating container for instance 005fa221-c39f-429e-ad9f-58d8569b4f83
   Cell 62181f19-5582-4900-9707-2984f0d940a3 successfully created container for instance 005fa221-c39f-429e-ad9f-58d8569b4f83
   Downloading app package...
   Downloaded app package (36.9K)
   -----> Go Buildpack version 1.9.46
   -----> Installing godep 80
   Copy [/tmp/buildpacks/842ad584ef83b1381d372507d31fe31d/dependencies/52a892f00e80ca4fdcf27d9828c7aba1/godep-v80-linux-x64-cflinuxfs3-b60ac947.tgz]
   -----> Installing glide 0.13.3
   Copy [/tmp/buildpacks/842ad584ef83b1381d372507d31fe31d/dependencies/f5e4affa54f8cf8e22cf524de5165a6e/glide-v0.13.3-linux-x64-cflinuxfs3-ef07acb5.tgz]
   -----> Installing dep 0.5.4
   Copy [/tmp/buildpacks/842ad584ef83b1381d372507d31fe31d/dependencies/f1900fcb2de60a12ea6743ecf05e14d2/dep-v0.5.4-linux-x64-cflinuxfs3-79b3ab9e.tgz]
   -----> Installing go 1.17.9
   Copy [/tmp/buildpacks/842ad584ef83b1381d372507d31fe31d/dependencies/c6a24d72065a75e1ba17d24093721ddd/go_1.17.9_linux_x64_cflinuxfs3_a620909b.tgz]
   **WARNING** Installing package '.' (default)
   -----> Running: go install -tags cloudfoundry -buildmode pie .
   Exit status 0
   Uploading droplet, build artifacts cache...
   Uploading droplet...
   Uploading build artifacts cache...
   Uploaded build artifacts cache (13.1M)
   Uploaded droplet (4.4M)
   Uploading complete
   Cell 62181f19-5582-4900-9707-2984f0d940a3 stopping instance 005fa221-c39f-429e-ad9f-58d8569b4f83
   Cell 62181f19-5582-4900-9707-2984f0d940a3 destroying container for instance 005fa221-c39f-429e-ad9f-58d8569b4f83

Waiting for app test-app to start...

Instances starting...
Instances starting...
Instances starting...
Instances starting...

name:              test-app
requested state:   started
routes:            test-app-quick-impala-tj.tas-app.corp.vmw
last uploaded:     Thu 30 Jun 12:27:56 PDT 2022
stack:             cflinuxfs3
buildpacks:
        name           version   detect output   buildpack name
        go_buildpack   1.9.46    go              go

type:            web
sidecars:
instances:       1/1
memory usage:    256M
start command:   test-app
     state     since                  cpu    memory   disk     details
#0   running   2022-06-30T19:28:13Z   0.0%   0 of 0   0 of 0

Near the end of the output of the cf push command is a URL (noted as routes: in the output). This is where your new app will be accessible (test-app-quick-impala-tj.tas-app.corp.vmw in this example).

Back in the TAS UI, you should see that there is an app deployed under tas-space.

If you drill down into tas-space, you’ll see more information about the deployed app and how to get to it.

You can click on the test-app link here to see even more details about the app, make changes and see event history.

You can click the View App link here to launch a new browser tab and see that that application is up and running.

One last note: you can see that the URL for the app starts with the app name (test-app), followed by some arbitrary text (quick-impala-tj), followed by the app subdomain (tas-app.corp.vmw).

3 thoughts on “Installing Tanzu Application Service on vSphere with NSX-T Networking”

  1. I love this guide. Will you do a 1.5 update and show how to use Windows workers for Windows-based containers?

    1. Hello John. I am not currently planning on doing an update as TAS is not something I work with very regularly. I might have a post in the future covering the Wavefront / Tanzu Observability integration with TAS though.
