When it comes to VMware’s Kubernetes offerings, providing load balancing services to your workloads has been a bit tricky. If you were using Tanzu Kubernetes Grid Integrated Edition or vSphere with Tanzu and had NSX-T in the mix, you were covered. The vSphere with Tanzu 7.0 U1 release added support for vSphere networking and HAProxy for load balancing, but you were on your own for support if anything went wrong with HAProxy. With vSphere 7.0 U2, there is now support for NSX Advanced Load Balancer (formerly Avi Vantage). NSX ALB is also currently supported under the Tanzu Advanced license and will be more tightly integrated and widely supported in an upcoming version of TKG.
Before getting started, you would do well to read up on the process of installing NSX Advanced Load Balancer (NSX ALB) at https://avinetworks.com/docs/latest/installing-avi-vantage-for-vmware-vcenter/.
Download and Deploy the OVA
When you go to download NSX ALB, you are forwarded to a page under https://portal.avipulse.vmware.com/. From here, select the most current version, scroll down to VMware, and download the controller OVA. Disclaimer: only the 20.1.3 version is currently supported for use with TKG. I have used both 20.1.4 and 20.1.5 without issue, but you might run into support concerns if you stray from 20.1.3.
The OVA deployment is fairly straightforward with only a few basics needed.
When customizing the template, you don’t have to supply an SSH public key, but doing so will make for more secure access via SSH if that’s something that concerns you.
The deployment should only take a few minutes and you can follow the process in the Recent Tasks pane.
When the deployment is finished, power on the VM. It will go through one more reboot before it’s ready, as some of its initial settings require a restart to take effect. You can navigate to http://&lt;NSX ALB IP&gt; and see a message stating that the controller is not ready yet:

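If you’d rather not sit there refreshing the browser, this wait can be scripted. The following is a minimal sketch in Python, assuming the unauthenticated /api/initial-data endpoint from the Avi REST API and a placeholder controller address; adjust both for your environment.

```python
# Poll the controller until its API answers. The address is a placeholder and
# /api/initial-data is the unauthenticated version/readiness endpoint per the
# Avi REST API docs; verify=False because the default certificate is self-signed.
import time

import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"  # your controller's IP or FQDN

while True:
    try:
        r = requests.get(f"{CTRL}/api/initial-data", verify=False, timeout=5)
        if r.status_code == 200:
            print("Controller is ready")
            break
    except requests.exceptions.ConnectionError:
        pass  # web service not listening yet
    time.sleep(15)
```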
When the web server is fully up, you will be redirected to the secure (https) version of the UI. If the setup is taking longer than expected (it only takes a few minutes most of the time), you might see a page similar to the following:

And finally the login page where you get to create a super user:

Enter DNS information and a passphrase for any backups that are created:

I choose to delete all of the pre-configured NTP servers and only use one internal to my lab.


You can choose what type of SMTP configuration you’d like to use or “None” as I have done.

Choose VMware as the Orchestrator.

Provide administrator credentials and the vCenter Server FQDN or IP address. Leave SDN Integration and Permissions alone.

Choose an appropriate Datacenter from your vCenter Server inventory. I choose to configure a static pool of addresses for any Service Engines (SEs) that get created but you could use DHCP if that is your preference.

Choose the appropriate portgroup for management traffic, and if you have chosen to statically assign IP addresses to SEs as I have, configure the IP Subnet, Pool, and Gateway.

Select No for configuring multiple Tenants.

And you’re ready to get into the NSX ALB configuration in earnest now.

Configure the Cluster
You can navigate around the NSX ALB UI via the hamburger icon at the top and choose from the various functions.

I’m starting out with configuring a cluster. While I am only going to be running a single-node controller cluster, I want to have the option to expand it if needed and don’t want to worry about reconfiguring anything that has its hooks into NSX ALB via the first controller’s FQDN or IP address.
Navigate to the Administration page and then to Controller. Click the Edit button on the Nodes page.
Here you can supply a unique name and IP address for the cluster.

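The same change can be scripted against the controller’s REST API. Here’s a sketch, assuming the /api/cluster endpoint and field names from the Avi 20.1 API docs; the cluster name, addresses, and credentials are examples, not anything the UI prescribes.

```python
# Sketch: name the cluster and assign a floating cluster IP, mirroring the
# Nodes > Edit screen. Endpoint and fields per the Avi 20.1 API docs;
# addresses and credentials are lab examples.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"          # first controller's address
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({
    "X-Avi-Version": "20.1.3",
    "Referer": CTRL,                     # the controller's CSRF check wants this
    "X-CSRFToken": s.cookies.get("csrftoken", ""),
})

cluster = s.get(f"{CTRL}/api/cluster").json()
cluster["name"] = "nsx-alb-cluster"      # hypothetical cluster name
cluster["virtual_ip"] = {"addr": "192.168.110.41", "type": "V4"}
s.put(f"{CTRL}/api/cluster", json=cluster).raise_for_status()
```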
Apply a Valid License (optional)
If you are planning on using NSX ALB Essentials, you don’t need to do this. Since I have an Enterprise license, I’m going to use it.
From the Administration page, navigate to Settings and then to Licensing. Click the Apply Key button.
Enter your license key and click the Apply Key button.

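This step can also be automated if you’re licensing several controllers. A sketch, assuming a PUT to /api/license with a serial_key field, which is my reading of the Avi licensing API; confirm against your version’s API guide, and note the key below is a placeholder.

```python
# Sketch: apply an NSX ALB license key via the API. The /api/license endpoint
# and serial_key field are assumptions based on the Avi licensing API docs;
# the key, address, and credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

r = s.put(f"{CTRL}/api/license", json={"serial_key": "XXXXX-XXXXX-XXXXX"})
r.raise_for_status()
```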
Configure a Certificate
A valid certificate is required for vSphere with Tanzu or TKG to work with NSX ALB. Luckily, the process of replacing the default certificate is fairly easy.
From the Administration page, navigate to Settings and then to Access Settings. Click the Edit button.
You’ll need to delete the two certificates that are in the SSL/TLS Certificate field.
With both certificates deleted, you can click in the SSL/TLS Certificate field to get access to the Create Certificate button.
As you can see in the following screenshot, there are a few options for replacing the certificate. For our purposes, we could choose to create a CSR and then get a valid certificate or to import an existing valid certificate.

I have a wildcard certificate that I use for many applications and choose to use it for this one as well via the Import option.

After you click the Validate button, you should see output similar to the following:


After you click the Save button, you’ll be taken back to the Update System Access Settings page where you can see that your new certificate is present in the SSL/TLS Certificate field. Click the Save button here to move on.
At this point, I can access the NSX ALB UI via an FQDN mapped to the cluster IP address and not get a certificate warning. If you still see the original certificate, you might need to wait another minute or two.
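If you’d rather handle the certificate swap programmatically, here’s a sketch. It assumes the sslkeyandcertificate and systemconfiguration objects from the Avi 20.1 API docs, and the certificate/key file names are hypothetical.

```python
# Sketch: import an existing certificate/key pair and make the portal use it,
# mirroring the Import option and Access Settings edit above. Object and field
# names per the Avi 20.1 API docs; file names are hypothetical.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

with open("wildcard.crt") as f:
    cert_pem = f.read()
with open("wildcard.key") as f:
    key_pem = f.read()

cert = s.post(f"{CTRL}/api/sslkeyandcertificate", json={
    "name": "corp-wildcard",                 # hypothetical name
    "type": "SSL_CERTIFICATE_TYPE_SYSTEM",
    "certificate": {"certificate": cert_pem},
    "key": key_pem,
}).json()

# Point the controller portal at the new certificate
sysconfig = s.get(f"{CTRL}/api/systemconfiguration").json()
sysconfig["portal_configuration"]["sslkeyandcertificate_refs"] = [cert["url"]]
s.put(f"{CTRL}/api/systemconfiguration", json=sysconfig).raise_for_status()
```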
Configure Authentication
This is an optional step but I like to configure AD authentication for just about every component that allows it.
From the Administration page, navigate to Settings and then to Authentication/Authorization. Click on the Edit button.
Click the Remote button and then click in the Auth Profile field to get access to the Create button.

Complete the Auth Profile page as appropriate. Since I’m configuring this for Active Directory, you can use this as a guide if you’re doing the same. You might recognize many of these settings, as they are almost identical to the configuration used in my post, How to Deploy an OIDC-Enabled Cluster on vSphere in TKG 1.2.
After you click Save, you’ll be taken back to the first page where you clicked Create. You should see your Auth Profile present now. You can leave the Allow local user login toggle selected as a backup means of getting in should anything go wrong with your AD integration.

On the Authentication/Authorization page, click the New Mapping button to grant access to a user or group of users.
In this example, I’m granting Super User access to members of the tanzuadmins AD group.

I can now log in with any member of the tanzuadmins AD group.

After logging in with an AD user, you’ll see that user present on the Administration, Accounts, Users page.
Configure Networking
The next step is to get the networking properly configured so that the SEs will be able to allocate virtual IP addresses (VIPs) to load balancer endpoints.
From the Infrastructure page, navigate to Networks. You should see a list of the portgroups present within vCenter Server. I’m going to use the K8s-Frontend network for my VIPs so I’m clicking the Edit button for that item.
On the Edit page, enter an appropriate IP subnet in CIDR format in the IP Subnet field. Click the Add Static IP Address Pool button if you’re not using DHCP, and provide a usable range of IP addresses from the subnet.
The Edit page should look similar to the following now:
And after you click the Save button and are back on the Networks page, you can see the configured IP subnet as well as how many IP addresses are available.
I need to make one change to the K8s-Workload network so that it will be functional for my TKG installation. Any Service Engines that get created for a TKG cluster will need an address on the K8s-Workload network to function properly. Since I have a DHCP server on this network, I’m going to click the Edit button for K8s-Workload and then check the box next to DHCP Enabled.
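Both network edits can be expressed through the API as well. A sketch, assuming the network object’s configured_subnets/static_ranges and dhcp_enabled fields from the Avi 20.1 object model; the static range is an example carved out of my VIP subnet.

```python
# Sketch: give K8s-Frontend a static VIP pool and flip K8s-Workload to DHCP,
# mirroring the two network edits above. Field names per the Avi 20.1 object
# model; the subnet and range values are lab examples.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

net = s.get(f"{CTRL}/api/network?name=K8s-Frontend").json()["results"][0]
net["configured_subnets"] = [{
    "prefix": {"ip_addr": {"addr": "192.168.220.0", "type": "V4"}, "mask": 23},
    "static_ranges": [{                       # example pool within the /23
        "begin": {"addr": "192.168.221.1", "type": "V4"},
        "end": {"addr": "192.168.221.254", "type": "V4"},
    }],
}]
s.put(f"{CTRL}/api/network/{net['uuid']}", json=net).raise_for_status()

wl = s.get(f"{CTRL}/api/network?name=K8s-Workload").json()["results"][0]
wl["dhcp_enabled"] = True   # let my existing DHCP server hand out SE addresses
s.put(f"{CTRL}/api/network/{wl['uuid']}", json=wl).raise_for_status()
```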
We need to create a route for the subnet that was created in the previous step.
From the Infrastructure page, navigate to Routing and then to Static Route. Click the Create button.
Enter the same IP subnet used previously and the gateway address for that subnet.
For a vSphere with Tanzu deployment, since the workload network (192.168.130.0/24) is on a different subnet from the VIP network (192.168.220.0/23), a static route is needed from the workload network to the gateway for the frontend network. If you have a flat network, or are deploying TKG, this is not needed.

The Static Route page should now show any routes you have configured.
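For reference, the equivalent API change: a sketch assuming the vrfcontext object’s static_routes field from the Avi 20.1 API docs, using this post’s example subnets (the workload network reached via the frontend gateway).

```python
# Sketch: add the static route from the Create screen above to the global VRF.
# Object/field names per the Avi 20.1 API docs; subnets are this post's examples.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

vrf = s.get(f"{CTRL}/api/vrfcontext?name=global").json()["results"][0]
vrf.setdefault("static_routes", []).append({
    "route_id": "1",
    "prefix": {"ip_addr": {"addr": "192.168.130.0", "type": "V4"}, "mask": 24},
    "next_hop": {"addr": "192.168.220.1", "type": "V4"},  # frontend gateway
})
s.put(f"{CTRL}/api/vrfcontext/{vrf['uuid']}", json=vrf).raise_for_status()
```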
Configure IPAM and DNS Profiles
The IPAM and DNS profiles are configured so that the SEs will know what VIP pool and DNS configuration to use.
From the Templates page, navigate to Profiles and then to IPAM/DNS Profiles. Click the Create dropdown and then choose IPAM Profile.
Give the profile an appropriate name. Check the box next to Allocate IP in VRF. Click the Add Usable Network link, select Default-Cloud in the Cloud for Usable Network field, and choose the networks that were configured earlier for the Usable Network (K8s-Frontend in this example, plus K8s-Workload so my TKG clusters will function).

From the Templates page, navigate to Profiles and then to IPAM/DNS Profiles. Click the Create dropdown and then choose DNS Profile.
Give the profile an appropriate name and click the Add DNS Service Domain link. Enter your domain suffix (corp.tanzu in this example) in the Domain Name field.

You should see both profiles now on the IPAM/DNS Profiles page.
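Both profiles can also be created in one shot via the API. A sketch, assuming the ipamdnsproviderprofile object types and fields from the Avi 20.1 API docs (usable_network_refs was renamed in later releases); the profile names are hypothetical.

```python
# Sketch: create the same IPAM and DNS profiles via the API. Types and field
# names per the Avi 20.1 API docs; the profile names are hypothetical.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

s.post(f"{CTRL}/api/ipamdnsproviderprofile", json={
    "name": "tkg-ipam",                   # hypothetical name
    "type": "IPAMDNS_TYPE_INTERNAL",
    "internal_profile": {
        # name-based refs are resolved by the controller
        "usable_network_refs": ["/api/network/?name=K8s-Frontend",
                                "/api/network/?name=K8s-Workload"],
    },
}).raise_for_status()

s.post(f"{CTRL}/api/ipamdnsproviderprofile", json={
    "name": "tkg-dns",                    # hypothetical name
    "type": "IPAMDNS_TYPE_INTERNAL_DNS",
    "internal_profile": {
        "dns_service_domain": [{"domain_name": "corp.tanzu",
                                "pass_through": True}],
    },
}).raise_for_status()
```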
With the profiles created, we now need to modify the Default-Cloud cloud to use them.
From the Infrastructure page, navigate to Clouds and click the Edit button next to Default-Cloud.
Choose the IPAM and DNS profiles that were just created in the IPAM Profile and DNS Profile fields.
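And the same edit as an API sketch, assuming the cloud object’s ipam_provider_ref and dns_provider_ref fields from the Avi 20.1 API docs (the profile names match the hypothetical ones from the previous sketch):

```python
# Sketch: point Default-Cloud at the two new profiles, mirroring the edit above.
# Field names per the Avi 20.1 API docs; profile names match the earlier sketch.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

cloud = s.get(f"{CTRL}/api/cloud?name=Default-Cloud").json()["results"][0]
cloud["ipam_provider_ref"] = "/api/ipamdnsproviderprofile/?name=tkg-ipam"
cloud["dns_provider_ref"] = "/api/ipamdnsproviderprofile/?name=tkg-dns"
s.put(f"{CTRL}/api/cloud/{cloud['uuid']}", json=cloud).raise_for_status()
```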
Configure the Service Engine Group
The Service Engine Group configuration will determine how the SEs are deployed.
From the Infrastructure page, navigate to Service Engine Group and click the Edit button next to Default-Group.
If you are using an Essentials license, you won’t be able to change the HA mode and will be locked into a maximum of two Service Engines. You can, however, increase the resources for each SE and the number of Virtual Services per SE. Since I have an Enterprise license, I’m going to use the N + M (buffer) HA mode with a maximum of 10 Service Engines. I’m also going to allow for 20 Virtual Services per SE and remove the memory reservation (since this is a resource-constrained environment).
Lastly, I’m updating the prefix that gets used when naming the SE VMs so it’s a little more descriptive. Click on the Advanced tab to make this change (I changed the prefix to AviTanzu).
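Here are those same Default-Group changes as an API sketch. The field names (and HA_MODE_SHARED as the enum for elastic N + M buffer placement) are my reading of the Avi 20.1 object model; double-check against your version’s API guide.

```python
# Sketch: the Default-Group changes described above, via the API. Field names
# and the HA_MODE_SHARED enum (elastic N + M buffer) per the Avi 20.1 object
# model; verify against your version's API guide.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

seg = s.get(f"{CTRL}/api/serviceenginegroup?name=Default-Group").json()["results"][0]
seg.update({
    "ha_mode": "HA_MODE_SHARED",   # N + M (buffer)
    "max_se": 10,
    "max_vs_per_se": 20,
    "mem_reserve": False,          # resource-constrained lab
    "se_name_prefix": "AviTanzu",  # shows up in the SE VM names
})
s.put(f"{CTRL}/api/serviceenginegroup/{seg['uuid']}", json=seg).raise_for_status()
```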
Configure Backups
This is another optional step but a fairly critical one if this is a production environment.
From the Administration page, navigate to System and then to Configuration Backup. Click on the Edit button.
Check the box next to Remote Server. Click in the User Credentials box to get access to the Create SSH User button.

Configure the SSH user per your needs. I choose to use simple password-based authentication for a root user (never do this!!).
Complete the rest of the fields as appropriate for your backup destination.

After a backup has run you’ll see it listed at the bottom of the Configuration Backup page.
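One last API sketch, for the remote-backup settings. It assumes the backupconfiguration object’s fields from the Avi 20.1 API docs, that the SSH user already exists (it’s a cloudconnectoruser object), and a hypothetical backup host, path, and user name.

```python
# Sketch: the same remote-backup settings via the API. Assumes the SSH user was
# already created (a cloudconnectoruser object); the host, path, and user name
# are hypothetical. Field names per the Avi 20.1 API docs.
import requests
import urllib3

urllib3.disable_warnings()

CTRL = "https://192.168.110.40"
s = requests.Session()
s.verify = False
s.post(f"{CTRL}/login", json={"username": "admin", "password": "VMware1!"})
s.headers.update({"X-Avi-Version": "20.1.3", "Referer": CTRL,
                  "X-CSRFToken": s.cookies.get("csrftoken", "")})

bkp = s.get(f"{CTRL}/api/backupconfiguration").json()["results"][0]
bkp.update({
    "upload_to_remote_host": True,
    "remote_hostname": "backup.corp.tanzu",   # hypothetical backup host
    "remote_directory": "/backups/nsx-alb",   # hypothetical path
    "ssh_user_ref": "/api/cloudconnectoruser/?name=backup-user",
})
s.put(f"{CTRL}/api/backupconfiguration/{bkp['uuid']}", json=bkp).raise_for_status()
```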
As you might have gleaned from many of these screenshots, there are numerous other items that can be configured within NSX Advanced Load Balancer. I highly recommend reading up on all of the features and functionality at https://www.vmware.com/products/nsx-advanced-load-balancer.html.
I’ll have a couple more posts soon where I dive into using NSX ALB with vSphere with Tanzu and Tanzu Kubernetes Grid.