A Walk-through of Upgrading Tanzu Kubernetes Grid Integrated Edition (Enterprise PKS) from 1.7 to 1.8

I recently had an opportunity to run through a massive upgrade effort of a Tanzu Kubernetes Grid Integrated Edition (TKGI, formerly Enterprise PKS) installation from 1.7 to 1.8. This also included a vSphere upgrade from 6.7 U3 to 7.0b and an NSX-T upgrade from 2.5.1 to 3.0.1. TKGI 1.8 was just released on June 30th and brings some great new features and compatibilities…check out the release notes for more information.

One important thing to keep in mind is that support for NSX-T 3.0 is currently considered beta…mine is just a lab environment, so I had few concerns there.

Note: If you are planning on a similar upgrade, or any upgrade for that matter, please ensure that you have a tested backup and recovery plan and a DR plan in place prior to starting.

A few documents that you’ll want to read prior to getting started:

NSX-T Data Center Upgrade Guide
vCenter Server Upgrade Guide
ESXi Upgrade Guide
Upgrading Enterprise PKS (NSX-T Networking)

Some assumptions about my environment, if you want to adapt these steps to your own:

  • My vCenter server name is vcsa-01a.corp.local
  • My Datacenter name is RegionA01
  • My cluster name is RegionA01-MGMT
  • The management portgroup on my vDS is DSwitch-Management
  • The management network in my environment is on the 192.168.110.0/24 subnet
  • The PKS management portgroup is named ls-pks-mgmt and is NSX-T backed
  • The PKS management network in my environment is on the 172.31.0.0/24 subnet
  • I’m using a single T0, shared T1 and a NAT’d topology in NSX-T

We’ll need to download a few components first. It should take about 20 minutes, give or take.

NSX-T 3.0.1 Upgrade MUB file: https://my.vmware.com/web/vmware/downloads/details?downloadGroup=NSX-T-301&productId=982&rPId=47207
NSX-T 3.0.1 Kernel Modules: https://my.vmware.com/web/vmware/downloads/details?downloadGroup=NSX-T-301&productId=982&rPId=47207
VCSA 7.0b ISO: https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC700B&productId=974&rPId=46902
ESXi 7.0b ISO: https://my.vmware.com/web/vmware/downloads/details?downloadGroup=ESXI700B&productId=974&rPId=46902
Opsman 2.9.5 OVA: https://network.pivotal.io/products/ops-manager/ (choose 2.9.5)
TKGI 1.8 Tile and CLI Executables: https://network.pivotal.io/products/pivotal-container-service/ (choose 1.8.0, download tile and cli executables)
Harbor 1.10.3 Tile: https://network.pivotal.io/products/harbor-container-registry/ (choose 1.10.3)
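Since there are quite a few large downloads involved, it's worth verifying them before starting. A minimal sketch of the flow with sha256sum, demonstrated with a throwaway file; substitute the real filenames and the checksum values published on the download pages:

```shell
# Demonstrated against a scratch file; in practice, record the published
# SHA-256 values for each download in downloads.sha256.
tmpdir=$(mktemp -d)
echo 'demo payload' > "$tmpdir/demo.ova"
# Record the checksum once the download completes...
sha256sum "$tmpdir/demo.ova" > "$tmpdir/downloads.sha256"
# ...and verify before starting the upgrade (exit code 0 on success):
sha256sum -c "$tmpdir/downloads.sha256"
```

A failed check here is far cheaper to deal with than a corrupted upgrade bundle halfway through the NSX-T upgrade.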

The current TKGI and Kubernetes versions can be seen via the pks clusters command:

pks clusters

PKS Version     Name        k8s Version  Plan Name  UUID                                  Status     Action
1.7.0-build.26  my-cluster  1.16.7       small      b709b0ec-cb41-466a-bdac-23188900aeda  succeeded  UPGRADE

We can also run bosh vms to validate that all components are in a running state prior to starting:

Deployment 'harbor-container-registry-1e2d6da6b320d7088975'

Instance                                         Process State  AZ          IPs         VM CID                                   VM Type     Active  Stemcell
harbor-app/3bbc3ffb-b8b4-4939-9480-948d5cab994c  running        PKS-MGMT-1  172.31.0.6  vm-9f4aff39-a58a-48d5-96e9-498f0ef7a21d  large.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/456.103

1 vms

Deployment 'pivotal-container-service-4c0e6230c6649b40ddfd'

Instance                                                        Process State  AZ          IPs         VM CID                                   VM Type      Active  Stemcell
pivotal-container-service/593f7ef6-0b24-46c5-bc0a-2c450723331e  running        PKS-MGMT-1  172.31.0.5  vm-c057ed99-c810-4179-b2a7-05be919e2a5b  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.59
pks-db/7e0d139e-e645-4a56-874f-31b11f75915d                     running        PKS-MGMT-1  172.31.0.4  vm-cdf4b899-6ca4-4fb4-9f94-33ff7147fab8  large.disk   true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.59

2 vms

Deployment 'service-instance_b709b0ec-cb41-466a-bdac-23188900aeda'

Instance                                     Process State  AZ          IPs         VM CID                                   VM Type      Active  Stemcell
master/4230eca7-0af8-4762-b993-96068234b43f  running        PKS-COMP-1  172.15.0.2  vm-389c7c39-1eb6-4fe3-b5c8-0ec8b472b9a9  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.59
worker/539295ff-161b-4f83-a54f-6ab4f8f2d049  running        PKS-COMP-1  172.15.0.4  vm-b966cb32-887e-4c64-9c5c-c9d6af60b5e1  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.59
worker/765a5c83-70c4-4453-afbe-2bc589e58e2c  running        PKS-COMP-1  172.15.0.3  vm-51c7abd8-e1e0-4bd5-8a08-f8b3f498473a  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.59

3 vms
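If you have a lot of deployments, eyeballing this output gets tedious. A small awk filter can flag anything not in a running state; this is just a convenience sketch of mine, demonstrated against sample lines shaped like the listing above:

```shell
# Print any instance whose process state (second column) isn't "running".
# In practice, pipe the real output through it: bosh vms | flag_not_running
flag_not_running() { awk 'NF >= 4 && $1 ~ /\// && $2 != "running" {print $1, $2}'; }

# Demonstrated against sample lines shaped like the bosh vms listing:
printf '%s\n' \
  'harbor-app/3bbc3ffb  running  PKS-MGMT-1  172.31.0.6' \
  'pks-db/7e0d139e      failing  PKS-MGMT-1  172.31.0.4' | flag_not_running
```

In practice, `bosh vms | flag_not_running` printing nothing means all components are running.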

Upgrade Opsman, bosh, TKGI and Harbor (2 hours 15 minutes)

Log in to Opsman, click the admin link at the top right and choose Settings
Select Export Installation Settings. The current configuration settings will be saved to your default download location as installation.zip.

Note: These next two steps (powering off and deleting the Opsman VM) are optional if you want to keep the original around as a backup and use a different IP address for the new Opsman deployment. I’m choosing to re-use my original IP address, so I’ll remove the original Opsman VM (deleting it isn’t strictly necessary, but space is at a premium in my lab).

In the vSphere Client, right-click the opsman VM and choose Power > Power Off
Right-click the opsman VM again and choose Delete from Disk

Right-click the RegionA01-MGMT cluster and choose Deploy OVF Template (these settings are very specific to my lab so you’ll need to adjust them if you’re going through this on your own):

  • Local File: ops-manager-vsphere-2.9.5-build.144.ova
  • VM Name: opsman
  • Location: RegionA01
  • Compute Resource: PKS-MGMT Resource Pool
  • Storage: map-vol datastore
  • Network: ls-pks-mgmt
  • IP Address: 172.31.0.3
  • Netmask: 255.255.255.0
  • Default Gateway: 172.31.0.1
  • DNS: 192.168.110.10
  • NTP Servers: 192.168.100.1
  • SSH Key: <copy the contents of your ssh public key>
  • Custom Hostname: opsman
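The same deployment can also be scripted with ovftool using my lab's values. This is a hedged sketch: the --prop names are assumed from the Ops Manager OVA's user-configurable properties (inspect your OVA with ovftool to confirm), the SSH key property is left out for brevity, and the command is only echoed so it can be reviewed first:

```shell
# Assumed property names (ip0, netmask0, gateway, DNS, ntp_servers,
# custom_hostname) -- verify against your OVA before running for real.
cmd="ovftool --acceptAllEulas --name=opsman --datastore=map-vol \
--network=ls-pks-mgmt --prop:ip0=172.31.0.3 --prop:netmask0=255.255.255.0 \
--prop:gateway=172.31.0.1 --prop:DNS=192.168.110.10 \
--prop:ntp_servers=192.168.100.1 --prop:custom_hostname=opsman \
ops-manager-vsphere-2.9.5-build.144.ova \
vi://administrator%40vsphere.local@vcsa-01a.corp.local/RegionA01/host/RegionA01-MGMT"
echo "$cmd"    # review the assembled command, then run it without the echo
```

Note the %40 encoding of the @ in the SSO username within the vi:// locator.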

Right-click the new opsman VM and choose Power > Power On

Reload the Opsman UI in your browser. You should see a page similar to the following (notice the rebranding?):

Click the Import Existing Installation button, enter the passphrase from your previous installation as the Decryption Passphrase, then click the Choose File button and select the installation.zip that was created earlier. Click the Import button. It will take several minutes for the import to complete and for Opsman to become functional.

Sign in at the Welcome! prompt using the username and password from your previous installation. You’ll see that the Bosh tile is at the new version (2.9.5) but PKS and Harbor still need to be upgraded:

In the Opsman UI, click the Import a Product button.
Select pivotal-container-service-1.8.0-build.12.pivotal
Click the + button under Tanzu Kubernetes Grid Integrated

You’ll see that the name and version of the PKS tile have now changed:

In the Opsman UI, click the Import a Product button.
Select the harbor-container-registry-1.10.3 tile (.pivotal file) downloaded earlier
Click the + button under VMware Harbor Registry

You’ll see that the version of the Harbor tile has now changed:

Click the Review Pending Changes button
If you want to upgrade your Kubernetes clusters to the latest version supported in TKGI, you’ll want to ensure that all three products are checked and that the Upgrade all clusters errand box is checked under Tanzu Kubernetes Grid Integrated. You could also wait to do this later on a cluster-by-cluster basis if you have a large number of them and want to stagger them.


Click the Apply Changes button. You’ll see a page similar to the following while the upgrades are happening:

If you’re following along from the vSphere side of things, you’ll see lots of activity there as well as new VMs are created and configured and old VMs are powered down and deleted:

I have an Ubuntu 20.04 VM named cli-vm that I use for most CLI work, plus PowerShell on a Windows jump VM. Use the following steps to get the new binaries copied to the appropriate locations:

Rename pks-windows-amd64-1.8.0-build.12.exe to pks.exe and copy to C:\Windows\System32
Rename tkgi-windows-amd64-1.8.0-build.12.exe to tkgi.exe and copy to C:\Windows\System32
Rename pks-linux-amd64-1.8.0-build.12 to pks and scp to /home/ubuntu on the cli-vm VM
Rename tkgi-linux-amd64-1.8.0-build.12 to tkgi and scp to /home/ubuntu on the cli-vm VM

On the cli-vm run the following to make the new binaries executable and move them to /usr/local/bin/:

chmod +x /home/ubuntu/pks
chmod +x /home/ubuntu/tkgi
sudo mv /home/ubuntu/pks /usr/local/bin/
sudo mv /home/ubuntu/tkgi /usr/local/bin/
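The chmod and mv steps above can be collapsed with install(1), which copies a file and sets its mode in one step. Demonstrated here against temp directories with stand-in files; for real use, set src=/home/ubuntu and dest=/usr/local/bin and prefix install with sudo:

```shell
# install(1) copies and sets the execute bit in a single step.
src=$(mktemp -d); dest=$(mktemp -d)    # stand-ins for /home/ubuntu and /usr/local/bin
for bin in pks tkgi; do
  printf '#!/bin/sh\n' > "$src/$bin"   # stand-ins for the renamed binaries
  install -m 0755 "$src/$bin" "$dest/$bin"
done
ls "$dest"
```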

When the upgrade is finished, you should be presented with a page similar to the following:

You should also be able to re-login using the new tkgi binary and validate that the TKGI and Kubernetes versions have been updated and that everything is running as expected:

tkgi login -a pks.corp.local -u pksadmin -p VMware1! --skip-ssl-validation

API Endpoint: pks.corp.local
User: pksadmin
Login successful.
tkgi clusters

TKGI Version    Name        k8s Version  Plan Name  UUID                                  Status     Action
1.8.0-build.13  my-cluster  1.17.5       small      b709b0ec-cb41-466a-bdac-23188900aeda  succeeded  UPGRADE
bosh vms

Deployment 'harbor-container-registry-1e2d6da6b320d7088975'

Instance                                         Process State  AZ          IPs         VM CID                                   VM Type     Active  Stemcell
harbor-app/3bbc3ffb-b8b4-4939-9480-948d5cab994c  running        PKS-MGMT-1  172.31.0.6  vm-9f4aff39-a58a-48d5-96e9-498f0ef7a21d  large.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/456.103

1 vms

Deployment 'pivotal-container-service-4c0e6230c6649b40ddfd'

Instance                                                        Process State  AZ          IPs         VM CID                                   VM Type      Active  Stemcell
pivotal-container-service/593f7ef6-0b24-46c5-bc0a-2c450723331e  running        PKS-MGMT-1  172.31.0.5  vm-1d66dbe8-0b9a-4adc-b9d4-2c8b0d08b4ee  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.75
pks-db/7e0d139e-e645-4a56-874f-31b11f75915d                     running        PKS-MGMT-1  172.31.0.4  vm-4da098f7-44a2-44d9-a7b2-e16704a7727a  large.disk   true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.75

2 vms

Deployment 'service-instance_b709b0ec-cb41-466a-bdac-23188900aeda'

Instance                                     Process State  AZ          IPs         VM CID                                   VM Type      Active  Stemcell
master/4230eca7-0af8-4762-b993-96068234b43f  running        PKS-COMP-1  172.15.0.2  vm-0516716d-57b8-44f3-9249-62cd42a471ae  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.75
worker/539295ff-161b-4f83-a54f-6ab4f8f2d049  running        PKS-COMP-1  172.15.0.4  vm-322538d5-09c8-4710-94fd-461020b19b80  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.75
worker/765a5c83-70c4-4453-afbe-2bc589e58e2c  running        PKS-COMP-1  172.15.0.3  vm-4fe2be82-1679-4343-883b-e52033929d55  medium.disk  true    bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.75

3 vms

Upgrade NSX-T (1 hour 45 minutes)

Due to upgrade requirements for NSX-T 3.0, we need to add an extra disk to the current manager VM. If you have more than one manager VM, this will need to be done for each of them.

In the vSphere UI, right-click the nsxmgr-01a VM and select Edit Settings
Click the Add New Device button and then select Hard Disk
Set the size of the new Hard Disk to 100GB and click the OK button

In the NSX-T UI, navigate to System > Lifecycle Management > Upgrade
Select Upload MUB file, click Browse, select VMware-NSX-upgrade-bundle-3.0.0.0.0.15946738.mub. Click the Upload button.


It’s very likely that your login will time out while the upgrade bundle is uploading. If this happens, just log back in and return to the System > Lifecycle Management > Upgrade page.

Note: The screen will be blank for a while with a spinning circle; when the upgrade is ready to proceed, the following is displayed:

When the upload is finished, click the Begin Upgrade button

You’ll get a warning about upgrading the Upgrade Coordinator component; just click the Continue button. You’ll have a spinning circle for a while, so be patient.

Click the Next button
Click the Run Pre Checks button. If everything comes back without issues, click the Next button (you will likely get a warning about a backup not having been taken; you can ignore this or take a backup)

There are a few options to consider on the Edges page:

I have a single edge deployed so most of these options won’t have an effect on the upgrade but consider what each one does and choose what’s best for your upgrade scenario.

Click the Start button to begin the Edge upgrade

Note: During the Edge upgrade, the edges are put into Maintenance Mode one at a time. If you only have one edge, anything connected to an NSX-T backed network will be inaccessible. In a production environment there should be multiple edges for redundancy, so this would not be a concern.

You’ll see progress in the UI and can click the More link for details on what’s been done so far:

Click the Next button when the Edge upgrade is finished

Once again, you’ll see that there are several different options that you can set while doing the host upgrade piece so make sure you understand what each one is. I don’t have capacity to run the upgrades in parallel and am fine with the other options.

Click the Start button to begin the Host upgrade

Note: You will see progress in the upgrade UI and you will see each ESXi host go into maintenance mode in the vSphere Client during this phase of the upgrade. If anything prevents a host from entering maintenance mode it will cause the NSX-T upgrade to fail. You will need to get the affected host into maintenance mode and then restart the NSX-T upgrade.

Click the Next button when the Host upgrade is finished

Click the Start button to begin the NSX-T Manager upgrade

Note: You will lose access to the NSX-T Manager UI for most of this portion of the upgrade. You may see messages similar to the following displayed:

Note: Clicking the Reload button on this page may not display the UI again. You can try refreshing the page after a few minutes (maybe up to 30 minutes). You will need to re-login once the UI finally refreshes.

When the UI is refreshed, you should see a page similar to the following:

Click the Manage Licenses button at the top of the page
Click the + Add button
Enter a valid 3.0 license and click the Add button
Select the NSX Data Center Evaluation license and click the Unassign link

We’ll need to enable the legacy Manager UI as TKGI still uses the Manager API (instead of the Policy API) for interacting with NSX-T. You’ll still want to use the Manager UI when working with items that will be used by TKGI.

Navigate to System > Settings > User Interface Settings
Click the Edit link


Set Toggle Visibility to Visible to All Users
Set Default Mode to Manager
Click the Save button

You should now see a toggle in the top right that defaults to Manager and the UI should look similar to the Advanced Networking and Security view in NSX-T 2.5.

Note: You can validate that TKGI can still communicate with NSX-T by running the NSX-T Validation errand from the cli-vm VM:

bosh -d pivotal-container-service-4c0e6230c6649b40ddfd run-errand pks-nsx-t-precheck

Note: You should see output similar to the following:

Task 267 | 02:44:49 | Preparing deployment: Preparing deployment (00:00:03) 
Task 267 | 02:44:52 | Running errand: pivotal-container-service/593f7ef6-0b24-46c5-bc0a-2c450723331e (0) (00:00:58) 
Task 267 | 02:45:50 | Fetching logs for pivotal-container-service/593f7ef6-0b24-46c5-bc0a-2c450723331e (0): Finding and packing log files (00:00:02) 
Task 267 Started Sat June 20 02:44:49 UTC 2020 
Task 267 Finished Sat June 20 02:45:52 UTC 2020 
Task 267 Duration 00:01:03 
Task 267 done

Upgrade vCenter Server (1 hour 15 minutes)

On the Windows jump VM, right-click the VMware-VCSA-all-7.0.0-16189094.iso file and choose Mount
Drill down into the vcsa-ui-installer\win32 folder and run installer.exe
Click on the Upgrade tile

You’ll click Next a few times before you get to the bulk of the upgrade process.

Connect to source appliance

  • vCenter Server Appliance: vcsa-01a.corp.local
  • SSO User name: administrator@vsphere.local
  • SSO Password: <your password>
  • Appliance (OS) root password: <your password>

ESXi host or vCenter Server that manages the source appliance section:

  • ESXi host or vCenter Server name: vcsa-01a.corp.local
  • User name: administrator@vsphere.local
  • Password: <your password>

Click Yes on the Certificate Warning page

vCenter Server deployment target

  • ESXi host or vCenter Server name: vcsa-01a.corp.local
  • User name: administrator@vsphere.local
  • Password: <your password>

Again, click Yes on the Certificate Warning page

Select folder

  • Choose RegionA01

Select compute resource

  • Select RegionA01-MGMT

Set up target vCenter Server VM

  • VM name: vcsa-01a-new
  • Set root password: <your password>
  • Confirm root password: <your password>

Select Deployment Size

  • Deployment size: Small
  • Storage Size: Default

Select Datastore

  • Choose an appropriate datastore

Configure network settings

  • Network: DSwitch-Management

Temporary network settings:

  • IP version: IPv4
  • IP assignment: static
  • Temporary IP Address: 192.168.110.23
  • Subnet mask or prefix length: 24
  • Default Gateway: 192.168.110.1
  • DNS servers: 192.168.110.10

You’ll now be able to review your settings and kick off the first stage of the upgrade:

While Stage 1 is running, the new VCSA VM will be deployed and some initial configuration will be completed. You’ll see some activity in the upgrade UI as it progresses through different stages:

When you’re ready to move on to Stage 2, you’ll see a screen like the following where you can click the Continue button:

During Stage 2, the original VCSA VM will be powered off after the configuration data is copied to the new VCSA VM.

Again, you’ll click Next a few times before you get to the bulk of the upgrade process.

If you see a page similar to the following for several minutes at the very beginning of Stage 2, it’s perfectly normal; eventually the message in the box with the spinning circle will change to Pre-upgrade checks are in progress…

Eventually you should see a Pre-upgrade check result. As long as you only see warnings and are okay with them, you can proceed on to Stage 2.

Stage 2
Select upgrade data

  • Choose Configuration and Inventory (chosen only because it’s the smallest option; you could make a different choice if needed)

Configure CEIP

  • Leave box checked

Ready to complete

  • Check box for I have backed up the source vCenter Server and all the required data from the database

During this process you will eventually lose access to the original VCSA.

The upgrade UI will change as it moves through the different phases of Stage 2

Just before the end of Stage 2, you’ll see a message regarding some changes in 7.0. This is just informational and you can click the Close button to proceed to the end of the upgrade:

When the upgrade is done, you can re-login to the vSphere Client and delete the original VCSA VM and rename the new one to the old name if desired.

The last thing you’ll need to do is to add a new license since the 6.x VC license will no longer be valid and the vCenter Server will now be an evaluation license.

Navigate to Menu > Administration > Licensing > Licenses > Assets > vCenter Server Systems
Select vcsa-01a.corp.local and click the Assign License link
Select the New License tab
Enter a valid 7.0 license for the License key and VC for the License name
Click OK

Upgrade ESXi hosts (1 hour 10 minutes)

The first thing we need to do is to create an upgrade baseline group that will include the ESXi 7.0 installation media as well as the NSX-T kernel modules. If we don’t do both at the same time the hosts will stop functioning as NSX-T transport nodes until the kernel modules are applied.

In the vSphere Client, navigate to Menu > Lifecycle Manager > Imported ISOs
Click the Import ISO link
Select VMware-VMvisor-Installer-7.0.0-15843807.x86_64.iso
Click the Import button


When the import is finished, click the Actions dropdown menu and choose Import Updates
Click the Browse button and select nsx-lcp-3.0.0.0.0.15945993-esx70.zip

There will also be an Import updates task visible in the Recent Tasks pane that will take a few minutes to complete. Once this task is done, select the Baselines tab
Select New > Baseline

  • Name: ESXi 7.0 Upgrade
  • Content: Upgrade
  • Select ISO: ESXi-7.0.0-15843807-standard
  • Click the Finish button

Select New > Baseline

  • Name: NSX-T 3.0 Kernel Modules
  • Content: Extension
  • Select Extensions: NSX LCP Bundle
  • Click the Finish button

Select New > Baseline Group

  • Name: ESXi Upgrade Bundle
  • Upgrade Baseline: ESXi 7.0 Upgrade
  • Patch Baselines: None
  • Extension Baselines: NSX-T 3.0 Kernel Modules
  • Click the Finish button

Navigate to Menu > Hosts and Clusters
Select the RegionA01-MGMT cluster
Click the Updates tab
Select Baselines
Scroll down to Attached Baselines and click the Attach link and then click Attach Baseline or Baseline Group
Select ESXi Upgrade Bundle and click the Attach button


Scroll up to the section titled 3 of 3 Hosts are unknown and click the Check Compliance link


When the Scan Entity task is done, scroll back down to the Attached Baselines section. You should see that the ESXi Upgrade Bundle group now has a Status of Non-compliant.


Select ESXi Upgrade Bundle and click the Remediate link
Accept the EULA
Ensure that all three hosts are selected and click the Remediate button

You’ll see the hosts going into and out of maintenance mode and being rebooted while the upgrade is occurring.

Note: When the upgrade is done, there will be a warning on each host stating No coredump target has been configured. Host coredumps cannot be saved.
You can ignore this warning, run esxcli system settings advanced set -o /UserVars/SuppressCoredumpWarning -i 1 on each ESXi host to suppress it, or configure a coredump location.
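If you'd rather not log in to each host by hand, that same esxcli command can be pushed over SSH in a loop. The hostnames below are hypothetical stand-ins for my lab's hosts, and the commands are only echoed so you can review them first; remove the echo to actually apply:

```shell
# Hypothetical host names -- substitute your own. Commands are echoed
# (dry run); drop `echo` to actually run them over SSH.
cmds=$(for h in esx-01a esx-02a esx-03a; do
  echo ssh "root@$h.corp.local" \
    "esxcli system settings advanced set -o /UserVars/SuppressCoredumpWarning -i 1"
done)
echo "$cmds"
```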

It’s a good idea to validate the node health in NSX-T as well. You can navigate to System > Fabric > Nodes > Host Transport Nodes (Managed by: vcsa-01a). You should see all nodes at the 7.0 version and in a healthy state.

As with vCenter Server and NSX-T, we’ll need to assign a new license since the 6.x license is no longer valid and the hosts will all be using an evaluation license.

Navigate to Menu > Administration > Licensing > Licenses > Assets > Hosts
Select all three hosts and click the Assign License link
Select the New License tab
Enter a valid 7.0 license for the License key and ESXi Hosts for the License name
Click OK

Create a new cluster (20 minutes)

Once all components are upgraded you can test out some TKGI functionality.

Issue a command similar to the following on the cli-vm VM to create a new TKGI cluster:

tkgi create-cluster tkgi --external-hostname tkgi.corp.local --plan small

You can follow the progress by running bosh tasks to get the task ID and then bosh task <ID>:

bosh task 449
Using environment '172.31.0.2' as client 'ops_manager'

Task 449

Task 449 | 19:39:37 | Deprecation: Global 'properties' are deprecated. Please define 'properties' at the job level.
Task 449 | 19:39:40 | Preparing deployment: Preparing deployment
Task 449 | 19:39:43 | Warning: DNS address not available for the link provider instance: pivotal-container-service/593f7ef6-0b24-46c5-bc0a-2c450723331e
Task 449 | 19:39:43 | Warning: DNS address not available for the link provider instance: pivotal-container-service/593f7ef6-0b24-46c5-bc0a-2c450723331e
Task 449 | 19:39:43 | Warning: DNS address not available for the link provider instance: pivotal-container-service/593f7ef6-0b24-46c5-bc0a-2c450723331e
Task 449 | 19:40:04 | Preparing deployment: Preparing deployment (00:00:24)
Task 449 | 19:40:04 | Preparing deployment: Rendering templates (00:00:13)
Task 449 | 19:40:17 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 449 | 19:40:18 | Creating missing vms: master/9a6926e8-16c4-4aad-923b-50de0ab8b185 (0)
Task 449 | 19:40:18 | Creating missing vms: worker/cfb88cdc-37a8-4f96-962b-3b49db2a4b37 (0)
Task 449 | 19:40:18 | Creating missing vms: worker/b3a390f0-4fa3-40ca-a398-5b7e0d5112a4 (1) (00:02:57)
Task 449 | 19:43:18 | Creating missing vms: master/9a6926e8-16c4-4aad-923b-50de0ab8b185 (0) (00:03:00)
Task 449 | 19:43:31 | Creating missing vms: worker/cfb88cdc-37a8-4f96-962b-3b49db2a4b37 (0) (00:03:13)
Task 449 | 19:43:32 | Updating instance master: master/9a6926e8-16c4-4aad-923b-50de0ab8b185 (0) (canary) (00:08:42)
Task 449 | 19:52:14 | Updating instance worker: worker/cfb88cdc-37a8-4f96-962b-3b49db2a4b37 (0) (canary) (00:09:26)
Task 449 | 20:01:40 | Updating instance worker: worker/b3a390f0-4fa3-40ca-a398-5b7e0d5112a4 (1) (00:09:36)

Task 449 Started  Sun Jun 21 19:39:37 UTC 2020
Task 449 Finished Sun Jun 21 20:11:16 UTC 2020
Task 449 Duration 00:31:39
Task 449 done

Succeeded
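Rather than reading the task ID off the table by eye, it can be pulled from the bosh CLI's JSON output. A sketch, demonstrated against a sample document shaped like the CLI's --json layout (Tables[].Rows[] with an id field) since I can't run bosh outside the lab:

```shell
# In practice:
#   task_id=$(bosh tasks --json | jq -r '.Tables[0].Rows[0].id')
#   bosh task "$task_id"
sample='{"Tables":[{"Rows":[{"id":"449","state":"processing","description":"create deployment"}]}]}'
task_id=$(printf '%s' "$sample" | jq -r '.Tables[0].Rows[0].id')
echo "$task_id"
```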

When finished, run tkgi cluster tkgi. You will see output similar to the following:

TKGI Version:             1.8.0-build.13
Name:                     tkgi
K8s Version:              1.17.5
Plan Name:                small
UUID:                     4c4fba91-7d65-4bd5-bcb4-b5c9f4968da8
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   tkgi.corp.local
Kubernetes Master Port:   8443
Worker Nodes:             2
Kubernetes Master IP(s):  10.40.14.37
Network Profile Name:
Kubernetes Profile Name:
Tags:

To be able to access the new cluster via kubectl, we’ll need to create a DNS record mapping the Kubernetes Master IP address (10.40.14.37) to the external hostname (tkgi.corp.local):

Click the Start menu on the Windows jump VM and then click on the DNS application

Expand Forward Lookup Zones and then right-click corp.local and select New Host (A or AAAA)

Name: tkgi
IP address: 10.40.14.37 (or the Kubernetes Master IP(s) value from the tkgi cluster tkgi output)

On the cli-vm, run tkgi get-credentials tkgi

Run kubectl get nodes to validate that you can connect to the new cluster

NAME                                   STATUS   ROLES    AGE   VERSION
1ad44a45-7999-48eb-9dbf-21889290e7f3   Ready    <none>   20m   v1.17.5+vmware.1
480e5831-50a8-45fd-9ef8-c1f7b8006476   Ready    <none>   10m   v1.17.5+vmware.1
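As with the bosh vms output earlier, a small filter makes it easy to spot any node that isn't Ready. A convenience sketch, demonstrated against sample lines; in practice, pipe the real kubectl get nodes output through it:

```shell
# Print any node (skipping the header) whose STATUS column isn't "Ready".
# In practice: kubectl get nodes | not_ready
not_ready() { awk 'NR > 1 && $2 != "Ready" {print $1, $2}'; }

printf '%s\n' \
  'NAME      STATUS    ROLES   AGE  VERSION' \
  '1ad44a45  Ready     <none>  20m  v1.17.5+vmware.1' \
  '480e5831  NotReady  <none>  10m  v1.17.5+vmware.1' | not_ready
```

Empty output from the real pipeline means every node is Ready.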
