How to deploy a TKG cluster on AWS using Tanzu Mission Control

I hadn’t used TMC in a few months, so I decided to revisit the process for creating a TKG cluster on AWS. Just like months ago, I’m still impressed at how well all of this works: a few clicks and you end up with a CNCF-conformant Kubernetes cluster built with Cluster API on AWS.

Note: These instructions assume you already have access to AWS and to TMC, and that the MyVMware account you’re using to access TMC meets certain prerequisites as noted at Getting Started with VMware Tanzu Mission Control.

Create an IAM User on AWS

You will need an IAM user on AWS with credentials that TMC can use for creating the resources it needs. If you already have an IAM user with administrative access that you’d like to use, you can skip this first part.

When you are logged in to AWS, you will find the IAM section under Security, Identity & Compliance:

Once you’re on the IAM page, you can click on the Add user button to create a new IAM user. Enter a meaningful name and set the Access type to Programmatic access.

Click the Next: Permissions button.

You’ll need to assign permissions to your new user via group membership or by attaching policies directly. You might already have a group set up which you could use for this step. I’m choosing to grant my new user the AdministratorAccess policy.

Click Next: Tags and then click Next: Review. Validate that the account will be created the way you want and then click Create user.

On the last page, it’s very important that you click the Show link under Secret access key as it is the only time you will be able to retrieve this value. Save it somewhere secure.

Click the Close button.

To allow this user to log in to the AWS console, you will need to configure a console password. Select your new user and then click on the Security credentials tab. Click the Manage link next to Console password.

Set Console access to Enable. You can either use an auto-generated password or specify your own. Click the Apply button.
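If you prefer to script this, the console steps above have AWS CLI equivalents. This is a minimal sketch: the user name and password here are examples, and AdministratorAccess is the same broad policy chosen in the console.

```shell
# Example user name -- substitute your own
IAM_USER="tmc-admin"

# Create the user and grant it administrative access
aws iam create-user --user-name "$IAM_USER"
aws iam attach-user-policy \
  --user-name "$IAM_USER" \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Create the programmatic credentials. Save the SecretAccessKey from the
# output somewhere secure -- just like in the console, it is shown only once.
aws iam create-access-key --user-name "$IAM_USER"

# Optionally enable console access with a password (example value)
aws iam create-login-profile --user-name "$IAM_USER" --password 'S0me-Str0ng-Pass!'
```

As in the console flow, the access key pair is what TMC will ultimately rely on; the login profile is only needed if you want this user to sign in to the AWS console.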

Navigate to the main AWS page and then to EC2 > Network & Security > Key Pairs. Click the Create key pair button at the top right. Note: You can reuse an existing key pair if you have one and can skip this step.

Specify a name for the new key pair. The choice of file format depends on how you plan to access the bastion host: .pem for OpenSSH-style clients or .ppk for PuTTY.
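The same key pair can be created from the AWS CLI, which saves the private key material straight to a .pem file. The key name below is an example.

```shell
# Example key pair name -- substitute your own
KEY_NAME="tmc-bastion-key"

# Create the key pair and write the private key to a local .pem file
aws ec2 create-key-pair \
  --key-name "$KEY_NAME" \
  --query 'KeyMaterial' \
  --output text > "${KEY_NAME}.pem"

# ssh refuses private keys with loose permissions
chmod 400 "${KEY_NAME}.pem"
```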

Connect your AWS account in TMC

You’ll start out by logging in to VMware Cloud Services with your MyVMware credentials. Once logged in, you should see VMware Tanzu Mission Control listed under My Services.

Click on the TMC icon.

The first thing to do in TMC is to connect your AWS account so that TMC will be able to provision TKG clusters. Navigate to Automation and then click on the Create Account Credential dropdown. Choose the AWS cluster lifecycle management credential option.

Enter a name for your new credential and then click the Generate Template button. A file with a .template extension will be downloaded.

Click the Next button.

The template that was created will be used to create a CloudFormation stack. Head back to the main AWS page and navigate to CloudFormation under Management & Governance. Click the Create stack dropdown at the top right and then choose With new resources (standard).

Under Specify template, select Upload a template file. Click the Choose file button and then select the .template file that was created in TMC earlier. 

Click the Next button.

Enter a name for the new stack and click the Next button.

Click the Next button on the Configure stack options page as there is nothing we need to change here.

Ensure everything is correct on the Review page. Click the checkbox at the bottom to acknowledge the IAM resources message and then click the Create stack button.
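The stack can also be created from the AWS CLI. This is a sketch with example names; the --capabilities flag corresponds to the IAM acknowledgement checkbox in the console.

```shell
# Example stack and template file names -- substitute your own
STACK_NAME="tmc-cluster-lifecycle"

# Create the stack from the .template file downloaded from TMC
aws cloudformation create-stack \
  --stack-name "$STACK_NAME" \
  --template-body file://clusterlifecycle.template \
  --capabilities CAPABILITY_NAMED_IAM

# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete --stack-name "$STACK_NAME"
```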

You should see that the creation is in progress.

You can click the refresh button and see various tasks running and completing.

Click on the Stack info link. When the Status is CREATE_COMPLETE, the stack is ready.

Click on the Outputs tab and copy the ARN value shown there.
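If you created the stack from the CLI (or just prefer the terminal), the ARN can be read from the stack outputs as well. The stack name here is an example.

```shell
# Example stack name -- substitute the one you used
STACK_NAME="tmc-cluster-lifecycle"

# List the stack outputs; the role ARN for TMC appears here
aws cloudformation describe-stacks \
  --stack-name "$STACK_NAME" \
  --query 'Stacks[0].Outputs' \
  --output table
```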

You can now head back to TMC and click the Next button in the AWS Configuration section.

Paste the ARN value you just copied into the ARN field in the AWS role ARN section and then click the Create Credential button.

You should now see your credential listed on the Administration > Accounts page with a status of Valid account credentials.

Provision your cluster

In the TMC UI, click on the Clusters item on the left and then on the Create a Cluster button.

You’ll need to specify a number of parameters here to get the process kicked off:

  • AWS Account credentials: The credentials we just created.
  • Cluster name: Something descriptive for the cluster to be deployed.
  • Cluster group: The “default” cluster group should already exist, but you can create others (prior to creating a cluster) if needed.
  • Description and Labels are optional fields.

Click the Next button.

On the Configure page, complete the required fields as appropriate for your AWS instance. After you select the Region, it should auto-populate the SSH Key and Kubernetes version fields (Kubernetes version defaults to the highest available but you can change this). The VPC field can be left at the default of New VPC or you can re-use an existing VPC (see Requirements for Using an Existing VPC to Provision a Cluster for details). You can also change any of the CIDR values to suit your needs.

Click the Next button.

You can choose how you’d like to see your TKG cluster configured on the Select control plane page. For simple testing purposes, choosing Single Node and leaving the Instance type as m5.large should suffice. You can read more about the differences between the various instance types at Amazon EC2 Instance Types.

Click the Next button.

On the Edit node pool page you can accept the defaults and end up with a single worker node of size m5.large. You can change the Worker instance type and Number of worker nodes values to better suit your needs if necessary. If you are going to make use of node pools, you might want to choose a more descriptive name than the default of “default-node-pool”. You could also create multiple node pools with different configurations.

Click the Create Cluster button.

You can keep an eye on the cluster creation process and should see it move through various stages.

You can also watch objects being created in AWS EC2. You should see the number of Running Instances, Volumes, Elastic IPs, Load Balancers and Security Groups going up.

You can drill down into each to see more details about what has been created.

Instances (note that there is an extra instance: the bastion host):

Volumes:

Elastic IPs:

Load Balancers:

Security Groups:

Additionally, you should see a new VPC created (assuming you didn’t reuse an existing one).
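One way to watch these objects from the terminal is to filter EC2 by the cluster name. This assumes, as in my environment, that Cluster API tags the resources with Name values prefixed by the cluster name; the cluster name below is an example.

```shell
# Example cluster name -- substitute your own
CLUSTER_NAME="cjlittle-test-cluster"

# Running instances belonging to the cluster (control plane, workers, bastion)
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=${CLUSTER_NAME}*" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress,Tags[?Key==`Name`]|[0].Value]' \
  --output table
```

Similar describe-volumes, describe-addresses, and describe-security-groups commands can be filtered the same way.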

Eventually, you will see your cluster created successfully in TMC and hopefully a lot of green on the board.

Access your cluster

TMC provides an easy way of getting to the cluster you just created on AWS. On the details page for your cluster, click the Actions dropdown and select Access this cluster.

Click the Download Kubeconfig File button to save the kubeconfig file locally.

If you haven’t already downloaded the tmc CLI (from the Automation page), you can click the click here link and then click the Download CLI (OS-specific) button.

Now you can run a command similar to the following:

kubectl --kubeconfig=.kube/kubeconfig-cjlittle-test-cluster.yml get ns

And you should see a prompt like the following:

If you don't have an API token, visit the VMware Cloud Services console, select your organization, and create an API token with the TMC service roles:
  https://console.cloud.vmware.com/csp/gateway/portal/#/user/tokens?orgLink=/csp/gateway/am/api/orgs/e6f8b4af-faa2-4b55-8403-97d2d6b19341
? API Token

Open a browser and navigate to the URL noted. If you have already created an API token on VMware Cloud Services, you should see it listed and can copy/paste it to continue. If you haven’t created one, you should see a page noting that there are no API tokens.

Click the Create a New API Token link. Enter a name and specify the amount of time you’d like the token to be valid in the Token TTL field. Set the scope to All Roles.

Click the Generate button.

You’ll be presented with a “Token Generated” dialog. Click the Copy, Download, or Print button as desired and then click the Continue button.

Paste the token value on the command line where you were prompted for the API Token. You should next be asked to validate the Login Context name.

? Login context name e6f8b4af-faa2-4b55-8403-97d2d6b19341

Press enter to continue. You should next see that you are logged in to TMC and the kubectl get ns command should complete.

√ Successfully created context e6f8b4af-faa2-4b55-8403-97d2d6b19341, to manage your contexts run `tmc system context -h`
NAME                STATUS   AGE
default             Active   56m
kube-node-lease     Active   56m
kube-public         Active   56m
kube-system         Active   56m
vmware-system-tmc   Active   55m

You might be thinking that this was a lot of work to get a TKG cluster up and running. Much of this was simply prepping things on AWS and all of that work can be taken advantage of when creating subsequent TKG clusters on AWS. The workflow for another cluster from this point would simply consist of clicking the Create a Cluster button in TMC, providing the cluster-specific information for your new cluster and letting the rest of the work magically happen on AWS.

Access your TKG nodes

If you need to access the TKG nodes directly, you’ll find that you have to go through the bastion host that was created to get to them. 

To access the bastion host, navigate to Running Instances in EC2 and then select the bastion host. You can identify it by its name ending in -bastion. With the bastion host selected, click on the Actions dropdown and then select Connect.

The command in the Example section is about all you need to get to the bastion host as long as you have a copy of the ssh key pair you created saved locally.

ssh -i "cjlittle-tmc.pem" ubuntu@ec2-35-173-187-24.compute-1.amazonaws.com
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1047-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

223 packages can be updated.
151 updates are security updates.


*** System restart required ***

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-10-0-0-56:~$

From here you can ssh to the control plane and worker nodes on their internal IP addresses (with the same ssh key pair) as the ec2-user user. The ssh key pair does not exist on the bastion host, so you’ll have to copy the private key there (or recreate the .pem file by hand). You can get the internal IP addresses of the nodes via kubectl get nodes -o wide or via their Description page under Running Instances in EC2.
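Rather than copying the private key to the bastion host at all, you can use OpenSSH’s ProxyJump option (-J) to hop through the bastion in a single command from your workstation. The host names and IPs below are taken from the examples in this post; substitute your own.

```shell
# Bastion host (public DNS name) and a node's internal IP, as seen earlier
BASTION=ubuntu@ec2-35-173-187-24.compute-1.amazonaws.com
NODE=ec2-user@10.0.1.231

# -J routes the connection through the bastion; -i supplies the same key
# pair for both hops, so the key never has to leave your workstation
ssh -i cjlittle-tmc.pem -J "$BASTION" "$NODE"
```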

kubectl --kubeconfig=.kube/kubeconfig-cjlittle-test-cluster.yml get nodes -o wide
NAME                         STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-10-0-1-231.ec2.internal   Ready    master   76m   v1.18.5+vmware.1   10.0.1.231    <none>        Amazon Linux 2   4.14.181-142.260.amzn2.x86_64   containerd://1.3.4
ip-10-0-1-62.ec2.internal    Ready    <none>   75m   v1.18.5+vmware.1   10.0.1.62     <none>        Amazon Linux 2   4.14.181-142.260.amzn2.x86_64   containerd://1.3.4
ssh -i "cjlittle-tmc.pem" ec2-user@10.0.1.231
The authenticity of host '10.0.1.231 (10.0.1.231)' can't be established.
ECDSA key fingerprint is SHA256:gEFshiIuPB3pJC8v2YzldRcA6iDPakNZ8P/TfUQKll4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.1.231' (ECDSA) to the list of known hosts.

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
5 package(s) needed for security, out of 9 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-0-1-231 ~]$
