See the effect of moving a TKG cluster from one TMC cluster group to another (and a brief look at TMC quota policies)

TMC recently added functionality for moving a cluster from one cluster group to another. Cluster groups are a logical construct in TMC that provide a means of organizing Kubernetes clusters in a way that makes sense to you and then applying common policies to the clusters in the same group. You can read more about cluster groups at What is Tanzu Mission Control and about moving clusters between groups at Move a Cluster Between Cluster Groups.

For this example, I’m starting out with a TKG cluster deployed on AWS via TMC. You can read more about this process in my previous blog post, How to deploy a TKG cluster on AWS using Tanzu Mission Control.

This cluster is deployed to the default cluster group, and no manual changes have been made to the cluster outside of TMC. I have attempted to deploy a WordPress application to this cluster but am having trouble getting it to work.

kubectl --kubeconfig=kubeconfig-clittle-test-cluster.yml apply -k wordpress-aws/
storageclass.storage.k8s.io/k8s-policy created
secret/mysql-pass-bd45fkk6kd created
service/wordpress-mysql created
service/wordpress created
deployment.apps/wordpress-mysql created
deployment.apps/wordpress created
persistentvolumeclaim/mysql-pv-claim created
persistentvolumeclaim/wp-pv-claim created
kubectl --kubeconfig=kubeconfig-clittle-test-cluster.yml get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
wordpress         0/1     0            0           46s
wordpress-mysql   0/1     0            0           46s

Looking into this a little deeper shows that there is an issue with creating any containers due to a quota being applied.

kubectl --kubeconfig=kubeconfig-clittle-test-cluster.yml describe replicaset wordpress-5fb76c9f88
Name:           wordpress-5fb76c9f88
Namespace:      default
Selector:       app=wordpress,pod-template-hash=5fb76c9f88,tier=frontend
Labels:         app=wordpress
                pod-template-hash=5fb76c9f88
                tier=frontend
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 1
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/wordpress
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=wordpress
           pod-template-hash=5fb76c9f88
           tier=frontend
  Containers:
   wordpress:
    Image:      wordpress:4.8-apache
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:
      WORDPRESS_DB_HOST:      wordpress-mysql
      WORDPRESS_DB_PASSWORD:  <set to the key 'password' in secret 'mysql-pass-bd45fkk6kd'>  Optional: false
    Mounts:
      /var/www/html from wordpress-persistent-storage (rw)
  Volumes:
   wordpress-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  wp-pv-claim
    ReadOnly:   false
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                  From                   Message
  ----     ------        ----                 ----                   -------
  Warning  FailedCreate  2m16s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-fqrrs" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m16s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-sldxh" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m16s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-sfbhn" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m16s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-44gk9" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m16s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-9bdxv" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m15s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-k58lp" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m15s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-jb6jt" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m15s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-7sg2q" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  2m14s                replicaset-controller  Error creating: pods "wordpress-5fb76c9f88-mt6bh" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  54s (x6 over 2m13s)  replicaset-controller  (combined from similar events): Error creating: pods "wordpress-5fb76c9f88-5gzmr" is forbidden: failed quota: tmc.cgp.small-policy: must specify limits.cpu,limits.memory,requests.cpu,requests.memory

I haven’t specified any resource requests or limits in my deployment YAML files, so the quota in play will not allow the containers to be created.
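One way to satisfy this kind of quota check (short of moving the cluster) would be to add explicit requests and limits to each container spec. A minimal sketch for the wordpress container from the manifest above; the values are illustrative only, and would still need to fit within the quota's hard caps:

```yaml
# Illustrative only: explicit requests/limits satisfy the
# "must specify limits.cpu,limits.memory,requests.cpu,requests.memory" check,
# though the pod would still have to fit under the quota's hard limits.
containers:
- name: wordpress
  image: wordpress:4.8-apache
  resources:
    requests:
      cpu: 250m
      memory: 64Mi
    limits:
      cpu: 500m
      memory: 128Mi
```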

Resource quotas are applied to clusters as policies in TMC, either on the cluster directly or inherited from the cluster group to which the cluster belongs. We can drill down into Policies > Quota > default and see that there is a quota applied to this cluster group.

You can click the Edit link to see the details on this policy.

This quota is obviously very small and not likely to allow much of anything to run, even if resource requests and limits were specified in our deployment YAML files. It may have been configured this way to keep users from running their clusters in the default cluster group and to push them into using specific, existing cluster groups or creating a new one. We can also see the effect of this quota on the default namespace in the TKG cluster.

kubectl --kubeconfig=kubeconfig-clittle-test-cluster.yml describe ns default
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

Resource Quotas
 Name:            tmc.cgp.small-policy
 Resource         Used  Hard
 --------         ---   ---
 limits.cpu       0     1
 limits.memory    0     1Mi
 requests.cpu     0     1
 requests.memory  0     1Mi

No LimitRange resource.

We’ll create a new cluster group with no quotas specified and then move our cluster into this group. This should allow the WordPress application to start up successfully.
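If you prefer the TMC CLI over the UI, the cluster group can likely be created from the command line as well. This is a sketch under the assumption that the tmc CLI is installed and logged in to your organization; flag names can vary by CLI version, so check `tmc clustergroup create --help` first:

```shell
# Assumption: tmc CLI installed and authenticated to your TMC org.
# Creates a cluster group named "noquota"; no quota policies are attached by default.
tmc clustergroup create --name noquota --description "No quota policies"
```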

In the TMC UI, navigate to Cluster groups and then click on the Create Cluster Group button. We only need to specify a name, as the Description and Labels fields are both optional.

Click the Create button.

If we go to Policies > Quota > noquota, we can see that there are no quota policies applied to this new group.

The cluster can be moved into the new cluster group from the Cluster page. Click the three vertical dots next to the cluster name and then select Move.

You only need to select the destination Cluster group and click the Move button.

The warning about a change in inherited policies is expected; that change is exactly what we want to happen.

Back on the Clusters page, you can see that the cluster has been moved into the new group.

If we go back to the TKG cluster and describe the namespace again, we can see that the quota is no longer in effect.

kubectl --kubeconfig=kubeconfig-clittle-test-cluster.yml describe ns default
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Within a few minutes, the WordPress deployment should be coming up (you can delete the failed ReplicaSets to speed things up).
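Deleting the failed ReplicaSets avoids waiting out the ReplicaSet controller's backoff, since the Deployments recreate them immediately. A sketch, assuming the same kubeconfig and the `app=wordpress` label that both the wordpress and wordpress-mysql manifests carry:

```shell
# Force the Deployments to create fresh ReplicaSets, which are no longer
# blocked by the (now removed) quota. The app=wordpress label selects both
# the frontend and mysql ReplicaSets in the standard WordPress example.
kubectl --kubeconfig=kubeconfig-clittle-test-cluster.yml \
  delete replicaset -l app=wordpress
```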

kubectl --kubeconfig=kubeconfig-clittle-test-cluster.yml get deployment,replicaset,po,svc
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress         1/1     1            1           46m
deployment.apps/wordpress-mysql   1/1     1            1           46m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-5fb76c9f88         1         1         1       46m
replicaset.apps/wordpress-mysql-5fcd84f896   1         1         1       13m

NAME                                   READY   STATUS    RESTARTS   AGE
pod/wordpress-5fb76c9f88-z7dvs         1/1     Running   0          98s
pod/wordpress-mysql-5fcd84f896-hv8h5   1/1     Running   0          2m31s

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)        AGE
service/kubernetes        ClusterIP      10.96.0.1       <none>                                                                   443/TCP        127m
service/wordpress         LoadBalancer   10.100.62.147   ae9b2211c3719406892a152ea438209e-319900228.us-east-1.elb.amazonaws.com   80:31974/TCP   46m
service/wordpress-mysql   ClusterIP      None            <none>                                                                   3306/TCP       46m

And for the final test, you can access the WordPress application via the external IP address noted for the wordpress service.
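A quick way to confirm reachability from the command line, using the ELB hostname reported for the wordpress service above (yours will differ, and the load balancer can take a few minutes to become healthy):

```shell
# Print only the HTTP status code; a 200, or a 302 redirect to the
# WordPress install page, indicates the service is up.
curl -s -o /dev/null -w '%{http_code}\n' \
  http://ae9b2211c3719406892a152ea438209e-319900228.us-east-1.elb.amazonaws.com/
```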
