In my last post, I briefly hinted at the need for LoadBalancer services in TKG on vSphere, since the only other externally accessible option (NodePort) will run into issues after an upgrade as your node IP addresses will change. TKG on vSphere does not (yet) offer a built-in solution that provides LoadBalancer functionality, but one of the more popular and easy-to-configure options is MetalLB. I’m going to walk through the deployment of MetalLB into a TKG 1.1 cluster running on vSphere, and I’m going to use a BGP configuration (vs. a Layer 2 configuration) since the environment where I run all of my labs (vCD) already has a Linux VM running that provides routing and BGP functionality.
If you want to follow these steps largely verbatim, the only prerequisites that you’ll need to fulfill are a TKG cluster running on vSphere (see my first post, Installing TKG 1.1 on vSphere, Step-By-Step, for instructions) and a remote system running BGP that you can peer with. You could also modify these steps fairly easily to create a Layer 2 configuration if that is your preference.
We’ll start out by creating a new namespace, deploying the MetalLB manifest, and creating a random secret (used by the speaker pods to encrypt their memberlist communication):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
The next step requires some knowledge of the remote system where BGP is configured. In my example, the Linux VM running BGP is at IP address 192.168.100.1 and its ASN is 65002. I want to provide a small range of IP addresses for LoadBalancer services, so I have chosen 10.40.14.32/27 as my subnet. Lastly, I have chosen 65022 as my local ASN. Once you have this information ready, you can create a ConfigMap that MetalLB will use, per the following example:
kubectl apply -f - << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.100.1
      peer-asn: 65002
      my-asn: 65022
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 10.40.14.32/27
      bgp-advertisements:
      - aggregation-length: 27
EOF
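If you would rather use the Layer 2 configuration mentioned earlier, no peering information is needed and the ConfigMap only defines an address pool. A minimal sketch, assuming you set aside an unused range on the same subnet as your nodes (the 192.168.100.224/27 range below is purely illustrative):

```shell
# Layer 2 alternative to the BGP ConfigMap (MetalLB v0.9 config format).
# The address range is an assumption -- pick unused IPs on your node network.
kubectl apply -f - << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.100.224/27
EOF
```

With Layer 2 mode there is nothing to configure on the router side; MetalLB answers ARP requests for the pool addresses directly.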
We’ll need to check the IP addresses assigned to the MetalLB “speaker” pods as they will need to be added to the remote system’s BGP configuration as peer addresses:
kubectl -n metallb-system get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP                NODE                         NOMINATED NODE   READINESS GATES
controller-5c9894b5cd-fnph8   1/1     Running   0          12m   100.125.233.126   tsm2-md-0-6cb84c4647-qvxf9   <none>           <none>
speaker-jq2xd                 1/1     Running   0          12m   192.168.100.192   tsm2-control-plane-jf7f9     <none>           <none>
speaker-rn8f2                 1/1     Running   0          12m   192.168.100.194   tsm2-md-0-6cb84c4647-qvxf9   <none>           <none>
In this example, the IP addresses I’ll be working with are 192.168.100.192 and 192.168.100.194.
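If you'd rather not pick the addresses out of the table by eye, a jsonpath query can print just the speaker pod IPs (this assumes the component=speaker label that the stock v0.9 manifests apply to the speaker DaemonSet):

```shell
# Print one speaker pod IP per line; relies on the component=speaker label
# from the stock MetalLB v0.9 manifests.
kubectl -n metallb-system get po -l component=speaker \
  -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'
```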
The next step will vary based on the type of system you’re peering with for BGP. In my example, it’s a Linux VM running quagga and all of the necessary configuration is in the /etc/quagga/bgpd.conf file:
cat /etc/quagga/bgpd.conf
!
! Zebra configuration saved from vty
!   2019/12/31 11:13:11
!
hostname bgpd
password VMware1!
enable password VMware1!
log file /var/log/quagga/quagga.log
!
router bgp 65002
 bgp router-id 192.168.100.1
 neighbor 192.168.100.3 remote-as 65001
 neighbor 192.168.100.3 default-originate
 neighbor 192.168.100.4 remote-as 65001
 neighbor 192.168.100.4 default-originate
 neighbor 192.168.200.3 remote-as 65011
 neighbor 192.168.200.3 default-originate
 neighbor 192.168.200.4 remote-as 65011
 neighbor 192.168.200.4 default-originate
 neighbor 192.168.200.5 remote-as 65011
 neighbor 192.168.200.5 default-originate
 neighbor 192.168.210.3 remote-as 65012
 neighbor 192.168.210.3 default-originate
 neighbor 192.168.210.4 remote-as 65012
 neighbor 192.168.210.4 default-originate
 maximum-paths 4
!
line vty
!
We’ll add some lines near the end for the two IP addresses with which we want to peer:
 neighbor 192.168.100.192 remote-as 65022
 neighbor 192.168.100.192 default-originate
 neighbor 192.168.100.194 remote-as 65022
 neighbor 192.168.100.194 default-originate
The last step at this end is to restart the quagga service:

service quagga restart
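To confirm that the new sessions come up after the restart, you can query BGP state from the router side; with quagga this is typically done through vtysh (assuming it's installed alongside the daemons):

```shell
# Show BGP neighbor state; the two speaker peers should reach Established
# (the State/PfxRcd column shows a prefix count once routes are received).
vtysh -c 'show ip bgp summary'
```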
With all of this in place, we can now create a LoadBalancer service, similar to the following one for a WordPress application I had running:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
  loadBalancerIP: 10.40.14.40
When we check on this service after it’s been created, we can see that it has the external IP address specified (10.40.14.40) from the range that we configured in the MetalLB ConfigMap:
kubectl -n wordpress get svc wordpress
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
wordpress   LoadBalancer   100.65.202.102   10.40.14.40   80:30347/TCP   40m
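As a final sanity check, you can hit the external IP from any machine that can route to it (the address and the WordPress application are from the example above):

```shell
# Request just the response headers from the LoadBalancer IP.
curl -sI http://10.40.14.40
```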
And thanks to BGP, we can see that a route to the address pool has been installed on my Linux system (the output below is truncated):
netstat -rn
Kernel IP routing table
Destination     Gateway          Genmask           Flags   MSS Window   irtt Iface
10.40.14.32     192.168.100.192  255.255.255.224   UG        0 0           0 eth1
You can deploy MetalLB into each cluster you create, but be sure to choose a different LoadBalancer subnet for each deployment or you could end up with IP address conflicts.
MetalLB might not be the best fit for your TKG on vSphere deployment, so be sure to look into other options. In addition to MetalLB, you might consider HAProxy, Avi Networks (now part of VMware), NGINX, the NSX Container Plugin (when it’s supported), and many others. If you are deploying TKG to AWS, you’ll have built-in LoadBalancer support.