How to Configure Fluent Bit and Splunk in Tanzu Kubernetes Grid

Tanzu Kubernetes Grid provides several Fluent Bit manifest files to help you deploy and configure Fluent Bit for use with Splunk, Elasticsearch, Kafka, and a generic HTTP endpoint. In this post, I’ll walk through not only the Fluent Bit configuration that VMware has documented but also the deployment of Splunk in a TKG cluster.

In addition to the tkg CLI utility and the OVA files needed to stand up a TKG cluster, you’ll also need to download the extensions bundle from https://my.vmware.com/web/vmware/downloads/details?downloadGroup=TKG-112&productId=988&rPId=46507. It contains the manifest files needed to deploy and configure the authentication, log forwarding and ingress solutions that VMware supports for TKG.

I’ve made use of MetalLB to provide LoadBalancer functionality (per my previous blog post, How to Deploy MetalLB with BGP in a Tanzu Kubernetes Grid 1.1 Cluster), but you could also use a NodePort service (which is the default for the Fluent Bit manifests that VMware ships) or an Ingress resource.

I have a storage class named k8s-policy, which maps to an NFS volume mounted to all of my ESXi hosts, and accessible to Kubernetes via vSphere CNS.
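
For reference, a StorageClass backed by the vSphere CSI driver looks something like the sketch below. The storagepolicyname value here is purely illustrative; it should be the SPBM policy that maps to your own datastore:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-policy
provisioner: csi.vsphere.vmware.com
parameters:
  # illustrative policy name; substitute the SPBM policy backing your datastore
  storagepolicyname: "k8s-policy"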

Once you have a TKG cluster up and running, the first step will be to extract the contents of the extensions bundle. You should see a folder structure similar to the following:

ls tkg-extensions-v1.1.0/
authentication  cert-manager  ingress  logging

We’ll be working in the logging directory, so you can focus on the manifest files in that location and its sub-directories.

The first step to deploying Fluent Bit in a TKG cluster is to create the tanzu-system-logging namespace and the needed RBAC components. These steps are the same regardless of the logging backend.  You can read more about this step in detail at Create Namespace and RBAC Components.  You should inspect the manifest files prior to applying them to make sure that you’re okay with the RBAC objects being created.
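
If you want a sense of what to expect before applying them, the role is broadly a ClusterRole giving Fluent Bit read-only access to pod and namespace metadata for its Kubernetes filter. The manifest in the bundle may differ in naming and scope, but it should look something like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # illustrative name; the shipped manifest may use a different name
  name: fluent-bit
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]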

kubectl apply -f tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/00-fluent-bit-namespace.yaml
kubectl apply -f tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/01-fluent-bit-service-account.yaml
kubectl apply -f tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/02-fluent-bit-role.yaml
kubectl apply -f tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/03-fluent-bit-role-binding.yaml

There are a number of ways to deploy Splunk, but I decided to go with the Splunk Operator as it provides an incredibly easy way of getting Splunk up and running in a hurry. The first step is to install the operator itself. Note that I chose to place all components in the tanzu-system-logging namespace purely as a personal preference; there is no requirement to do so.

kubectl -n tanzu-system-logging apply -f http://tiny.cc/splunk-operator-install

Once deployed, you should see a single pod running in the tanzu-system-logging namespace:

kubectl -n tanzu-system-logging get po
NAME                              READY   STATUS    RESTARTS   AGE
splunk-operator-9b69c49bf-j8fn8   1/1     Running   4          4m39s

Now we’re ready to create a Splunk deployment. This makes use of a CRD called standalones.enterprise.splunk.com, which was created when we deployed the operator. I slightly modified the instructions provided for this step so that I could save the configuration in a YAML file in case I needed it later:

echo "apiVersion: enterprise.splunk.com/v1alpha2
kind: Standalone
metadata:
  name: s1
  namespace: tanzu-system-logging
  finalizers:
  - enterprise.splunk.com/delete-pvc" > tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/splunk.yaml

kubectl apply -f tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/splunk.yaml

We should now have a number of objects created in the tanzu-system-logging namespace:

kubectl -n tanzu-system-logging get po,svc,pvc,pv,secrets,cm
NAME                                  READY   STATUS    RESTARTS   AGE
pod/splunk-operator-9b69c49bf-j8fn8   1/1     Running   4          8m10s
pod/splunk-s1-standalone-0            1/1     Running   5          1m02s

NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP               PORT(S)                                                                                                      AGE
service/splunk-operator-metrics         ClusterIP      100.66.160.119   <none>                    8383/TCP,8686/TCP                                                                                            8m09s
service/splunk-s1-standalone-headless   ClusterIP      None             <none>                    8000/TCP,8088/TCP,8089/TCP,9000/TCP,9997/TCP,17000/TCP,19000/TCP                                             1m01s

NAME                                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-etc-splunk-s1-standalone-0   Bound    pvc-dd464443-e69f-426d-8e7b-9474cdb5e2eb   10Gi       RWO            k8s-policy     1m08s
persistentvolumeclaim/pvc-var-splunk-s1-standalone-0   Bound    pvc-4e09bf11-9041-4390-826a-e47846a2ecef   100Gi      RWO            k8s-policy     1m08s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS   REASON   AGE
persistentvolume/pvc-4e09bf11-9041-4390-826a-e47846a2ecef   100Gi      RWO            Delete           Bound    tanzu-system-logging/pvc-var-splunk-s1-standalone-0   k8s-policy              1m11s
persistentvolume/pvc-dd464443-e69f-426d-8e7b-9474cdb5e2eb   10Gi       RWO            Delete           Bound    tanzu-system-logging/pvc-etc-splunk-s1-standalone-0   k8s-policy              1m12s

NAME                                  TYPE                                  DATA   AGE
secret/splunk-operator-token-kf2qn    kubernetes.io/service-account-token   3      8m09s
secret/splunk-s1-standalone-secrets   Opaque                                6      1m07s

NAME                                        DATA   AGE
configmap/splunk-operator-lock              0      8m08s

To make my life easier, I chose to expose the splunk-s1-standalone-headless service as a LoadBalancer and created a DNS record for it. This allows me to access the Splunk UI by FQDN without having to worry about an underlying IP address change breaking my access:

kubectl -n tanzu-system-logging expose service splunk-s1-standalone-headless --name splunk --type=LoadBalancer --external-ip=10.40.14.35 --labels=app=splunk
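
You can confirm the address the new service picked up before creating the DNS record:

kubectl -n tanzu-system-logging get svc splunk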

After creating a DNS record mapping splunk.corp.local to 10.40.14.35, I was able to access the Splunk UI at https://splunk.corp.local:8000.

The default username for logging in to Splunk is admin but the password is randomly generated and must be obtained from the splunk-s1-standalone-secrets secret.

kubectl -n tanzu-system-logging get secret splunk-s1-standalone-secrets -o jsonpath='{.data.password}' | base64 --decode
zB6DD5odyeK1sc8mCiQmt7rm

Now we can log in with the username of admin and the password of zB6DD5odyeK1sc8mCiQmt7rm.

Once logged in, we need to collect the Splunk HEC Token Value, as we’ll need to provide it to Fluent Bit so that logs can be forwarded to Splunk. Navigate to Settings, then Data Inputs, click on HTTP Event Collector, and copy the displayed Token Value.
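
If you’d rather stay on the command line, the same splunk-s1-standalone-secrets secret should also contain the HEC token; assuming it’s stored under a hec_token key, a command along these lines will return the same value:

# assumes the operator stores the HEC token under the hec_token key
kubectl -n tanzu-system-logging get secret splunk-s1-standalone-secrets -o jsonpath='{.data.hec_token}' | base64 --decode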

We still have a few more Fluent Bit pieces to configure before we can make use of Splunk.

The tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml file has the instance-specific configuration information that we’ll need to provide to allow Fluent Bit to forward logs to our Splunk deployment. In my example, I am changing the following from the default values:

  • Setting the Cluster name to vsphere-test (since that’s the name of my TKG workload cluster).
  • Setting the Instance name to vsphere (this was an arbitrary choice).
  • Setting the Splunk host name to splunk-s1-standalone-headless, the name of the Splunk service.
  • Setting the Splunk port to 8088 (this is where Splunk listens for incoming data).
  • Setting the Splunk token to 0B5BCCA8-B93C-4A9B-2BB7-06E3062A9B5A, the value we retrieved earlier in the Splunk UI.

sed -i 's/<TKG_CLUSTER_NAME>/vsphere-test/' tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml
sed -i 's/<TKG_INSTANCE_NAME>/vsphere/' tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml
sed -i 's/<SPLUNK_HOST>/splunk-s1-standalone-headless/' tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml
sed -i 's/<SPLUNK_PORT>/8088/' tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml
sed -i 's/<SPLUNK_TOKEN>/0B5BCCA8-B93C-4A9B-2BB7-06E3062A9B5A/' tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml
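
After the substitutions, the Splunk output stanza in the ConfigMap ends up looking roughly like the following. This is a sketch based on Fluent Bit’s splunk output plugin options rather than a verbatim copy of the TKG manifest, which may set additional options such as TLS:

# sketch of the splunk output plugin settings after the sed substitutions
[OUTPUT]
    Name          splunk
    Match         *
    Host          splunk-s1-standalone-headless
    Port          8088
    Splunk_Token  0B5BCCA8-B93C-4A9B-2BB7-06E3062A9B5A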

The only other file we need to be concerned with is tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/05-fluent-bit-ds.yaml, but it’s fine in its default configuration.

Now we can deploy Fluent Bit and see what data is getting sent to Splunk:

kubectl apply -f tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml
kubectl apply -f tkg-extensions-v1.1.0/logging/fluent-bit/vsphere/output/splunk/05-fluent-bit-ds.yaml
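
Before heading back to the Splunk UI, it’s worth a quick check that the Fluent Bit DaemonSet pods are running on every node:

kubectl -n tanzu-system-logging get ds
kubectl -n tanzu-system-logging get po -o wide | grep fluent-bit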

You can validate that data is being received by Splunk by navigating to Search & Reporting in the Splunk UI.

You can click the Data Summary button and then the splunk-s1-standalone-headless:8088 link to drill down into what is being received.

In my next post I’ll walk through a similar configuration but using Kafka as the log receiver.
