Kubernetes
April 12, 2022

Why everyone should track Kubernetes changes, and the top four ways to do so

Imagine the following: you plug in an electric kettle, a fuse blows, and the power goes out. You obviously suspect the kettle.

Now, imagine a different scenario: you work in an electrical store where shoppers can plug in any device they wish to try out. Suddenly the power goes out. You’d love to know which devices were just plugged in, right?

The above is a metaphor for Kubernetes. In most companies, people are constantly deploying changes and rolling out new versions. Unfortunately, there is no centralized view showing “which devices were just plugged in”.

ArgoCD and GitOps aren't change tracking

ArgoCD is an excellent tool for deploying applications, but it isn't a full change tracking solution.

It's too easy to bypass GitOps and make manual changes with kubectl edit.
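For example, either of the following commands changes a live Deployment without any corresponding git commit, so it never shows up in ArgoCD's history (the deployment and namespace names are placeholders):

kubectl edit deployment my-app -n production
kubectl set image deployment/my-app my-app=my-app:v2 -n production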

Tracking changes in Kubernetes

We took KubeWatch and added an extra layer on top for common use cases.

There are four variations, depending on where you send the change data.

Option 1: UI for Kubernetes change history

This is the easiest to set up. You run one Helm command to install Robusta and its bundled Prometheus stack. You then have a single dashboard with all changes and alerts across your clusters.
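As a rough sketch, the installation looks something like this (the chart repository URL and values file follow the Robusta docs at the time of writing and may differ in your setup):

helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
helm install robusta robusta/robusta -f ./generated_values.yaml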

Option 2: Grafana Dashboards

Let's take existing Grafana dashboards and use annotations to show when applications were updated.

This takes eight lines of YAML to configure with Robusta:

customPlaybooks:
  - triggers:
      - on_deployment_update: {}
    actions:
      - add_deployment_lines_to_grafana:
          grafana_api_key: '********'
          grafana_dashboard_uid: 09ec8aa1e996d6ffcd6817bbaff4db1b
          grafana_url: http://grafana.namespace.svc

This works by connecting the on_deployment_update trigger to the add_deployment_lines_to_grafana action.

Option 3: Slack notifications

This is the same as above, but we're sending the result to Slack instead of Grafana.

Here is the YAML configuration for this:

customPlaybooks:
  - triggers:
      - on_deployment_update: {}
    actions:
      - resource_babysitter: {}
    sinks:
      - slack

This works by connecting the on_deployment_update trigger to the resource_babysitter action and sending the result to the Slack sink. You could just as easily send the output to MS Teams, Telegram, DataDog, OpsGenie, or any other supported sink.
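Sinks themselves are defined once in Robusta's Helm values and referenced by name. As a rough sketch, assuming a standard Slack sink definition (the channel name and API key below are placeholders), it looks something like this:

sinksConfig:
  - slack_sink:
      name: slack
      slack_channel: kubernetes-changes
      api_key: '********'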

Option 4: Reverse GitOps

This one is a little unusual, but we can send the same change data to a git repository.

Usually git repositories are used with GitOps as the source of truth for YAML files. However, git is also a convenient way to store audit data about what actually changed in your cluster.

Every time someone makes a change, whether it's an ad-hoc change or a planned deployment, it's written to a git repository at a path determined by the cluster name, namespace, and resource type.

Here is how we configure this Robusta automation:
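As a rough sketch (the cluster name, repository URL, and key below are placeholders, and parameter names may differ slightly between Robusta versions):

customPlaybooks:
  - triggers:
      - on_kubernetes_any_resource_all_changes: {}
    actions:
      - git_change_audit:
          cluster_name: my-cluster
          git_url: git@github.com:acme/cluster-audit.git
          git_key: '********'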

Like the other examples, we're hooking up a trigger to an action. The trigger is broader here, as it is on_kubernetes_any_resource_all_changes. The action is git_change_audit.

Summary

Hopefully this will help you set up change tracking on your Kubernetes cluster. Good luck!
