Problem

When a team of people is working on a Kubernetes cluster, it can be difficult not to overwrite each other's config: when individuals apply config to the cluster, they might overwrite or delete config applied by others. What solutions can combat this problem?
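
For example, a stale local copy of a manifest can silently undo a teammate's change (the file names here are illustrative):

# User A scales the deployment to three replicas and applies it.
kubectl apply -f deployment.yaml        # spec.replicas: 3

# Later, user B applies an older local copy of the same manifest,
# unknowingly reverting user A's change back to one replica.
kubectl apply -f deployment-old.yaml    # spec.replicas: 1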

Solution 1

Always apply Kubernetes config through a CI/CD system, where a build agent runs kubectl commands against a centralised git repo. Users push to this repo, and a corresponding CI/CD pipeline is triggered to apply the changes. This approach might not work in all cases, though: it doesn't have a quick feedback loop. Users have to wait for the build agent to complete a sequence of steps, such as cloning the repo, fetching Kubernetes credentials, and finally applying.
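
As a rough sketch, the pipeline step on the build agent might look like this (the repo URL and credential path are illustrative):

#!/bin/bash
### Hypothetical pipeline step run by the build agent on every push.
set -euo pipefail

# Fetch the centralised config repo and point kubectl at the cluster.
git clone https://example.com/team/k8s-config.git
export KUBECONFIG=/secrets/kubeconfig

# Apply all manifests in the repo, recursing into subdirectories.
kubectl apply --recursive -f k8s-config/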

Solution 2

Always download the current state of the Kubernetes cluster before starting to work on it. If the entire team follows this practice, conflicting changes can be minimised. Of course, there is still a risk of conflict when two team members start to work on the cluster at the same time, so this approach is not suitable for big teams. Let's look at how such a solution can be implemented. A shell script with carefully crafted kubectl commands can be used to download the current state of a Kubernetes cluster as organised YAML files. The following shell script downloads all objects and organises them in a folder per namespace, skipping system objects and credentials such as the kube-system namespace and Secrets.

Disclaimer: This shell script is inspired by a script from Stack Overflow. Unfortunately, I lost the link to the original script.

#!/bin/bash
### A script that downloads all objects from a Kubernetes cluster and saves them in a directory per namespace.
### It omits all objects of type Secret and everything in the kube-system namespace.

kubectl get pv,pvc,Role,configmap,sa,RoleBinding,ClusterRoleBinding,ClusterRole,ingress,service,deployment,ds,statefulset,hpa,job,cronjob \
	--all-namespaces \
	-o=custom-columns=NAMESPACE:.metadata.namespace,KIND:.kind,NAME:.metadata.name |
while read -r namespace kind name; do
	# Skip the header row, blank lines, and the kube-system namespace.
	[[ "$namespace" == "NAMESPACE" || -z "$name" ]] && continue
	[[ "$namespace" == "kube-system" ]] && continue
	echo "saving ${namespace} ${kind} ${name}"
	# Cluster-scoped objects (PersistentVolume, ClusterRole, ...) have no
	# namespace, so kubectl prints <none> for them; save them in their own folder.
	if [[ "$namespace" == "<none>" ]]; then
		mkdir -p cluster-scoped
		kubectl get "$kind" "$name" -o=yaml > "cluster-scoped/$kind.$name.yaml"
	else
		mkdir -p "$namespace"
		kubectl get "$kind" "$name" -n "$namespace" -o=yaml > "$namespace/$kind.$name.yaml"
	fi
done

Let's take a look at the key commands.

kubectl get pv,pvc,Role,configmap,sa,RoleBinding,ClusterRoleBinding,ClusterRole,ingress,service,deployment,ds,statefulset,hpa,job,cronjob --all-namespaces -o=custom-columns=NAMESPACE:.metadata.namespace,KIND:.kind,NAME:.metadata.name

This command lists the namespace, kind, and name of every object of the requested kinds across all namespaces. Secrets are omitted simply by not including them in the list of kinds.

mkdir -p "$namespace"
kubectl get "$kind" "$name" -n "$namespace" -o=yaml > "$namespace/$kind.$name.yaml"

These commands create a directory per namespace, fetch each individual object by the name gathered earlier, and save it as a YAML file.

Download this script from here.

The result is a dump of the current state of the cluster as YAML files, ready to be modified and reapplied.

Solution 3

There can be a hybrid solution that mixes solutions 1 and 2. What if there were a central repository that is loosely in sync with the cluster state? A periodic job synchronises the cluster state into the repository at appropriate intervals. When there are planned changes to cluster config, these changes are committed to the repository and applied via the build agent's pipeline. When there are hotfixes or quick changes needing immediate feedback, users can clone this repository and apply changes directly to the cluster; rest assured, the periodic job will synchronise the repository with the latest changes. The script above can be used by the periodic job to download the cluster state and push it to the centralised repository.
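
As a rough sketch, such a periodic job could be a small script run from cron or a CI schedule. The script name dump-cluster.sh and the repo path below are illustrative, and it assumes the central repo is already cloned:

#!/bin/bash
### Hypothetical periodic job: refresh the YAML dump and push it to the central repo.
set -euo pipefail

cd /srv/cluster-state
git pull --rebase

# Re-run the dump script from Solution 2 to refresh the YAML files in place.
./dump-cluster.sh

# Commit and push only if the cluster state actually changed.
if [[ -n "$(git status --porcelain)" ]]; then
	git add -A
	git commit -m "sync cluster state $(date -u +%Y-%m-%dT%H:%M:%SZ)"
	git push
fi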

Solution 4

The Kubernetes community has just released version 1.18 with a feature called server-side apply. With it, the API server tracks which manager owns each field of an object and rejects conflicting changes, which should address many of the problems mentioned above. It is an early-stage feature that requires further testing.
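
For illustration, here is how server-side apply is invoked from kubectl 1.18 (the manifest and field-manager names are placeholders):

# Apply a manifest server-side; the API server records which fields
# this manager owns.
kubectl apply --server-side --field-manager=ci-pipeline -f deployment.yaml

# If another manager owns a field being changed, the apply is rejected
# with a conflict; ownership can be taken over explicitly.
kubectl apply --server-side --force-conflicts -f deployment.yaml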