A leader election process for Kubernetes, built with the Kubernetes client-go library.
To make systems more fault-tolerant, handling failures in replicas is crucial for higher availability. A leader election process ensures that if the leader fails, one of the candidate replicas is elected as the new leader.
An overview of how it works (a rough Go sketch follows the list):
- Start by creating a lock object.
- The leader updates/renews the lease to inform the other replicas about its leadership.
- Candidate pods keep checking the lease object.
- If the leader fails to renew the lease, a new leader is elected.
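Here is a minimal sketch of that flow using client-go's `leaderelection` package. The lease name, namespace, and `POD_NAME` environment variable are placeholders, and the durations are only illustrative; k8sensus may wire this up differently.

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Each replica needs a unique identity; the pod name is a common choice.
	// POD_NAME is assumed to be injected via the Downward API.
	id := os.Getenv("POD_NAME")

	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Step 1: the lock object, a Lease in the coordination.k8s.io group.
	// "example-lease" and "default" are placeholder names.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "example-lease",
			Namespace: "default",
		},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	// Steps 2-4: RunOrDie acquires the lease, keeps renewing it while this
	// replica leads, and re-runs the election when the leader stops renewing.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // how long candidates wait before taking over
		RenewDeadline:   10 * time.Second, // leader must renew within this window
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Printf("%s: started leading", id)
			},
			OnStoppedLeading: func() {
				log.Printf("%s: stopped leading", id)
			},
			OnNewLeader: func(identity string) {
				log.Printf("new leader elected: %s", identity)
			},
		},
	})
}
```

Every replica runs the same code; whichever one acquires the Lease first becomes the leader, and the others keep retrying until the leader stops renewing.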
By cloning the repo:
```sh
git clone https://github.com/burntcarrot/k8sensus
cd k8sensus
kubectl apply -f k8s/rbac.yaml
kubectl apply -f k8s/deployment.yaml
```
By copying the deployment and RBAC definitions:
```sh
kubectl apply -f k8s/rbac.yaml
kubectl apply -f k8s/deployment.yaml
```
A complete example of how to use k8sensus is described here.
There are two commands exposed by the Makefile:
For applying definitions:

```sh
make apply
```

For cleaning up the k8sensus deployment:

```sh
make clean
```
If you like challenges and love debugging on Friday nights, then please feel free to use it on your production cluster.
Non-satirical note: Do not use in production.
After hours of debugging and opening up 20 tabs of documentation, here's what I learnt:
- Kubernetes has a `leaderelection` package in its Go client (`client-go`).
- After reading the first line in the documentation, I was a bit disappointed:
This implementation does not guarantee that only one client is acting as a leader (a.k.a. fencing).
- This made me write this code; I wanted a single-leader workflow.
- For interacting, we can use `CoordinationV1` to get the client. (docs) A small sketch of reading the Lease with it follows this list.
- `leaderelection` (under `client-go`) provides a `LeaseLock` type (docs), which can be used for the leader election. (leaders renew time in the lease)
- `leaderelection` also provides `LeaderCallbacks` (docs), which can be used for handling leader events, like logging when a new pod/replica gets elected as the new leader.
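As an illustration of what the candidate replicas can observe, here is a minimal sketch that uses the `CoordinationV1` client to read the Lease object directly and print the current holder and its renew time. The lease name and namespace are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Read the Lease object that the replicas use as their lock.
	// "example-lease" and "default" are placeholder names.
	lease, err := clientset.CoordinationV1().Leases("default").
		Get(context.Background(), "example-lease", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	if lease.Spec.HolderIdentity != nil {
		fmt.Printf("current leader: %s\n", *lease.Spec.HolderIdentity)
	}
	if lease.Spec.RenewTime != nil {
		fmt.Printf("lease last renewed at: %s\n", lease.Spec.RenewTime.Time)
	}
}
```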