๐Ÿ—๏ธ k8sensus

A leader election process for Kubernetes, built with the client-go client library.

Preview

Table of Contents

  • Why?
  • How does it work?
  • Installation
  • Usage
  • Local Development
  • Production Usage
  • What did I learn?

Why?

To make systems more fault-tolerant: handling failures in replicas is crucial for high availability. A leader election process ensures that if the leader fails, one of the candidate replicas is elected as the new leader.

How does it work?

An overview of how it works (a minimal Go sketch follows this list):

  • Start by creating a lock object.
  • The leader updates/renews the lease, informing the other replicas of its leadership.
  • Candidate pods check the lease object.
  • If the leader fails to renew the lease, a new leader is elected.
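
Here is a minimal sketch of that flow using client-go's leaderelection package. The lease name, namespace, timings, and the POD_NAME environment variable are illustrative assumptions, not necessarily what k8sensus itself uses:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Each replica identifies itself by its pod name (assumed to be injected via the Downward API).
	podID := os.Getenv("POD_NAME")

	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("failed to build in-cluster config: %v", err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Step 1: create the lock object (a Lease in the coordination.k8s.io API group).
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8sensus-lease", // illustrative name
			Namespace: "default",        // illustrative namespace
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: podID,
		},
	}

	// Steps 2-4: the leader renews the lease, candidates watch it, and a new
	// leader is elected when the current one stops renewing in time.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Printf("%s: started leading", podID)
			},
			OnStoppedLeading: func() {
				log.Printf("%s: stopped leading", podID)
			},
			OnNewLeader: func(identity string) {
				log.Printf("new leader elected: %s", identity)
			},
		},
	})
}

Run several replicas of this and only the lease holder logs "started leading"; the other candidates keep retrying and observe leader changes via OnNewLeader.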

Installation

By cloning the repo:

git clone https://github.com/burntcarrot/k8sensus
cd k8sensus
kubectl apply -f k8s/rbac.yaml
kubectl apply -f k8s/deployment.yaml

By copying the deployment and RBAC definitions:

kubectl apply -f k8s/rbac.yaml
kubectl apply -f k8s/deployment.yaml

Usage

A complete example of how to use k8sensus is described here.
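
Once the deployment is running, one way to check which pod currently holds leadership is to look at the Lease object the election writes to. The lease name and namespace here are assumptions; substitute whatever your k8sensus deployment creates:

kubectl get lease -n default
kubectl describe lease k8sensus-lease -n default

The HOLDER column (or the Holder Identity field in the describe output) shows the pod that is currently the leader.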

Local Development

There are two targets exposed by the Makefile:

For applying definitions:

make apply

For cleaning up k8sensus deployment:

make clean
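
Roughly, these targets wrap the kubectl commands from the Installation section. A sketch of what they are expected to do (the actual Makefile may differ):

# make apply
kubectl apply -f k8s/rbac.yaml
kubectl apply -f k8s/deployment.yaml

# make clean
kubectl delete -f k8s/deployment.yaml
kubectl delete -f k8s/rbac.yaml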

Production Usage

If you like challenges and love debugging on Friday nights, then please feel free to use it on your production cluster. 😋

Non-satirical note: Do not use in production.

What did I learn?

After hours of debugging and opening up 20 tabs of documentation, here's what I learnt:

  • Kubernetes has a leaderelection package in its Go client (client-go).
  • After reading the first line of the documentation, I was a bit disappointed:

This implementation does not guarantee that only one client is acting as a leader (a.k.a. fencing).

  • This made me write this code; I wanted a single-leader workflow.
  • For interacting with the coordination API, we can use CoordinationV1 to get the client. (docs) (See the sketch after this list.)
  • leaderelection (under client-go) provides a LeaseLock type (docs), which can be used as the lock for the leader election (the leader renews its time in the lease).
  • leaderelection also provides LeaderCallbacks (docs), which can be used to handle leadership events, e.g. logging when a new pod/replica gets elected as the new leader.
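
To illustrate the CoordinationV1 client mentioned above, here is a small sketch that reads the current lease holder directly. The lease name and namespace are made up for the example:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Read the Lease object the election writes to and report its holder.
	// "k8sensus-lease" and "default" are illustrative; use whatever your deployment creates.
	lease, err := client.CoordinationV1().Leases("default").Get(context.Background(), "k8sensus-lease", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if lease.Spec.HolderIdentity != nil {
		log.Printf("current leader: %s", *lease.Spec.HolderIdentity)
	}
}

Spec.HolderIdentity carries the same identity string that the elected replica writes through the LeaseLock.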