
K0S in Docker Swarm (and Compose)

#StandWithBelarus Voices From Belarus Stand With Ukraine

Based on k0s-in-docker.

Status: experimental. It works, but upgrade/rollback of the controller and any deployments beyond the basic setup haven't been tested yet.

Pros:

  • Easier management and rolling updates of control components with Docker Swarm, including automatic migration to other hosts in case of failure.

Cons:

  • It would be better to start kubelet (the k0s worker) directly on the host, without Docker: restarting the kubelet container could otherwise impact all the pods running inside it.
  • If you want live migration of k0s control containers between nodes, you need to set up and manage data-path or volume-storage synchronization between the control nodes.

Alternatively, you can consider running Kubernetes within Kubernetes using the following projects:


About the Author

Hello, everyone! My name is Filipp, and I have been working with high-load distributed systems and services, security, monitoring, continuous deployment, and release management (the DevOps domain) since 2012.

One of my passions is developing DevOps solutions and contributing to the open-source community. By sharing my knowledge and experiences, I strive to save time for both myself and others while fostering a culture of collaboration and learning.

I had to leave my home country, Belarus, due to my participation in protests against the oppressive regime of dictator Lukashenko, who maintains a close affiliation with Putin. Since then, I have been trying to rebuild my life from zero in other countries.

If you are seeking a skilled DevOps lead or architect to enhance your project, I invite you to connect with me on LinkedIn or explore my valuable contributions on GitHub. Let's collaborate and create some cool solutions together :)

How You Can Support My Projects

There are a couple of ways you can support my projects:

  • Sending Pull Requests (PRs):
    If you come across any improvements or suggestions for my configurations or texts, feel free to send me Pull Requests (PRs) with your proposed changes. I appreciate your contributions <3

  • Making Donations:
    If you find my projects valuable and would like to support them financially, you can make a donation. Your contributions will go towards further development, maintenance, and improvement of the projects. Your support is greatly appreciated and helps to ensure the continued success of the projects.

Thank you for considering supporting my work. Your involvement and contributions make a significant difference in the growth and success of my projects.

Setup

Docker Compose

  1. Change directory:

    cd compose

  2. Generate secrets (only once per cluster). This command populates the ./secrets directory next to the YAML files and exits with code 0:

    docker-compose -f generate-secrets.yml up
    docker-compose -f generate-secrets.yml down
    echo "externalAddress=$(hostname -i)" >> .env

  3. Start the controller:

    docker-compose -f controller.yml up -d

Wait until all k0s containers are up and running:

    docker-compose -f controller.yml ps
    docker-compose -f controller.yml exec k0s-1 k0s kubectl get --raw='/livez?verbose'

Optionally, create a worker join token if you don't use the static pregenerated one:

    docker-compose -f controller.yml exec k0s-1 k0s token create --role worker > ./secrets/worker.token
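The wait step above can be scripted as a small polling loop (a sketch, assuming the controller service is named k0s-1 as in controller.yml; the plain /livez endpoint returns just "ok" when healthy):

```shell
# Poll the apiserver /livez endpoint until it answers "ok".
# k0s-1 is the compose service name assumed from controller.yml.
wait_for_livez() {
  for _ in $(seq 1 30); do
    if docker-compose -f controller.yml exec -T k0s-1 \
        k0s kubectl get --raw='/livez' 2>/dev/null | grep -qx 'ok'; then
      echo "apiserver is live"
      return 0
    fi
    sleep 10
  done
  echo "timed out waiting for apiserver" >&2
  return 1
}
```

Then just call `wait_for_livez` before moving on to the kubelet step.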

  4. Start kubelet.

Load the kernel modules required by Calico, if you use it (note -a, which makes modprobe load several modules in one call):

    modprobe -a ipt_ipvs xt_addrtype ip6_tables \
    ip_tables nf_conntrack_netlink xt_u32 \
    xt_icmp xt_multiport xt_set vfio-pci \
    xt_bpf ipt_REJECT ipt_set xt_icmp6 \
    xt_mark ip_set ipt_rpfilter \
    xt_rpfilter xt_conntrack
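To see which of these modules actually loaded, a small helper can walk /proc/modules (a hypothetical helper, not part of k0s; note that modules built into the kernel won't appear there and will show as MISSING):

```shell
# Report each requested module as loaded or missing, based on
# /proc/modules (dashes in module names appear there as underscores).
check_modules() {
  for mod in "$@"; do
    name=$(printf '%s' "$mod" | tr '-' '_')
    if grep -q "^${name} " /proc/modules 2>/dev/null; then
      echo "loaded: $mod"
    else
      echo "MISSING: $mod"
    fi
  done
}

check_modules xt_set ipt_rpfilter xt_rpfilter xt_conntrack
```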

Start kubelet:

    docker-compose -f kubelet.yml up -d

Docker Swarm

  1. Change directory:

    cd swarm

  2. Generate secrets (only once per cluster). This command populates the ./secrets directory next to the YAML files and exits with code 0:

    docker-compose -f generate-secrets.yml up
    docker-compose -f generate-secrets.yml down
    echo "externalAddress=$(hostname -i)" >> .env
    echo "stackName=k0s" >> .env

  3. Set up Docker Swarm:

    docker swarm init --advertise-addr $(hostname -i)

  4. Start the controller:

    export $(grep -v '^#' .env | xargs -d '\n')
    docker stack deploy --compose-file controller.yml "$stackName"

Deploy haproxy with Compose on each controller host:

    docker-compose -f haproxy.yml up -d

(kube-proxy doesn't work properly on the same host as haproxy in swarm mode.)

Wait until all k0s containers are up and running:

    docker stack ps "$stackName"
    docker service ls
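The wait can also be automated by polling `docker service ls` until every service reports all replicas (a sketch, relying on the com.docker.stack.namespace label that docker stack deploy sets):

```shell
# Wait until every service in the stack reports all replicas running.
wait_for_stack() {
  stack="${1:-k0s}"
  for _ in $(seq 1 60); do
    pending=$(docker service ls \
      --filter "label=com.docker.stack.namespace=${stack}" \
      --format '{{.Replicas}}' | awk -F/ '$1 != $2' | wc -l)
    if [ "$pending" -eq 0 ]; then
      echo "stack ${stack} is ready"
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for stack ${stack}" >&2
  return 1
}
```

Usage: `wait_for_stack "$stackName"`.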
  5. Start kubelet.

Load the kernel modules required by Calico, if you use it (note -a, which makes modprobe load several modules in one call):

    modprobe -a ipt_ipvs xt_addrtype ip6_tables \
    ip_tables nf_conntrack_netlink xt_u32 \
    xt_icmp xt_multiport xt_set vfio-pci \
    xt_bpf ipt_REJECT ipt_set xt_icmp6 \
    xt_mark ip_set ipt_rpfilter \
    xt_rpfilter xt_conntrack

Docker Swarm doesn't support privileged mode, so run kubelet with Compose:

    docker-compose -f kubelet.yml up -d

Known problems

* ETCD health status delay

The etcd cluster needs some time (about 5 minutes?) to detect a hard-powered-off member.

* Scale k0s from 1 container to 3 and more

A single etcd node must at least be restarted with new parameters before it can form a cluster with two other nodes.

* Docker Swarm doesn't resolve IP of container into DNS name

Docker Swarm doesn't resolve a container's IP into a DNS name while the container's healthcheck is failing. Since we need DNS resolution to get the containers up, I disabled the healthcheck =(

* ETCD in Docker Swarm produces remote error: tls: bad certificate

The etcd peer certificate contains only one IP, while in Swarm the container name resolves to a different service IP. Issue: k0sproject/k0s#3318. Solution: use the deploy.endpoint_mode: dnsrr config.
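To confirm which names and IPs the peer certificate actually covers, you can dump its SANs with openssl (a sketch; the peer.crt path is an assumption based on the k0s data directory layout used elsewhere in this README):

```shell
# Print the Subject Alternative Names line of a certificate.
cert_sans() {
  openssl x509 -in "$1" -noout -text \
    | grep -A1 'Subject Alternative Name' | tail -n 1
}
```

For example, inside a controller container: `cert_sans /var/lib/k0s/pki/etcd/peer.crt` (path assumed, verify on your setup).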

* Worker kubeproxy & coredns work only with network 'bridge' or 'host'

    #deploy:
    #  mode: replicated
    #  replicas: 3
    volumes:
      - /var/lib/k0s
      #- /var/lib/k0s/etcd
      #- /var/lib/k0s/containerd # for worker
    tmpfs:
      - /run
      - /var/run
    network_mode: "bridge"

* k0s Manifest Deployer load manifests only from first level directories under /var/lib/k0s/manifests

You can check that the controller imported all token secrets: k0s kubectl get secrets -A.
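A quick way to see which directories the Manifest Deployer will actually scan is to list only the first level under the manifests root (a hypothetical helper using the default /var/lib/k0s/manifests path):

```shell
# List first-level directories under the manifests root; deeper
# directories are ignored by the k0s Manifest Deployer.
list_manifest_dirs() {
  find "${1:-/var/lib/k0s/manifests}" -mindepth 1 -maxdepth 1 -type d
}
```

Run it inside a controller container as `list_manifest_dirs`; anything nested deeper than one level won't be applied.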

* Error: configmaps "worker-config-default-1.27" is forbidden: User "system:node:<NODENAME>" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '<NODENAME>' and this object

This is a problem with Node Authorization. I hit it with a pregenerated worker token and fixed it by generating a new join token after the controller started.

* error mounting "cgroup" to rootfs at "/sys/fs/cgroup"

Don't mount /sys:/sys:ro

* Calico node failed with error felix/table.go 1321: Failed to execute ip(6)tables-restore command error=exit status 2 errorOutput="iptables-restore v1.8.4 (legacy): Couldn't load match rpfilter':No such file or directory\n\nError occurred at line: 10\nTry iptables-restore -h' or 'iptables-restore --help' for more information.\n"

The xt_rpfilter kernel module is possibly missing; you can check with this command in the kubelet container: calicoctl node checksystem.

* Kube-proxy failed with some iptables error

Possibly some kernel modules are missing, or this kube-proxy version can't work with your system. I got something like this with k0s v1.27.1-k0s.0 on an old boot2docker VM with a 4.19.130-boot2docker kernel. However, k0s v1.26.4-k0s.0 seems to work fine.

https://www.tencentcloud.com/document/product/457/51209

Useful commands

Controller containers include tools like kubectl and etcdctl for managing the cluster. You can use them for debugging or for administration with admin privileges:

* Get k0s cluster health status

    k0s kubectl get --raw='/livez?verbose'

* Get everything that exists in the cluster

    k0s kubectl get all -A
    k0s kubectl get configmaps -A
    k0s kubectl get secrets -A
    k0s kubectl get nodes -A
    k0s kubectl get namespaces -A
    k0s kubectl get ingress -A
    k0s kubectl get jobs -A
    k0s kubectl get endpoints -A
    k0s kubectl get users,roles,rolebindings,clusterroles -A
    k0s kubectl get events -A

* Check k0s dynamic config (if enabled)

    k0s config status

* Get member list of etcd cluster

    etcdctl --endpoints=https://127.0.0.1:2379 \
    --key=/var/lib/k0s/pki/etcd/server.key \
    --cert=/var/lib/k0s/pki/etcd/server.crt \
    --insecure-skip-tls-verify --write-out=table \
    member list

Or use the built-in wrapper: k0s etcd member-list

* Get etcd member endpoint status and health

    etcdctl --endpoints=https://127.0.0.1:2379 \
    --key=/var/lib/k0s/pki/etcd/server.key \
    --cert=/var/lib/k0s/pki/etcd/server.crt \
    --insecure-skip-tls-verify --write-out=table \
    endpoint status
    etcdctl --endpoints=https://127.0.0.1:2379 \
    --key=/var/lib/k0s/pki/etcd/server.key \
    --cert=/var/lib/k0s/pki/etcd/server.crt \
    --insecure-skip-tls-verify --write-out=table \
    endpoint health
