
0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. #3

Closed
calebhailey opened this issue Dec 29, 2019 · 9 comments

Comments

@calebhailey
Owner

I vaguely recalled reading about Kubernetes Taints as a concept when I was first learning about K8s, but I quickly ignored it as something I wouldn't have to deal with unless – for some crazy reason – I decided to run my own K8s cluster someday. Huzzah! 😆

Fast forward to today and the concept makes perfect sense – I just skipped a step when setting up my single-node cluster.

@calebhailey
Owner Author

calebhailey commented Dec 29, 2019

Troubleshooting was easy... kubectl describe node <node> revealed the taint, and figuring out the required configuration change was simple.

$ kubectl get nodes                                                                                                         
NAME      STATUS   ROLES    AGE   VERSION                                                                                            
homelab   Ready    master   23h   v1.17.0

$ kubectl describe node homelab
Name:               homelab
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=homelab
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock                                             
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true                                                 
CreationTimestamp:  Sat, 28 Dec 2019 13:28:27 -0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false

And there it is – by default, kubeadm init configured this node as a Kubernetes master, which would normally take care of managing other Kubernetes "worker" (or "non-master") nodes. The Kubernetes Concepts documentation describes the distinction between the Kubernetes master and non-master nodes as follows:

  • The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: kube-apiserver, kube-controller-manager and kube-scheduler.
  • Each individual non-master node in your cluster runs two processes:
    • kubelet, which communicates with the Kubernetes Master.
    • kube-proxy, a network proxy which reflects Kubernetes networking services on each node.
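As an aside: on a kubeadm cluster, those master processes typically run as static pods in the kube-system namespace, named with the node as a suffix (e.g. kube-apiserver-homelab on this node), so a quick way to see them is:

$ kubectl get pods -n kube-system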

So anyway, as soon as I saw node-role.kubernetes.io/master:NoSchedule I began nodding my head, realizing what the issue was. One Google search led me straight back to the very installation guide I had skimmed over, and the instruction I had skipped:

Control plane node isolation
By default, your cluster will not schedule pods on the control-plane node for security reasons. If you want to be able to schedule pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

With output looking something like:

node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule pods everywhere.
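As an aside, removing the taint isn't the only way around the scheduling error; a pod can instead declare a toleration for it and stay schedulable on the master with the taint left in place. A minimal sketch, with a hypothetical pod name and image:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo    # hypothetical name, for illustration only
spec:
  containers:
  - name: demo
    image: nginx           # any image works; nginx is just an example
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"     # matches the taint regardless of its value
    effect: "NoSchedule"
EOF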

So... one quick kubectl taint nodes --all node-role.kubernetes.io/master- command later, and my single-node K8s cluster was now actually useful for running pods!
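For what it's worth, the change is easy to confirm by re-checking the node (node name assumed from above); kubectl prints <none> once no taints remain:

$ kubectl describe node homelab | grep Taints
Taints:             <none>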

NOTE: there's a LOT more output from kubectl describe node <node> than this; I'm trimming the rest for brevity. All we needed was this clue about the configured Taints.
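If you only want that clue, a jsonpath query can pull the taints for every node directly, skipping the rest of the describe output. A sketch (the output line is reconstructed from the node shown above):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
homelab    node-role.kubernetes.io/master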

@alexellis

I realise this is an old issue now, but you may like https://k3s.io and https://k3sup.dev - by default k3s uses far fewer resources and untaints the master (called a server).
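For reference, the k3s quick-start from k3s.io is a one-liner, shown here as-is (with the usual caveat about piping scripts to a shell):

$ curl -sfL https://get.k3s.io | sh -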

@calebhailey
Owner Author

Thanks, @alexellis! This project was mainly for the purposes of learning more about k8s internals. This issue was more of a "feature" than a bug in the context of my homelab. I knew about taints, but I hadn't encountered them in any of my k8s usage (mostly hosted K8s, like GKE).

Having said that, I've been wanting to give K3s a try, so I'll probably do that soon!

Cheers 🍻

@Anushamobis

Thanks a lot for posting this... it was a real lifesaver!!!

@didip

didip commented Jul 13, 2020

Ha! I was having this exact same problem with my homelab as well. Thanks, mate!

@EPALKAA

EPALKAA commented Mar 19, 2021

Thanks calebhailey... it really helped to resolve my problem.

@sanzenwin

sanzenwin commented May 12, 2021

@calebhailey, thanks a lot, it is really helpful. 🥇

@sodared

sodared commented Nov 3, 2021

Awesome!

@akaPipo

akaPipo commented Aug 4, 2022

Thanks a lot!
