
Pod deploying on master and not on node #1402

Closed
DhruvM1994 opened this issue Feb 9, 2020 · 12 comments
DhruvM1994 commented Feb 9, 2020

Version:
1.17

Describe the bug
I am trying to deploy a k3s cluster on two Raspberry Pi computers. Thereby, I would like to use the Rapsberry Pi 4 as the master/server of the cluster and a Raspberry Pi 3 as a worker node/agent.
However, when I try to make a deployment the pod is always deployed on the Raspberry Pi 4 (master) and not on the worker node.

To Reproduce
On both computers:

  • Insert cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory into /boot/cmdline.txt

On master Raspberry Pi 4:
curl -sfL https://get.k3s.io | sh -
sudo kubectl run nginx-sample --image nginx --port 80

On worker node Raspberry Pi 3:
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -

Expected behavior
The command sudo kubectl get pods -o wide --all-namespaces should show that the pod is deployed on the worker node.

Actual behavior
The command sudo kubectl get pods -o wide --all-namespaces shows that the pod is deployed on the master node.

[Screenshot: SampleDeployment]

Additional context
The command sudo kubectl get nodes shows both nodes.

[Screenshot: Pod]

When trying to use the Raspberry Pi 3 as the master and the Raspberry Pi 4 as the node, then the pod is deployed on the worker node.

@carpenike

Did you taint the master node? I believe by default k3s will not taint it.

kubectl label --overwrite node <MASTER> node-role.kubernetes.io/master=true:NoSchedule

@DhruvM1994 (Author)

@carpenike I did not specify it explicitly.
However, when I use the Raspberry Pi 3 as the master node and the Rasperry Pi 4 as the worker node, then the deployment is deployed on the worker node, as expected.


Kerwood commented Feb 11, 2020

@carpenike If you taint the master node like that, what happens if you delete the coredns pod? It wouldn't be able to reschedule on the master node again, would it?

@carpenike

@Kerwood -- correct. You'd have to use a toleration within a deployment to allow deployments on the master node. Also, I mis-wrote that. It's taint, not label.

kubectl taint --overwrite node <MASTER> node-role.kubernetes.io/master=true:NoSchedule

My suspicion is that it's choosing the node with the most available resources when doing the node selection, and choosing the Pi4.


Kerwood commented Feb 12, 2020

I have been trying different things over the last couple of days to achieve the same.
Here's my solution (which, in my opinion, should be the default).

1. Taint the master with the command below.

kubectl taint node dev-k3s-master k3s-controlplane=true:NoSchedule

2. Add tolerations to the control-plane services.

kubectl edit deployments local-path-provisioner -n kube-system

And add the following to the containers spec.

spec:
  ...
  template:
  ...
    spec:
    ...
      tolerations:
      - effect: NoExecute
        operator: Exists
      - effect: NoSchedule
        operator: Exists

Do the same for metrics-server and coredns. The latter will already have a tolerations: key present, so just add the two effects to the list.
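The steps above can be sanity-checked from the CLI. A minimal sketch, assuming the example node name dev-k3s-master from step 1 (requires a running cluster):

```shell
# Confirm the taint is present on the master node
kubectl describe node dev-k3s-master | grep -i taints

# Confirm the kube-system pods have rescheduled and are Running,
# and check the NODE column to see where each one landed
kubectl get pods -n kube-system -o wide
```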


keslerm commented Feb 14, 2020

It would be nice if the install script recognized this setup via some kind of flag (e.g. --no-schedule-master) and granted the appropriate tolerations automatically.


alekc commented Apr 6, 2020

For the sake of completeness: the merge request above (https://github.com/rancher/k3s/pull/1275/files) takes care of this; however, at the time of writing it has not yet been released. On older versions you can "future proof" against the behaviour of the merge request by creating a patch.yaml file with the following contents:

spec:
  template:
    spec:
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"

and running

kubectl patch deployment metrics-server -n kube-system --patch "$(cat patch.yaml)"
kubectl patch deployment coredns -n kube-system --patch "$(cat patch.yaml)"
kubectl patch deployment local-path-provisioner -n kube-system --patch "$(cat patch.yaml)"

from the same folder.

The master node must have the taint node-role.kubernetes.io/master=true:NoSchedule, which you can apply either at install time or with the taint command mentioned above.
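Applying that taint at install time can be sketched as follows. This assumes the k3s server's --node-taint flag (passed through the install script via INSTALL_K3S_EXEC); verify against the k3s server CLI docs for your version:

```shell
# Sketch: install the k3s server with the master taint already applied,
# so user workloads never get scheduled onto it (flag name assumed
# from the k3s server CLI; check your release's documentation)
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="server --node-taint node-role.kubernetes.io/master=true:NoSchedule" sh -
```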

onedr0p (Contributor) commented Apr 15, 2020

@alekc It is my understanding that if the k3s master is rebooted, those patches will get overwritten. Or am I wrong?


izeau commented May 2, 2020

To add to @alekc's answer, I also had to patch the service load balancer:

$ kubectl patch daemonset svclb-traefik -n kube-system --patch "$(cat patch.yaml)"


alekc commented May 20, 2020

Right, I've been deploying with the --no-deploy=traefik flag, so I didn't have that one.

DhruvM1994 (Author) commented Sep 8, 2020

I attached a label to the node and used the nodeSelector field to specify which node to deploy to.

See: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
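A minimal sketch of that approach. The role=worker label and pod name are hypothetical, not from the original comment:

```yaml
# First label the worker (shell): kubectl label node <WORKER> role=worker
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sample
spec:
  nodeSelector:
    role: worker          # hypothetical label applied to the worker node
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```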

@franciscosuca

Try this command to avoid scheduling pods on the master node:
sudo kubectl taint node <your_master_node> node-role.kubernetes.io/master:NoSchedule
To deploy to a specific node, try nodeSelector; for spreading across several nodes, try nodeAffinity.
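The nodeAffinity variant can be sketched like this. The role=worker label and pod name are hypothetical examples; requiredDuringSchedulingIgnoredDuringExecution makes the constraint hard, while preferredDuringSchedulingIgnoredDuringExecution would make it a soft preference:

```yaml
# Sketch: require scheduling onto nodes carrying a hypothetical role=worker label
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: role
                operator: In
                values: ["worker"]
  containers:
    - name: nginx
      image: nginx
```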
