Using the playbook scale.yml to scale out cluster worker nodes will restart kube-proxy. #11272
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What happened?
I noticed that using the playbook scale.yml to scale out the cluster worker nodes will restart kube-proxy.
The corresponding task: https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes/kubeadm/tasks/main.yml#L204
Is it necessary to restart kube-proxy in the scenario where only worker nodes are being added?
What did you expect to happen?
When only worker nodes are being added, I don't think it is necessary to restart kube-proxy, so I believe this step could be skipped or made conditional in that scenario.
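For illustration only, here is a minimal sketch of what such a guard could look like. This is not the actual Kubespray task (the real one is at the link above); `scale_adds_workers_only` is a made-up variable name, and the kubectl invocation is only an assumption about how the restart is performed.

```yaml
# Hypothetical sketch, not the task shipped in Kubespray: gate the kube-proxy
# restart behind a flag so it is skipped when the play only adds worker nodes.
# `scale_adds_workers_only` is an illustrative variable, not an existing one.
- name: Restart kube-proxy pods so they reload the regenerated ConfigMap
  command: >-
    {{ bin_dir }}/kubectl --kubeconfig /etc/kubernetes/admin.conf
    delete pod -n kube-system -l k8s-app=kube-proxy
  run_once: true
  delegate_to: "{{ groups['kube_control_plane'] | first }}"
  when:
    - not (scale_adds_workers_only | default(false))
```

Any condition that distinguishes a worker-only scale-out from a full cluster run would do; the variable above is just a placeholder for that check.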
How can we reproduce it (as minimally and precisely as possible)?
This can be reproduced by using the playbook scale.yml to scale out the cluster worker nodes.
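As a hypothetical minimal setup (host names and addresses below are placeholders, not taken from this report): add a new host to the kube_node group only, run scale.yml limited to that host, and observe that the existing kube-proxy pods are deleted and recreated.

```yaml
# Hypothetical minimal inventory (hosts.yml); node1 is an existing
# control-plane node, new-worker is the worker being added via scale.yml.
all:
  hosts:
    node1:
      ansible_host: 10.0.0.1
    new-worker:
      ansible_host: 10.0.0.2
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        new-worker:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```

Running scale.yml with --limit set to the new worker (as in the command below) is what triggered the kube-proxy restart in this report.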
OS
Version of Ansible
Version of Python
Python 3.10.13
Version of Kubespray (commit)
774d824
Network plugin used
calico
Full inventory with variables
Command used to invoke ansible
ansible-playbook -i /conf/host.yml --become-user root -e "@/conf/group_vars.yml" --private-key /auth/ssh-privatekey /kubespray/scale.yml --limit=dev1-w-10-64-80-147 --forks=10
Output of ansible run
~
Anything else we need to know
No response