Monitor Ready status from node #419
Conversation
It might make sense to also make …
I wonder if it could check whether the node is actually part of the cluster it's supposed to be in 🤔

DrainNode, UncordonNode, and DeleteNode from the worker are only possible if the worker is working; maybe that is something that needs to be done separately for the …

I recall the original intention of that was to make it usable from cloud-init or similar: a single command that can install the current host as a worker into an existing cluster with just a join token. I don't know if it needs to know how to remove one.
The cluster ID from GetClusterID should probably be the same between a controller and its workers, so that could be checked.
I guess that makes sense.
Isn't this kinda handled already with …
What a node can access with the permissions it gets from the kubelet.conf is quite limited.
I think both are valid uses; @jnummelin may remember the intended use case.
It was just an idea to check whether the clusterId is the same for the node and the controllers; that would guarantee the node is in the correct cluster. Just checked, and kubelet.conf gives access to the required configmaps.
Seems like a valid way to validate then.
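A minimal sketch of what that validation could look like, assuming the cluster ID corresponds to the UID of the kube-system namespace and that the worker's kubelet kubeconfig lives at /var/lib/k0s/kubelet.conf; both the paths and the derivation of the ID are assumptions for illustration, not necessarily k0sctl's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// clusterID returns the UID of the kube-system namespace, used here as a
// stand-in for the cluster ID. Whether kubelet.conf permits this exact
// read is an assumption for the sketch.
func clusterID(kubeconfig string) (string, error) {
	out, err := exec.Command(
		"kubectl", "--kubeconfig", kubeconfig,
		"get", "namespace", "kube-system", "-o", "jsonpath={.metadata.uid}",
	).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Hypothetical kubeconfig paths: one fetched from a controller, one
	// belonging to the worker being validated.
	controllerID, err := clusterID("/tmp/controller-admin.conf")
	if err != nil {
		panic(err)
	}
	workerID, err := clusterID("/var/lib/k0s/kubelet.conf")
	if err != nil {
		panic(err)
	}
	if controllerID != workerID {
		fmt.Println("node is joined to a different cluster")
	} else {
		fmt.Println("cluster IDs match")
	}
}
```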
Monitor the Ready status from the node itself instead of via the leader controller.

This prepares for worker-only installs, where there is no controller in the k0sctl config file and only a join token is used.
Fixes #327
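A minimal sketch of node-side readiness monitoring under the same assumptions as above (kubelet kubeconfig at /var/lib/k0s/kubelet.conf, node name equal to the hostname, and an arbitrary poll interval and timeout); k0sctl's actual implementation may differ:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// Assumed default path to the worker's kubelet kubeconfig on a k0s host.
const kubeletConf = "/var/lib/k0s/kubelet.conf"

// nodeReady asks the API server, using the node's own kubelet credentials,
// whether this node's Ready condition is True. A kubelet is allowed to
// read its own Node object, so no controller access is needed.
func nodeReady(nodeName string) (bool, error) {
	out, err := exec.Command(
		"kubectl", "--kubeconfig", kubeletConf,
		"get", "node", nodeName,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
	).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Assumption: the node is registered under its hostname; in practice
	// the registered node name can differ.
	nodeName, err := os.Hostname()
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute) // arbitrary timeout for the sketch
	for time.Now().Before(deadline) {
		if ready, err := nodeReady(nodeName); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node to become Ready")
	os.Exit(1)
}
```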