Update kube-master kubelet config #687
As discussed on Slack, I'd prefer to have the masters as part of the cluster, but unschedulable. This gives the advantage that you can use kubectl to analyze the cluster state. To implement this, we have two options:

1. Register the master kubelets as unschedulable (the unschedulable flag).
2. Register them normally and apply a taint to the master nodes.
kubeadm already does solution 2, and I'd actually prefer this, as it gives admins the ability to force-schedule things on masters with tolerations. There is, however, one problem: DaemonSets currently (v1.5.0-beta.2) ignore both taints and the unschedulable flag, which means their pods are scheduled on all registered nodes, including the masters. Honoring taints was once merged into Kubernetes but later reverted in kubernetes/kubernetes#31907, and it's not clear when it will land again. We can probably ignore this for now and still use the unschedulable flag or taints: I would assume most users only use DaemonSets for cluster-wide admin workloads like logging addons, in which case running on masters is acceptable and probably even desired. Also, DaemonSets are still in beta, so maybe we can accept some unexpected behavior here.
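For reference, the two approaches look roughly like this with kubectl. The node name `master-1` is a placeholder, and the taint key/value below is illustrative only (kubeadm applies its own master taint):

```shell
# Option 1: mark the master unschedulable (cordon sets the unschedulable flag)
kubectl cordon master-1

# Option 2: taint the master; only pods with a matching toleration can land there
kubectl taint nodes master-1 dedicated=master:NoSchedule
```

With option 2, an admin pod would carry a `tolerations` entry matching that key/effect to be force-scheduled onto the master.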
We have two independent reproductions of the blocking issue on coreos-stable: the static pods are missing from the environment even though their manifest files are in place.
Let's first remove the blocking factor, then think about a proper implementation.
Related: #737
Could we now try to address the restarting-static-pods issue described in #737 (comment)?
If the kube-master node is not in the kube-node group, we need to ensure the following kubelet options are not present:
--apiserver
--kubeconfig
--require-kubeconfig
--register-node
--register-unschedulable
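A minimal sketch of what the kubelet invocation on such a master might look like with those flags stripped (flag names per kubelet of that era; the manifest path is an assumption): without an API server or kubeconfig the kubelet never registers as a node and only runs the local static pods.

```shell
# Hypothetical kubelet args for a master that is NOT a cluster node:
# no --api-servers/--kubeconfig/--register-* flags, so the kubelet
# runs standalone and only manages static pod manifests from disk.
kubelet \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged=true
```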
This is in reference to discussion in issue kubernetes/kubernetes#38187 (comment)