Add timeout value to configure how long kubeadm init takes to timeout #1168
Comments
I agree with this. /kind feature
@timothysc opinions?
I'd love to add this. +1.
This has been requested a number of times, but the use case was on the other end of the spectrum. @fabriziopandini could we roll this into the v1beta1 component (apiserver) migration work?
/assign @soggiest
@timothysc ok
A timeout for the API server was added in the v1beta1 config:
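A rough sketch of that v1beta1 setting, assuming it is the `timeoutForControlPlane` field under `apiServer` in `ClusterConfiguration` (the duration shown is illustrative):

```yaml
# How long kubeadm waits for the API server to become healthy during init.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 8m0s
```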
FEATURE REQUEST
Add a kubeadm configuration option or CLI flag that determines how long `kubeadm init` waits before timing out. Cloud-based installations can fail if the API server's load balancer takes too long to recognize that an instance is listening on the appropriate port. This can be mitigated by making the load balancer's health checks faster, but an option to extend the `kubeadm init` timeout would be helpful.

Versions
kubeadm version (use `kubeadm version`): 1.11.3

Environment:
- Kubernetes version (use `kubectl version`): 1.11.3
- Kernel (e.g. `uname -a`): #77-Ubuntu SMP

What happened?
kubeadm init runs were consistently failing on AWS with the error "timed out waiting for condition".
After each kubeadm run the Kubernetes control plane was actually working as expected. While investigating the cause, I observed that the AWS ELB would only become active after kubeadm had already timed out. Making the ELB health checks faster alleviated the issue.
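On a classic ELB that kind of tightening can be done with the AWS CLI; a sketch, with a hypothetical load balancer name and illustrative thresholds:

```sh
# Shorten the health check interval and thresholds so the ELB marks the API
# server in service sooner; "kube-apiserver-elb" is a placeholder name.
aws elb configure-health-check \
  --load-balancer-name kube-apiserver-elb \
  --health-check Target=TCP:6443,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```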
From a UX standpoint I should be able to set the init timeout from a config option or CLI flag rather than worry about a race against my ELB's health checks.
What you expected to happen?
`kubeadm init` would wait long enough for the control plane to come up.
How to reproduce it (as minimally and precisely as possible)?
Region: ap-southeast-1 (Singapore)

Configure an AWS EC2 instance with:
- AMI: Ubuntu 16.04 LTS
- Size: m5.xlarge
- Storage: 50 GB
- Tag: kubernetes.io/cluster/kubernetes: owned
Configure an ELB with:
- Standard health check intervals, health check against TCP:6443
- TCP passthrough 443 -> TCP 6443 (see the CLI sketch below)
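For illustration, one way to create that listener with the AWS CLI (the load balancer name and subnet ID are placeholders):

```sh
# Classic ELB with a TCP 443 -> 6443 passthrough listener; names and IDs are
# placeholders, not the exact values used in this report.
aws elb create-load-balancer \
  --load-balancer-name kube-apiserver-elb \
  --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=6443" \
  --subnets subnet-0123456789abcdef0
```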
Run `kubeadm init` with a kubeadm config that points to the ELB as the `controlPlaneEndpoint`.
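As an illustration only, such a config would look roughly like this (shown in the v1beta1 shape discussed in the comments; kubeadm 1.11 used an older config format, and the ELB DNS name is a placeholder):

```yaml
# Point kubeadm at the ELB so the control plane is reached through it.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controlPlaneEndpoint: "my-elb-1234567890.ap-southeast-1.elb.amazonaws.com:443"
```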