Minikube enters NodeNotReady when the K8s worker node is newly provisioned by the cluster autoscaler #46
Comments
Does `minikube logs` give more clues?
I did not capture those logs. Next time this happens, I will capture them.
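For the next failure, something like the following could be added to the CI job to capture diagnostics automatically. This is a sketch only; the file names and where the artifacts get uploaded depend on the CI runner, but the commands themselves are standard minikube/kubectl:

```sh
# Capture cluster diagnostics on build failure so the NodeNotReady
# transition can be investigated after the fact. Log file names are
# illustrative; wire them into your CI runner's artifact step.
minikube status || true
minikube logs > minikube.log 2>&1 || true
kubectl get nodes -o wide > nodes.log 2>&1 || true
kubectl describe nodes >> nodes.log 2>&1 || true
```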
Logs from
Minikube node status - might be related to Ran
More logs from minikube. Looks like issues with kube-dns.
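If kube-dns is the suspect, the pods can be inspected directly. A hedged sketch; `k8s-app=kube-dns` is the conventional label for the cluster DNS pods in `kube-system`, but verify it matches your cluster:

```sh
# Check the health of the cluster DNS pods and pull their recent logs.
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system describe pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns --all-containers --tail=100
```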
After more investigation, this seems to be an issue with the kubelet not having enough capacity to start up properly when there is one big pod (with multiple containers) that builds Docker images, hosts the kind container, and runs integration tests as well. Splitting kind into its own pod seems to give the kubelet enough capacity to start up properly. Tested this by triggering 5 or more concurrent builds, and all the builds were successful.
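For reference, a minimal sketch of the split described above. The pod name, image, and resource values are illustrative assumptions; the point is that kind runs in its own pod with explicit resource requests so the kubelet on a fresh node is not starved by one big pod:

```sh
# Run kind in a dedicated pod with explicit resource requests/limits.
# Name, image, and values below are hypothetical; adjust to the CI setup.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ci-kind          # hypothetical name for the dedicated kind pod
spec:
  containers:
  - name: kind
    image: example/kind-runner:latest   # hypothetical image
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 4Gi
EOF
```

Setting requests on the build pod as well gives the scheduler accurate information, so it won't pack the build and kind onto a node that cannot hold both.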
Our builds intermittently fail with the error below. For some reason the minikube node enters a NodeNotReady state (and never recovers), which causes the workloads to not be scheduled and the build to fail. On further investigation, this behaviour is more prominent when the kind workload is scheduled on a newly provisioned worker node (EC2 instance). Any hints on how we can avoid this error?
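One possible mitigation, sketched under the assumption that the build can block before scheduling work: wait for the freshly provisioned node to report Ready before the kind workload is scheduled onto it.

```sh
# Block until every node reports Ready (with a timeout) so workloads
# aren't scheduled onto a node whose kubelet is still initializing.
kubectl wait --for=condition=Ready node --all --timeout=300s
```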