[Kubemark] Failures in master kubelet trying to start pods #68190
Comments
Changed title, as it seems like kubelet is failing to start all master pods (not just apiserver).
@shyamjvs possibly related to moby/moby#31614?
For what it's worth, I think I am seeing this as well, or at least something similar:

Error

Using jenkins/jnlp-slave configured manually from the Jenkins UI. This error appears when Kubernetes attempts to create the pod.

Versions

Bonus

Interestingly, if I add the container to the cluster, manually configured as a static Jenkins node, it comes up all smiles: jnlp-slave.yaml:

I am happy to provide more info if this is indeed related.
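The jnlp-slave.yaml referenced above was not included in the thread; a minimal sketch of a static JNLP agent pod, assuming the jenkins/jnlp-slave image mentioned in the comment (the URL, secret, and agent name are placeholders, not values from this issue), might look like:

```yaml
# Hypothetical sketch only -- the actual jnlp-slave.yaml was not posted.
apiVersion: v1
kind: Pod
metadata:
  name: jnlp-slave
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave
    env:
    - name: JENKINS_URL
      value: "http://jenkins.example.com:8080"   # placeholder controller URL
    args:
    - "<agent-secret>"   # placeholder secret from the Jenkins node page
    - "jnlp-slave"       # placeholder agent name
```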
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Seems obsolete now.
@bclouser I have a similar issue, did you find any solution? Thanks!
I encountered this error when I misconfigured a deployment. Note the incorrect value.
The same happens if you specify memory limits that are too low. I tried starting with a 10Mi memory limit and I got the same errors.
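The too-low-limit failure mode described above can be reproduced with a pod spec along these lines (a sketch only; the pod name, container name, and image are illustrative, not from this thread):

```yaml
# Hypothetical repro: a 10Mi memory limit is below what most images need
# to start, so the container is OOM-killed before it can run.
apiVersion: v1
kind: Pod
metadata:
  name: limit-test
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        memory: "10Mi"   # deliberately too low to trigger the failure
```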
We recently started observing flaky failures in a couple of kubemark jobs:
E.g. failed run: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/16634
The reason seems to be that kubelet was continuously failing to start the kube-apiserver pod with errors such as:
I'll try digging into it a bit, but @yujuhong @mtaufen - do you have any leads on why this might be happening?
cc @kubernetes/sig-scalability-bugs @wojtek-t
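For anyone triaging similar kubelet pod-start failures, a few standard places to look. This is a general debugging sketch, not steps taken in this issue; it assumes SSH access to the master node, a systemd-managed kubelet, and a Docker runtime:

```shell
# Recent kubelet errors on the master (systemd hosts)
journalctl -u kubelet --since "1 hour ago" | grep -i error

# Status and events of the failing static/mirror pod
kubectl -n kube-system describe pod kube-apiserver

# The runtime's view of the container, if the runtime is Docker
docker ps -a | grep apiserver
```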