[k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite} #29954
Autoassigned... to me? @lavalamp
Check the test owners CSV file and find a new owner if you don't think you … (replying to David McMahon's notification, Tue, Aug 2, 2016 at 3:48 PM)
Looking at the test grid, I see no other examples of this flaking.
Well, that's why it's only P2 :)
So I know of at least 2-3 Docker-related hangs / failures that might result in this symptom. Will try to take a peek at those.
[FLAKE-PING] @smarterclayton This flaky-test issue would love to have more attention.
The last run was due to a GKE break (the test didn't have access to make calls to service accounts). I think the one before that was #32302. Going to close because I haven't seen this recur. We have #29972 to cover that. Note that it is possible for Docker to race and report invalid info (more likely outside of our tests, where we don't control the Docker version as tightly).
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/12460/
Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}
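For context on what this e2e test asserts, the init-container rule can be sketched as follows. This is a minimal Python simulation, not Kubernetes source code; the function and status names are illustrative assumptions. The invariant under test: app containers must never start while any init container has not yet succeeded, and with `restartPolicy: Always` a failed init container is retried rather than failing the pod.

```python
# Sketch (not kubelet code) of the init-container gating rule the
# e2e test checks. Init containers run in declaration order; app
# containers may start only once every init container has succeeded.

def may_start_app_containers(init_statuses):
    """init_statuses: one of 'Succeeded'/'Failed'/'Running' per init
    container, in declaration order."""
    return all(s == "Succeeded" for s in init_statuses)

def next_action(restart_policy, init_statuses):
    """Decide what the pod lifecycle does next (illustrative names)."""
    if may_start_app_containers(init_statuses):
        return "start app containers"
    if "Failed" in init_statuses:
        # Always/OnFailure retry the failed init container;
        # Never marks the whole pod failed.
        if restart_policy in ("Always", "OnFailure"):
            return "restart failed init container"
        return "mark pod failed"
    return "wait for init containers"
```

So for a RestartAlways pod with a failing init container, `next_action("Always", ["Succeeded", "Failed"])` yields a retry, never "start app containers" — which is exactly the behavior the test saw violated (or mis-reported, per the Docker race noted above).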