[k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite} #38552
Labels
kind/flake: Categorizes issue or PR as related to a flaky test.
sig/node: Categorizes an issue or PR as relevant to SIG Node.
Comments
k8s-github-robot added the kind/flake and priority/P2 labels on Dec 10, 2016.
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
calebamiles added the sig/node label on Feb 27, 2017.
No occurrence in 2017; closing per 1.6 guidance.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-serial-release-1.5/138/
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
Previous issues for this test: #26744 #26929
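The issue does not record the exact command used to run this test, but Kubernetes e2e tests are Ginkgo specs selected by a focus regex matched against the full test name. As a minimal sketch (the `focus` pattern here is a hypothetical example, not taken from the issue), this shows a regex that would select the failing test:

```shell
# Hypothetical focus regex; Ginkgo matches it against the full spec name.
focus='Restart.*should restart all nodes'

# Full name of the failing test, as reported in this issue.
name='[k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}'

# Print "matched" if the focus regex would select this test.
if echo "$name" | grep -qE "$focus"; then
  echo "matched"
fi
```

In practice such a pattern would be passed to the e2e runner via a flag like `--ginkgo.focus`, so that only the disruptive restart test is re-run when reproducing the flake.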