[k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite} #28257
Labels
kind/flake
Categorizes issue or PR as related to a flaky test.
priority/backlog
Higher priority than priority/awaiting-more-evidence.
Comments
This is duping into #27011.
I see what happened. This cluster was launched at version 1.3.0-beta.2, which was the default in staging for testing. Then, in the middle of the e2e run, the server-side default version was changed back to 1.2.5, which prompted the rejection above. We don't often do beta version testing in staging, but I'll add this project to the whitelist anyway.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6072/
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}