[upgrade test failure] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover #50797
kubernetes/test-infra#4086 (comment) should fix this
@krzyzacy - FYI
/assign
The test has the wrong
/unassign
This upgrade test is much healthier since the fix went in, and those failures look related to overall test environment issues. Closing. Will open another issue if I identify anything specific about this test. Thanks, @krzyzacy!
Reopening since this is still failing on our upgrade tests. https://k8s-testgrid.appspot.com/master-upgrade#gke-gci-1.7-gci-master-upgrade-cluster-new We're blocked on that test not producing logs, though (#52578).
[MILESTONENOTIFIER] Milestone Labels Complete
The expected number of nodes "-1" was the value of
It was like that before my PR, it seems, @zmerlynn (those tests were passing after my PR got in; I think something regressed here).
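To make the "-1" expected node count above more concrete, here is a minimal, hypothetical sketch (not the actual e2e test code): it shows how a node-count flag that defaults to a sentinel value of -1 and is never overridden by the test harness could leak straight into the test's expectation. The flag name, default, and message format are assumptions for illustration only.

```go
// Illustrative sketch only; the real e2e test wires its expected node
// count differently. This just shows the sentinel-default failure mode.
package main

import (
	"flag"
	"fmt"
)

// numNodes defaults to -1 ("unknown"); the harness is expected to set it.
var numNodes = flag.Int("num-nodes", -1, "expected number of schedulable nodes")

func main() {
	flag.Parse()

	readyNodes := 3 // stand-in for a live count queried from the cluster

	// If the harness never passes --num-nodes, the sentinel -1 becomes the
	// "expected" count and the check can never succeed, producing an error
	// like: expected "-1" nodes, found 3.
	if readyNodes != *numNodes {
		fmt.Printf("couldn't find %d ready nodes within timeout, found %d\n",
			*numNodes, readyNodes)
	}
}
```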
/assign |
Before #4500 the entire upgrade was failing (you can see it if you scroll to the right of the testgrid page); that fixed it. So this test was passing in both https://k8s-testgrid.appspot.com/master-upgrade#gke-gci-1.7-gci-master-upgrade-cluster and https://k8s-testgrid.appspot.com/master-upgrade#gke-gci-1.7-gci-master-upgrade-master. @crimsonfaith91, can you double check whether there's any difference between 1.7 and 1.8 (it would need to be cherry-picked back to 1.7)?
fixed by kubernetes/test-infra#4617
Thanks, @krzyzacy!
Opening this since this seems slightly different from #46651
/cc @kubernetes/sig-node-bugs
This test has been consistently failing on a lot of the upgrade jobs:
https://k8s-testgrid.appspot.com/master-upgrade#gke-cvm-1.7-gci-master-upgrade-master
https://k8s-testgrid.appspot.com/master-upgrade#gke-cvm-1.7-gci-master-upgrade-cluster
https://k8s-testgrid.appspot.com/master-upgrade#gke-gci-1.7-cvm-master-upgrade-master
https://k8s-testgrid.appspot.com/master-upgrade#gke-gci-1.7-cvm-master-upgrade-cluster
https://k8s-testgrid.appspot.com/master-upgrade#gke-gci-1.7-gci-master-upgrade-master
https://k8s-testgrid.appspot.com/master-upgrade#gke-gci-1.7-gci-master-upgrade-cluster
What's really weird is the error message most of them spit out:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-cvm-new-cvm-master-upgrade-cluster/128#k8sio-restart-disruptive-should-restart-all-nodes-and-ensure-all-nodes-and-pods-recover
It's really hard to tell who owns this test, so I'm going to tag sig-node until there's further evidence otherwise.
cc @kubernetes/kubernetes-release-managers @mbohlool