
[k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite} #26929

Closed
k8s-github-robot opened this issue Jun 7, 2016 · 7 comments
Labels
kind/flake — Categorizes issue or PR as related to a flaky test.
priority/important-soon — Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/1441/

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc8208c0520>: {
        s: "couldn't find 28 pods within 5m0s; last error: expected to find 28 pods but found only 27",
    }
    couldn't find 28 pods within 5m0s; last error: expected to find 28 pods but found only 27
not to have occurred

Previous issues for this test: #26744
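For context on the error above: the failure at restart.go:98 is produced by a polling check that waits up to 5 minutes for the pre-restart pod count to be restored after the nodes come back. A minimal, standalone sketch of that kind of wait loop is below; this is not the actual e2e helper, and `waitForNPods`, `countRunningPods`, and the interval/timeout values are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// waitForNPods polls countPods until it reports at least `expected` pods or the
// timeout expires. The error message mirrors the one seen in the failures above.
// Hypothetical sketch, not the real test code.
func waitForNPods(expected int, timeout, interval time.Duration,
	countPods func() (int, error)) error {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		n, err := countPods()
		switch {
		case err != nil:
			lastErr = err
		case n >= expected:
			return nil
		default:
			lastErr = fmt.Errorf("expected to find %d pods but found only %d", expected, n)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("couldn't find %d pods within %v; last error: %v", expected, timeout, lastErr)
}

func main() {
	// Stub counter that never reaches the expected count, to show the failure shape.
	err := waitForNPods(28, 5*time.Second, time.Second, func() (int, error) { return 27, nil })
	fmt.Println(err)
}
```

Under this reading, the flake means one pod (27 of 28) never came back within the 5-minute window after the restart.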

@k8s-github-robot added the kind/flake label on Jun 7, 2016
@andyzheng0831

@fejta is this an issue on the node side? If so, it is not related to GCI. Currently only the master is on GCI; the nodes are still on ContainerVM.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/1519/

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Jun 18 09:55:51.982: At least one pod wasn't running and ready or succeeded at test start.
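This failure comes from a precondition check rather than the restart itself: before restarting anything, the test requires every pod it tracks to be running and ready, or to have already succeeded. A hedged sketch of that kind of predicate, using current k8s.io/api/core/v1 types (helper name and structure are assumptions, not the real e2e framework code):

```go
package podcheck

import (
	corev1 "k8s.io/api/core/v1"
)

// podRunningAndReadyOrSucceeded reports whether a pod satisfies the test-start
// precondition described above: it has either completed successfully, or it is
// running with the Ready condition set to True. Hypothetical helper; the real
// e2e framework has its own version of this logic.
func podRunningAndReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```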

@k8s-github-robot added the priority/backlog label on Jun 18, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/1589/

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc8215f6880>: {
        s: "couldn't find 29 pods within 5m0s; last error: expected to find 29 pods but found only 27",
    }
    couldn't find 29 pods within 5m0s; last error: expected to find 29 pods but found only 27
not to have occurred

@k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Jun 28, 2016
@andyzheng0831 removed their assignment on Jul 14, 2016
@andyzheng0831

Unassigning myself as I am leaving this project.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13387/

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc82213a780>: {
        s: "couldn't find 10 pods within 5m0s; last error: expected to find 10 pods but found only 11",
    }
    couldn't find 10 pods within 5m0s; last error: expected to find 10 pods but found only 11
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-serial/1857/

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc8214b60d0>: {
        s: "couldn't find 10 pods within 5m0s; last error: expected to find 10 pods but found only 11",
    }
    couldn't find 10 pods within 5m0s; last error: expected to find 10 pods but found only 11
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-serial/1896/

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Expected error:
    <*errors.errorString | 0xc821450110>: {
        s: "couldn't find 0 pods within 5m0s; last error: expected to find 0 pods but found only 11",
    }
    couldn't find 0 pods within 5m0s; last error: expected to find 0 pods but found only 11
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93
