
[k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite} #31407

Closed
k8s-github-robot opened this issue Aug 25, 2016 · 3 comments
Labels
kind/flake: Categorizes issue or PR as related to a flaky test.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2008/

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:299
Expected error:
    <*errors.errorString | 0xc82101d300>: {
        s: "error while scaling RC daemonrestart10-ecda6c20-6a4e-11e6-ad43-0242ac110006 to 15 replicas: timed out waiting for \"daemonrestart10-ecda6c20-6a4e-11e6-ad43-0242ac110006\" to be synced",
    }
    error while scaling RC daemonrestart10-ecda6c20-6a4e-11e6-ad43-0242ac110006 to 15 replicas: timed out waiting for "daemonrestart10-ecda6c20-6a4e-11e6-ad43-0242ac110006" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:298
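For context, the failure is the scaler's wait loop giving up: the test asks the ReplicationController for 15 replicas, then polls until the controller reports that many, and fails with the "timed out waiting for ... to be synced" error once the deadline passes. Below is a minimal, self-contained sketch of that poll-with-timeout pattern; it is not the actual e2e framework code, and getObservedReplicas is a hypothetical stub standing in for a real API read of the RC status.

```go
package main

import (
	"fmt"
	"time"
)

// getObservedReplicas is a hypothetical placeholder; a real test would read
// the ReplicationController status from the API server instead.
func getObservedReplicas(rcName string) int {
	return 10 // pretend the controller never catches up to the requested size
}

// waitForReplicas polls until the RC reports the desired replica count or the
// timeout expires, producing an error shaped like the one in the logs above.
func waitForReplicas(rcName string, desired int, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if getObservedReplicas(rcName) == desired {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("error while scaling RC %s to %d replicas: timed out waiting for %q to be synced",
		rcName, desired, rcName)
}

func main() {
	err := waitForReplicas("daemonrestart10-example", 15, 2*time.Second, 200*time.Millisecond)
	fmt.Println(err)
}
```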
k8s-github-robot added the priority/backlog and kind/flake labels Aug 25, 2016
@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2009/

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:299
Expected error:
    <*errors.errorString | 0xc820c1a1a0>: {
        s: "error while scaling RC daemonrestart10-a286c5c3-6a6e-11e6-a125-0242ac110006 to 15 replicas: timed out waiting for \"daemonrestart10-a286c5c3-6a6e-11e6-a125-0242ac110006\" to be synced",
    }
    error while scaling RC daemonrestart10-a286c5c3-6a6e-11e6-a125-0242ac110006 to 15 replicas: timed out waiting for "daemonrestart10-a286c5c3-6a6e-11e6-a125-0242ac110006" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:298

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2010/

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:299
Expected error:
    <*errors.errorString | 0xc820f100f0>: {
        s: "error while scaling RC daemonrestart10-8e6b5884-6a8c-11e6-9e0d-0242ac110006 to 15 replicas: timed out waiting for \"daemonrestart10-8e6b5884-6a8c-11e6-9e0d-0242ac110006\" to be synced",
    }
    error while scaling RC daemonrestart10-8e6b5884-6a8c-11e6-9e0d-0242ac110006 to 15 replicas: timed out waiting for "daemonrestart10-8e6b5884-6a8c-11e6-9e0d-0242ac110006" to be synced
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:298

k8s-github-robot added the priority/important-soon label and removed the priority/backlog label Aug 25, 2016
@lavalamp (Member)

This happened a few times in a row and then stopped; something was probably fixed. I'll investigate more carefully if it happens again.
