kubernetes-e2e-gke-test: broken test run #34117

Closed
k8s-github-robot opened this issue Oct 5, 2016 · 7 comments
Labels: area/test-infra, kind/flake, priority/critical-urgent

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13760/

Run so broken it didn't make JUnit output!

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Oct 5, 2016
@k8s-github-robot (Author)

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/-1/

Run so broken it didn't make JUnit output!

@k8s-github-robot (Author)

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13772/

Run so broken it didn't make JUnit output!

@k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Oct 8, 2016
@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13784/

Multiple broken tests:

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.errorString | 0xc820170ad0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #27503

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.errorString | 0xc820170ad0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #26194 #26338 #30345
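Both failures above end with the framework's generic polling timeout. As a minimal sketch of the pattern that produces this exact message (assuming the wait utility the e2e framework is built on, today at k8s.io/apimachinery/pkg/util/wait and vendored under k8s.io/kubernetes/pkg/util/wait in 2016-era code):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition that never becomes true. Once the timeout
	// elapses, wait.Poll returns wait.ErrWaitTimeout, whose text is
	// exactly the "timed out waiting for the condition" seen above.
	err := wait.Poll(100*time.Millisecond, 500*time.Millisecond,
		func() (bool, error) {
			return false, nil // condition not met yet
		})
	fmt.Println(err)
}

The error string alone says nothing about which condition timed out, which is why the LimitRange and DNS failures look identical despite being unrelated tests.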

Failed: DumpClusterLogs {e2e.go}

error running dump cluster logs: exit status 1

Issues about this test specifically: #33722

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
+gke-jenkins-e2e-27d4b7-pvc-24046be1-8fb8-11e6-93c2-42010af00068  us-central1-f  2        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060
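The leaked resource here is a GCE persistent disk provisioned for a PVC that outlived the run. The "+" prefix suggests a textual diff of resource listings taken before and after the run; a minimal sketch of that before/after idea follows (the helper and list contents are illustrative, not e2e.go's actual code):

package main

import "fmt"

// reportLeaks is a hypothetical helper: anything present after the run
// but absent before it is flagged as leaked, mirroring the "+" line above.
func reportLeaks(before, after []string) {
	seen := make(map[string]bool, len(before))
	for _, r := range before {
		seen[r] = true
	}
	for _, r := range after {
		if !seen[r] {
			fmt.Println("+" + r) // leaked resource
		}
	}
}

func main() {
	before := []string{"gke-jenkins-e2e-default-pool-disk-0"}
	after := append(before,
		"gke-jenkins-e2e-27d4b7-pvc-24046be1-8fb8-11e6-93c2-42010af00068")
	reportLeaks(before, after)
}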

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 11 03:45:29.663: CPU usage exceeding limits:
 node gke-jenkins-e2e-default-pool-c26dc5e2-4ay9:
 container "kubelet": expected 50th% usage < 0.170; got 0.186, container "kubelet": expected 95th% usage < 0.220; got 0.257
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:187

Issues about this test specifically: #26982 #32214 #33994 #34035
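This failure is a straightforward threshold check: the observed 50th- and 95th-percentile kubelet CPU usage (0.186 and 0.257 cores) exceeded the test's expected bounds (0.170 and 0.220). A hedged sketch of that comparison, with names invented here rather than taken from kubelet_perf.go:

package main

import "fmt"

func main() {
	// Expected upper bounds for the "kubelet" container, in cores.
	limits := map[int]float64{50: 0.170, 95: 0.220}
	// Observed percentiles from the run on node ...-4ay9.
	observed := map[int]float64{50: 0.186, 95: 0.257}

	for _, pct := range []int{50, 95} {
		if got := observed[pct]; got >= limits[pct] {
			fmt.Printf("container %q: expected %dth%% usage < %.3f; got %.3f\n",
				"kubelet", pct, limits[pct], got)
		}
	}
}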

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:135
Expected error:
    <*net.OpError | 0xc822890000>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\x82ӥ\xad",
            Port: 443,
            Zone: "",
        },
        Err: {},
    }
    dial tcp 130.211.165.173:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:98

Issues about this test specifically: #32185 #32372
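The escaped Addr.IP bytes in the dump above are a 16-byte IPv4-mapped IPv6 address: the trailing four bytes 0x82, 0xd3, 0xa5, 0xad decode to the 130.211.165.173 on the "dial tcp" line (the middle two bytes render as the literal character "ӥ" only because they happen to form valid UTF-8). A quick check with the standard library:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The same 16 bytes as the Addr.IP field in the error dump.
	ip := net.IP{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff, 0x82, 0xd3, 0xa5, 0xad}
	fmt.Println(ip) // prints 130.211.165.173
}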

@k8s-github-robot added the priority/critical-urgent label and removed the priority/important-soon label on Oct 11, 2016
@k8s-github-robot (Author)

[FLAKE-PING] @rmmh

This flaky-test issue would love to have more attention.

@k8s-github-robot (Author)

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13779/

Run so broken it didn't make JUnit output!

@k8s-github-robot (Author)

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13780/

Run so broken it didn't make JUnit output!

@rmmh (Contributor) commented Oct 12, 2016

Mostly a dupe of #34446.

@rmmh closed this as completed on Oct 12, 2016