ci-kubernetes-node-kubelet-serial: broken test run #37590

Closed
k8s-github-robot opened this issue Nov 29, 2016 · 2 comments
Labels: area/test-infra, kind/flake, priority/backlog

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/12/

Run so broken it didn't make JUnit output!

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Nov 29, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/14/

Multiple broken tests:

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.023s.
Expected
    <*errors.errorString | 0xc8210b5aa0>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232
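For context on what this assertion checks: the test looks up the container process on the node (serve_hostname, per the error message above) and compares its /proc/&lt;pid&gt;/oom_score_adj against the values in the test name (-998 for pod infra containers, 1000 for best-effort containers). A minimal, illustrative sketch of that kind of check, assuming a recent Go toolchain; this is not the e2e_node test code itself:

    // Illustrative only: find a process by name under /proc and read its
    // oom_score_adj. The process name "serve_hostname" and the expected
    // values come from the failure above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strconv"
        "strings"
    )

    // oomScoreAdjFor returns the oom_score_adj of the first process whose
    // cmdline contains name. Hypothetical helper, for illustration.
    func oomScoreAdjFor(name string) (int, error) {
        procDirs, err := filepath.Glob("/proc/[0-9]*")
        if err != nil {
            return 0, err
        }
        for _, dir := range procDirs {
            cmdline, err := os.ReadFile(filepath.Join(dir, "cmdline"))
            if err != nil || !strings.Contains(string(cmdline), name) {
                continue
            }
            raw, err := os.ReadFile(filepath.Join(dir, "oom_score_adj"))
            if err != nil {
                return 0, err
            }
            return strconv.Atoi(strings.TrimSpace(string(raw)))
        }
        return 0, fmt.Errorf("expected only one %s process; found 0", name)
    }

    func main() {
        adj, err := oomScoreAdjFor("serve_hostname")
        if err != nil {
            fmt.Println(err) // the state the failed run was stuck in
            return
        }
        fmt.Println("oom_score_adj:", adj)
    }

The run above never got past the lookup step: no serve_hostname process appeared within the 120s window, so the adjustment values were never compared.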

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:55
Expected error:
    <*errors.errorString | 0xc8201b4da0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30525 #31835

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc8201b4da0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30523 #32022 #33291 #33547

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:206
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc821145fc0>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:151

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:254
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc820d94b10>: {
        s: "[/k8s_gc-test-container-one-container-no-restarts0.a26b483f_gc-test-pod-one-container-no-restarts_e2e-tests-garbage-collect-test-w0frz_89be9a86-b72b-11e6-836e-42010a800025_1cd0080e] containers still remain",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:248

Failed: [k8s.io] Restart [Serial] [Slow] [Disruptive] Docker Daemon Network should recover from ip leak {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/restart_test.go:118
Nov 30 18:39:59.843: Failed to start 50 pods, cannot test that restarting docker doesn't leak IPs
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/restart_test.go:90

Failed: [k8s.io] MemoryEviction [Slow] [Serial] [Disruptive] when there is memory pressure should evict pods in the correct order (besteffort first, then burstable, then guaranteed) {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_eviction_test.go:216
Timed out after 3600.000s.
Expected
    <*errors.errorString | 0xc820a1b0f0>: {
        s: "besteffort and burstable have not yet both been evicted.",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_eviction_test.go:214

Issues about this test specifically: #32433
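Both eviction failures in this run (InodeEviction and MemoryEviction) are timeouts waiting for the node to report the corresponding pressure condition before the expected evictions happen. Outside the test suite, the same node conditions are visible through the API; a minimal client-go sketch, illustrative only, assuming a current client-go release and a kubeconfig at the default path (the 2016-era client library used slightly different call signatures):

    // Illustrative only: list nodes and print their DiskPressure and
    // MemoryPressure conditions, which are what the eviction tests above
    // wait on before checking which pods were evicted.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, node := range nodes.Items {
            for _, cond := range node.Status.Conditions {
                if cond.Type == "DiskPressure" || cond.Type == "MemoryPressure" {
                    fmt.Printf("%s %s=%s (%s)\n", node.Name, cond.Type, cond.Status, cond.Reason)
                }
            }
        }
    }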

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/15/

Multiple broken tests:

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82019cff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30523 #32022 #33291 #33547

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:206
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc8209a54a0>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:151

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:254
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc8210514c0>: {
        s: "[/k8s_gc-test-container-one-container-no-restarts0.a26b483f_gc-test-pod-one-container-no-restarts_e2e-tests-garbage-collect-test-brj2v_3d9e9bff-b7f1-11e6-ad1c-42010a80001d_99beb630] containers still remain",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:248

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82019cff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30878 #31743 #31877 #32044
