ci-kubernetes-node-kubelet-serial: broken test run #38050

Closed · k8s-github-robot opened this issue Dec 3, 2016 · 43 comments

Labels: area/test-infra · kind/flake (categorizes issue or PR as related to a flaky test) · priority/backlog (higher priority than priority/awaiting-more-evidence)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/17/

Multiple broken tests:

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:206
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc8208ea8e0>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:151

Issues about this test specifically: #37983
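
For context on what this failure is waiting for: the suite polls the node until it reports the DiskPressure condition, and the error above is what comes back when the condition never appears within 600s. A minimal sketch of that kind of wait, assuming a caller-supplied node getter (the helper name and import path are mine, not the suite's; at the time the types lived under k8s.io/kubernetes/pkg/api/v1):

```go
package evictionsketch

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
)

// waitForDiskPressure is a hypothetical sketch, not the suite's helper:
// it polls a node-fetching callback until the node's DiskPressure
// condition is True, the state this test timed out waiting for.
func waitForDiskPressure(getNode func() (*v1.Node, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := getNode()
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodeDiskPressure && cond.Status == v1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("Condition: Disk Pressure due to Inodes not encountered")
}
```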

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.094s.
Expected
    <*errors.errorString | 0xc82116ad80>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232
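
For reference, the assertion this test eventually makes is a readback of /proc/<pid>/oom_score_adj for the pause and best-effort containers; in these runs it never got that far, because it found no serve_hostname process to score. A minimal sketch of the readback (the helper name is my own):

```go
package oomsketch

import (
	"fmt"
	"io/ioutil"
	"strconv"
	"strings"
)

// oomScoreAdj is a hypothetical helper: it returns the kernel's
// oom_score_adj for a pid. The test expects -998 for pod infra (pause)
// containers and 1000 for best-effort containers.
func oomScoreAdj(pid int) (int, error) {
	data, err := ioutil.ReadFile(fmt.Sprintf("/proc/%d/oom_score_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}
```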

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82019cfc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30523 #32022 #33291 #33547
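
"timed out waiting for the condition" is the stock ErrWaitTimeout message from the Kubernetes wait package, returned whenever a polled condition never becomes true before the deadline; here the density test was waiting for its batch of pods to reach Running. A minimal sketch of the pattern (modern import path shown; at the time it lived under k8s.io/kubernetes/pkg/util/wait):

```go
package waitsketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForRunning is a hypothetical sketch: running() would report how
// many of the batch's pods are Running. If the count never reaches
// want, wait.Poll returns ErrWaitTimeout, whose message is exactly
// "timed out waiting for the condition".
func waitForRunning(running func() (int, error), want int, timeout time.Duration) error {
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		n, err := running()
		if err != nil {
			return false, err // a hard error aborts the poll early
		}
		return n >= want, nil // false with nil error keeps polling
	})
}
```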

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:55
Expected error:
    <*errors.errorString | 0xc8201570a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30525 #31835

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc8201b2e30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30878 #31743 #31877 #32044

Previous issues for this suite: #37590

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 3, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/18/

Multiple broken tests:

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:206
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc8204999c0>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:151

Issues about this test specifically: #37983

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc8201d5160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30523 #32022 #33291 #33547

Failed: [k8s.io] MemoryEviction [Slow] [Serial] [Disruptive] when there is memory pressure should evict pods in the correct order (besteffort first, then burstable, then guaranteed) {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_eviction_test.go:216
Timed out after 3600.000s.
Expected
    <*errors.errorString | 0xc820bec8c0>: {
        s: "besteffort and burstable have not yet both been evicted.",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_eviction_test.go:214

Issues about this test specifically: #32433
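
The ordering this test enforces is that under memory pressure the kubelet evicts besteffort pods first, then burstable, and only then guaranteed; the error above means the first two were still alive at the 3600s deadline. A sketch of the invariant in terms of pod phases (my own formulation, not the test's code; evicted pods land in the Failed phase):

```go
package evictionorder

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// checkEvictionOrder is a hypothetical restatement of the invariant:
// the guaranteed pod must outlive the besteffort and burstable pods,
// and the wait only succeeds once both of those have been evicted.
func checkEvictionOrder(besteffort, burstable, guaranteed v1.PodPhase) error {
	if guaranteed == v1.PodFailed && (besteffort != v1.PodFailed || burstable != v1.PodFailed) {
		return fmt.Errorf("guaranteed pod evicted before besteffort and burstable")
	}
	if besteffort != v1.PodFailed || burstable != v1.PodFailed {
		return fmt.Errorf("besteffort and burstable have not yet both been evicted")
	}
	return nil
}
```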

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc8201d5160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30878 #31743 #31877 #32044

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/19/

Multiple broken tests:

Failed: [k8s.io] MemoryEviction [Slow] [Serial] [Disruptive] when there is memory pressure should evict pods in the correct order (besteffort first, then burstable, then guaranteed) {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_eviction_test.go:216
Expected error:
    <*errors.errorString | 0xc8203caef0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32433

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029d120>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:55
Expected error:
    <*errors.errorString | 0xc82029d0f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30525 #31835

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc821074f70>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029d120>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30523 #32022 #33291 #33547

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:243
Failed after 10.071s.
Expected
    <bool>: true
to be false
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:221

Issues about this test specifically: #37983

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/21/

Multiple broken tests:

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:55
Expected error:
    <*errors.errorString | 0xc820252f00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30525 #31835

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029cf30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30523 #32022 #33291 #33547

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:169
Expected error:
    <*errors.errorString | 0xc82029ef80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36903
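
For background, these GC tests churn containers past the kubelet's dead-container limits (e.g. --maximum-dead-containers-per-container) and then poll until the kubelet has collected the excess; the timeout above means the count never came down. A sketch of the per-poll check, with names of my own choosing:

```go
package gcsketch

import "fmt"

// deadContainersWithinLimit is a hypothetical check: deadPerContainer
// maps each container name to its count of dead (exited) instances,
// and the poll succeeds once every count is at or under the kubelet's
// configured per-container maximum.
func deadContainersWithinLimit(deadPerContainer map[string]int, max int) bool {
	for name, dead := range deadPerContainer {
		if dead > max {
			fmt.Printf("container %q still has %d dead instances (limit %d)\n", name, dead, max)
			return false
		}
	}
	return true
}
```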

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:206
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc8216bec00>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:151

Issues about this test specifically: #37983

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc820d636f0>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/22/

Multiple broken tests:

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029cf80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30523 #32022 #33291 #33547

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc821104610>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029ef70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:55
Expected error:
    <*errors.errorString | 0xc82029ef70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30525 #31835

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:206
normal-memory-usage-pod pod failed (and shouldn't have failed)
Expected
    <v1.PodPhase>: Failed
not to equal
    <v1.PodPhase>: Failed
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:193

Issues about this test specifically: #37983

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/23/

Multiple broken tests:

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc8209473a0>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc8202c1140>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30523 #32022 #33291 #33547

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029ef60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:252
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc820ec0f20>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:154

Issues about this test specifically: #37983

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/25/

Multiple broken tests:

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:252
normal-memory-usage-pod pod failed (and shouldn't have failed)
Expected
    <v1.PodPhase>: Failed
not to equal
    <v1.PodPhase>: Failed
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:196

Issues about this test specifically: #37983

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029f0d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc820c4c790>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc82029f0d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30523 #32022 #33291 #33547

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/26/

Multiple broken tests:

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc820262c30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:252
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc820fbaa20>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:154

Issues about this test specifically: #37983

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:55
Expected error:
    <*errors.errorString | 0xc820262c30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30525 #31835

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc8201b2f70>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc820262c30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30523 #32022 #33291 #33547

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/27/

Multiple broken tests:

Failed: [k8s.io] Kubelet Container Manager [Serial] Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:138
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc820248060>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:136

Issues about this test specifically: #33232

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc820283280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:55
Expected error:
    <*errors.errorString | 0xc820283280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30525 #31835

Failed: [k8s.io] InodeEviction [Slow] [Serial] [Disruptive] when we run containers that should cause Disk Pressure due to Inodes should eventually see Disk Pressure due to Inodes, and then evict all of the correct pods {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:252
Timed out after 600.000s.
Expected
    <*errors.errorString | 0xc8210c5d10>: {
        s: "Condition: Disk Pressure due to Inodes not encountered",
    }
to be nil
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/inode_eviction_test.go:154

Issues about this test specifically: #37983

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0 interval {E2eNode Suite}

/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:67
Expected error:
    <*errors.errorString | 0xc820283280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/ci-kubernetes-node-kubelet-serial/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #30523 #32022 #33291

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/314/
Multiple broken tests:

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Mar  2 01:34:30.265: Memory usage exceeding limits:
 node tmp-node-e2e-2a3bf4c2-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 106180608
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #30878 #31743 #31877 #32044
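
Despite the "(MB)" label in the log line above, both figures in these kubelet RSS checks are raw bytes: the limit is 100 MiB, and this run overshot it by roughly 1.3 MiB:

```latex
104857600\ \mathrm{B} = 100 \times 2^{20}\ \mathrm{B} = 100\ \mathrm{MiB},
\qquad \frac{106180608\ \mathrm{B}}{2^{20}} \approx 101.26\ \mathrm{MiB}.
```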

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040d3b0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239
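
For context: at the time, a pod was marked critical via the alpha scheduler.alpha.kubernetes.io/critical-pod annotation on a pod in the kube-system namespace, and this test creates such a pod and expects it to stay Running; "pod ran to completion" means the pod terminated instead. A hypothetical sketch of the marking (field values and image are illustrative, not the test's actual spec):

```go
package criticalsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// criticalPod is a hypothetical sketch of how a critical pod was marked
// in this era: the alpha annotation below, on a pod in kube-system,
// told the kubelet's admission path to treat the pod as critical.
func criticalPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "critical-pod",
			Namespace: "kube-system",
			Annotations: map[string]string{
				"scheduler.alpha.kubernetes.io/critical-pod": "",
			},
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "critical",
				Image: "gcr.io/google-containers/pause:3.0", // illustrative image
			}},
		},
	}
}
```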

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:109
Mar  2 01:30:44.289: Memory usage exceeding limits:
 node tmp-node-e2e-2a3bf4c2-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 106618880
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #39582

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/315/
Multiple broken tests:

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040dac0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420174660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/317/
Multiple broken tests:

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040cb40>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d310>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d340>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d310>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/318/
Multiple broken tests:

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040cfc0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015e940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015e940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f460>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/319/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040dba0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/320/
Multiple broken tests:

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040d4c0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:109
Mar  2 13:09:56.899: Memory usage exceeding limits:
 node tmp-node-e2e-09d00a2a-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105857024
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #39582

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f3f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f3f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f3f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/321/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c900>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015e8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc4203e8870>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015e8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/322/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040bfb0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c930>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/323/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015cc90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040d550>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/324/
Multiple broken tests:

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Expected error:
    <*errors.errorString | 0xc420c08990>: {
        s: "too high pod startup latency 99th percentile: 10.022396474s",
    }
    too high pod startup latency 99th percentile: 10.022396474s
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:558

Issues about this test specifically: #30878 #31743 #31877 #32044
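
Note how marginal this gate is: a 99th percentile of 10.022s against a 10s limit. A sketch of how such a percentile check can be computed, using the nearest-rank method (the framework's exact interpolation may differ):

```go
package densitysketch

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the nearest-rank p-th percentile of the samples.
func percentile(samples []time.Duration, p float64) time.Duration {
	if len(samples) == 0 {
		return 0
	}
	s := append([]time.Duration(nil), samples...) // sort a copy
	sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	idx := int(math.Ceil(p/100*float64(len(s)))) - 1
	if idx < 0 {
		idx = 0
	}
	return s[idx]
}

// checkStartupLatency mirrors the gate that tripped: the p99 of per-pod
// startup durations must stay under a fixed limit (10s per the log above).
func checkStartupLatency(samples []time.Duration) error {
	if p99 := percentile(samples, 99); p99 > 10*time.Second {
		return fmt.Errorf("too high pod startup latency 99th percentile: %v", p99)
	}
	return nil
}
```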

Failed: [k8s.io] Restart [Serial] [Slow] [Disruptive] Docker Daemon Network should recover from ip leak {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/restart_test.go:119
Mar  2 21:52:24.824: Failed to start 50 pods, cannot test that restarting docker doesn't leak IPs
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/restart_test.go:91

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040cf10>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/325/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d6b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d6b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040f580>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d6b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/326/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc420408c90>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/327/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f360>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f360>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f360>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040da50>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/328/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040d520>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/329/
Multiple broken tests:

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc4203ea0e0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f480>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d410>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013fb70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/330/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f5d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040cf50>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] Node Container Manager [Serial] Validate Node Allocatable set's up the node and runs the test {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_container_manager_test.go:62
Expected error:
    <*errors.errorString | 0xc420fe1b80>: {
        s: "Unexpected cpu allocatable value exposed by the node. Expected: 800m, got: {{1 0} {<nil>} 1 DecimalSI}, capacity: {{1 0} {<nil>} 1 DecimalSI}",
    }
    Unexpected cpu allocatable value exposed by the node. Expected: 800m, got: {{1 0} {<nil>} 1 DecimalSI}, capacity: {{1 0} {<nil>} 1 DecimalSI}
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_container_manager_test.go:61
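
Here the reservation the test configures never shows up in node status: capacity and allocatable are both 1 CPU, while the test expected 800m allocatable. A sketch of the arithmetic the check encodes; attributing the full 200m gap to kube-/system-reserved is an assumption on my part:

```go
package allocatablesketch

import "k8s.io/apimachinery/pkg/api/resource"

// expectedAllocatable reproduces the arithmetic behind the failed check:
// allocatable = capacity - reserved. Capacity (1 CPU) and the expected
// allocatable (800m) come from the error message above.
func expectedAllocatable() string {
	capacity := resource.MustParse("1")    // node CPU capacity from the log
	reserved := resource.MustParse("200m") // kube-reserved + system-reserved (assumed)
	allocatable := capacity.DeepCopy()
	allocatable.Sub(reserved)
	return allocatable.String() // "800m"; the node instead reported "1"
}
```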

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f5d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/331/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015b420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040f0a0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015b420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015b420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/332/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013db60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Mar  3 12:55:18.241: CPU usage exceeding limits:
 node tmp-node-e2e-969d5f48-e2e-node-ubuntu-trusty-docker10-v2-image:
 container "kubelet": expected 50th% usage < 0.300; got 0.310
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:281

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013db60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc4203eca60>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/333/
Multiple broken tests:

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:109
Mar  3 15:09:15.505: Memory usage exceeding limits:
 node tmp-node-e2e-995e1ceb-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 117944320
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #39582
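
One reading aid: the "(MB)" label in this check is misleading, since both numbers are bytes. The limit is 100 MiB and the observed kubelet RSS about 112.5 MiB, as the conversion below shows:

```go
package rsssketch

import "fmt"

// printRSS converts the two values from the failure above out of bytes.
func printRSS() {
	const limitBytes = 104857600 // 100 MiB
	const gotBytes = 117944320
	fmt.Printf("limit = %.0f MiB, got = %.1f MiB\n",
		float64(limitBytes)/(1<<20), float64(gotBytes)/(1<<20))
	// prints: limit = 100 MiB, got = 112.5 MiB
}
```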

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Mar  3 15:13:43.750: Memory usage exceeding limits:
 node tmp-node-e2e-995e1ceb-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 121688064
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d320>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d320>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040b340>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-three" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d320>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/334/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f0c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f5a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015ca40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc42040b4c0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:109
Mar  3 16:56:27.368: Memory usage exceeding limits:
 node tmp-node-e2e-9b135721-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 117207040
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #39582

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/335/
Multiple broken tests:

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc420411680>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013fb80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013fb80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/336/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-three" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d540>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] CriticalPod [Serial] [Disruptive] when we need to admit a critical pod should be able to create and delete a critical pod {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:94
Expected error:
    <*errors.errorString | 0xc4204098e0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81

Issues about this test specifically: #42239

@calebamiles modified the milestone: v1.6 on Mar 3, 2017
@dashpole
Contributor

dashpole commented Mar 4, 2017

@derekwaynecarr @vishh
Changes in #41644 seem the likely candidate. I'll look at this on Monday if no one else gets to it before then.

@dashpole
Contributor

dashpole commented Mar 4, 2017

@apelisse please assign this to me.

@vishh assigned dashpole and unassigned apelisse on Mar 4, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/339/
Multiple broken tests:

Failed: [k8s.io] MemoryEviction [Slow] [Serial] [Disruptive] when there is memory pressure should evict pods in the correct order (besteffort first, then burstable, then guaranteed) {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_eviction_test.go:133
Expected error:
    <*errors.errorString | 0xc4203ea820>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81
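
Unlike the GarbageCollect timeouts, this failure trips at the pod-running wait: a test pod reached a terminal phase before the eviction sequence could be observed. For reference, the test exercises eviction order across the three QoS classes; a sketch of the resource stanzas that produce each class, with illustrative values rather than the test's actual numbers:

```go
package evictionsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

var (
	// BestEffort: no requests or limits anywhere in the pod. Evicted first
	// under memory pressure.
	besteffort = v1.ResourceRequirements{}

	// Burstable: at least one request set, but requests != limits. Evicted
	// after all besteffort pods.
	burstable = v1.ResourceRequirements{
		Requests: v1.ResourceList{
			v1.ResourceMemory: resource.MustParse("100Mi"),
		},
	}

	// Guaranteed: requests == limits for every resource of every container.
	// Evicted last, only if pressure persists.
	guaranteed = v1.ResourceRequirements{
		Requests: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("100m"),
			v1.ResourceMemory: resource.MustParse("100Mi"),
		},
		Limits: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("100m"),
			v1.ResourceMemory: resource.MustParse("100Mi"),
		},
	}
)
```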

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/350/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420174c90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:109
Mar  5 01:57:30.734: Memory usage exceeding limits:
 node tmp-node-e2e-431b78d7-e2e-node-ubuntu-trusty-docker10-v2-image:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105619456
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #39582

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420174c90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420174c90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/352/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201947b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f440>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Expected error:
    <*errors.errorString | 0xc420ad2510>: {
        s: "too high pod startup latency 99th percentile: 10.012906671s",
    }
    too high pod startup latency 99th percentile: 10.012906671s
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:558

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f440>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/362/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-three" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f6e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f460>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Mar  6 01:49:47.523: Memory usage exceeding limits:
 node tmp-node-e2e-08c3f94b-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105873408
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f6e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/370/
Multiple broken tests:

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Expected error:
    <*errors.errorString | 0xc421576b20>: {
        s: "too high pod startup latency 99th percentile: 10.011281976s",
    }
    too high pod startup latency 99th percentile: 10.011281976s
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:558

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201947b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015e8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015e8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/374/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-three" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015f450>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Node Container Manager [Serial] Validate Node Allocatable set's up the node and runs the test {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_container_manager_test.go:62
Expected error:
    <*errors.errorString | 0xc421663360>: {
        s: "Unexpected cpu allocatable value exposed by the node. Expected: 800m, got: {{1 0} {<nil>} 1 DecimalSI}, capacity: {{1 0} {<nil>} 1 DecimalSI}",
    }
    Unexpected cpu allocatable value exposed by the node. Expected: 800m, got: {{1 0} {<nil>} 1 DecimalSI}, capacity: {{1 0} {<nil>} 1 DecimalSI}
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_container_manager_test.go:61
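
Unlike the timeout flakes above, this one is an accounting mismatch: the test expects the node's allocatable CPU to equal capacity minus the configured reservations (800m on a 1-CPU node), but the node still advertised the full 1 CPU, meaning the reservations were never applied. A sketch of the expected arithmetic using apimachinery's resource.Quantity — the 100m kube-reserved and 100m system-reserved values are assumptions chosen to yield the 800m expectation, not the test's real flags:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	capacity := resource.MustParse("1")          // 1 CPU node
	kubeReserved := resource.MustParse("100m")   // assumed reservation
	systemReserved := resource.MustParse("100m") // assumed reservation

	// allocatable = capacity - kubeReserved - systemReserved
	allocatable := capacity.DeepCopy()
	allocatable.Sub(kubeReserved)
	allocatable.Sub(systemReserved)

	expected := resource.MustParse("800m")
	if allocatable.Cmp(expected) != 0 {
		fmt.Printf("Unexpected cpu allocatable value. Expected: %s, got: %s, capacity: %s\n",
			expected.String(), allocatable.String(), capacity.String())
		return
	}
	fmt.Printf("cpu allocatable: %s (capacity %s)\n", allocatable.String(), capacity.String())
}
```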

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201748b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/375/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c870>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c870>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015c870>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Expected error:
    <*errors.errorString | 0xc421319200>: {
        s: "too high pod startup latency 99th percentile: 10.009986113s",
    }
    too high pod startup latency 99th percentile: 10.009986113s
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:558

Issues about this test specifically: #30878 #31743 #31877 #32044

@dashpole
Contributor

dashpole commented Mar 7, 2017

This may simply be because this test uses a different deletion timeout than the framework default... I am testing with the default.
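
For reference, the wait these GC tests time out in is a poll against the API server until a Get on the pod returns NotFound, with the timeout as the knob the comment above refers to. A hedged sketch of that pattern with client-go (waitForPodToDisappear is an illustrative name; the real helper lives in test/e2e/framework/pods.go):

```go
package gcsketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodToDisappear polls every two seconds until the named pod is gone
// or the timeout elapses; on timeout, wait returns the generic
// "timed out waiting for the condition" error seen in the failures above.
func waitForPodToDisappear(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod fully deleted: condition met
		}
		// Pod still present (err == nil): keep polling.
		// Any other error aborts the wait early.
		return false, err
	})
}
```

Passing a tighter per-test timeout here instead of the framework default is exactly the kind of change that turns a slow-but-healthy deletion into the flake above.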

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/382/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f6b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

Failed: [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:246
Mar  7 19:21:06.196: Memory usage exceeding limits:
 node tmp-node-e2e-cd48056b-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 114987008
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #30878 #31743 #31877 #32044

Failed: [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:109
Mar  7 19:22:59.675: Memory usage exceeding limits:
 node tmp-node-e2e-cd48056b-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 122716160
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #39582

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013f6b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42015d3e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/390/
Multiple broken tests:

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-one-pod" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201414d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-one-container-no-restarts" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4201414d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Failed: [k8s.io] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:90
Mar  8 12:41:28.416: Memory usage exceeding limits:
 node tmp-node-e2e-62c3b723-coreos-alpha-1122-0-0-v20160727:
 container "kubelet": expected RSS memory (MB) < 104857600; got 106672128
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:247

Issues about this test specifically: #30525 #31835 #42110

Failed: [k8s.io] GarbageCollect [Serial] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:255
wait for pod "gc-test-pod-many-containers-many-restarts-two" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42013d610>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:146

Issues about this test specifically: #36903

@dashpole
Contributor

Closing since resource-usage test flakes are covered by #42110, and GarbageCollect flakes are fixed via #42779.
/close
