ci-kubernetes-soak-gce-test: broken test run #38002

Closed · k8s-github-robot opened this issue Dec 3, 2016 · 186 comments
Labels: area/platform/gce, area/test-infra, kind/flake, priority/backlog, sig/node, sig/scheduling

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/60/

Run so broken it didn't make JUnit output!

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 3, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/61/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/62/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/63/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/76/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc422f0e050>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187
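
Most of the scheduler failures in this run are the same precondition tripping: before each SchedulerPredicates case runs, the suite waits up to five minutes for every pod in kube-system to be both Running and Ready, and aborts when any of them (here fluentd, kube-proxy, and node-problem-detector on one minion) stays NotReady. A rough standalone sketch of that kind of check follows; it uses a current client-go, a made-up polling interval, and is not the framework's own helper.

```go
// Standalone illustration (not the e2e framework's helper) of the precondition
// scheduler_predicates.go enforces: all kube-system pods must be Running and
// Ready within a fixed window. The 10s interval is arbitrary; the 5m deadline
// mirrors the "in 5m0s" wording above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(5 * time.Minute)
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		var notReady []corev1.Pod
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning || !podReady(p) {
				notReady = append(notReady, p)
			}
		}
		if len(notReady) == 0 {
			fmt.Printf("all %d kube-system pods are Running and Ready\n", len(pods.Items))
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("%d / %d pods in namespace %q are NOT in RUNNING and READY state\n",
				len(notReady), len(pods.Items), "kube-system")
			for _, p := range notReady {
				fmt.Printf("  %s on %s (phase %s)\n", p.Name, p.Spec.NodeName, p.Status.Phase)
			}
			return
		}
		time.Sleep(10 * time.Second)
	}
}
```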

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc421242730>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc42129c390>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc4214fcde0>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Dec  6 04:54:50.331: Pod did not stop running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:210

Issues about this test specifically: #26955

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc42215cc50>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc4218ceb10>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:215
Dec  6 07:09:11.000: Image puller didn't complete in 8m0s, not running resource usage test since the metrics might be adultrated
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:205

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc420aafc40>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc4223c5e60>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Dec  6 04:51:24.849: Expected "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" from server, got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:271

Issues about this test specifically: #27680
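
This case forwards a port to a test pod and then acts as a raw TCP client: it writes a short payload, stops sending, and expects the pod's server to echo back 100 bytes; here the read returned nothing. A hedged sketch of the client side of such a check (the local port and payload below are placeholders, not the test's actual values):

```go
// Sketch of a TCP client check like the one the port-forward e2e case makes:
// connect to the locally forwarded port, send some data, half-close, and
// verify the echoed reply. Port 8080 and the payload are illustrative.
package main

import (
	"fmt"
	"io"
	"net"
	"strings"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8080", 10*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send a small request, then half-close so the server knows we are done writing.
	if _, err := conn.Write([]byte("abc")); err != nil {
		panic(err)
	}
	if tcp, ok := conn.(*net.TCPConn); ok {
		tcp.CloseWrite()
	}

	// Read everything the server sends back before it closes the connection.
	reply, err := io.ReadAll(conn)
	if err != nil {
		panic(err)
	}

	want := strings.Repeat("x", 100)
	if string(reply) != want {
		fmt.Printf("expected %q from server, got %q\n", want, string(reply))
		return
	}
	fmt.Println("server echoed the expected payload")
}
```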

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc421803130>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc4229a05d0>: {
        s: "3 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                   NODE                            PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nkube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\nnode-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]\n",
    }
    3 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                   NODE                            PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-bootstrap-e2e-minion-group-ccml bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:59:35 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    kube-proxy-bootstrap-e2e-minion-group-ccml            bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    node-problem-detector-v0.1-r97lg                      bootstrap-e2e-minion-group-ccml Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:25 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-06 04:58:32 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:94

Issues about this test specifically: #34223

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/206/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 15 15:16:25.334: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-rkfp:
 container "runtime": expected RSS memory (MB) < 314572800; got 321884160
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 323649536
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
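
The resource-tracking failures are all the same comparison: per-container RSS sampled from each node is checked against fixed limits, and despite the "(MB)" wording the numbers are bytes (73400320 B = 70 MiB for "kubelet", 314572800 B = 300 MiB for "runtime"). A small illustration of that comparison, using one value from the log above plus made-up sample data:

```go
// Illustration of the per-container RSS check kubelet_perf.go reports on.
// The limits mirror the values in the logs (bytes, despite the "(MB)" label:
// 73400320 B = 70 MiB, 314572800 B = 300 MiB); the observed numbers here are
// one figure from the log plus a made-up in-limit sample.
package main

import "fmt"

type containerUsage struct {
	node, container string
	rssBytes        uint64
}

func main() {
	limits := map[string]uint64{
		"kubelet": 73400320,  // 70 MiB
		"runtime": 314572800, // 300 MiB
	}
	observed := []containerUsage{
		{"bootstrap-e2e-minion-group-rkfp", "runtime", 321884160},
		{"bootstrap-e2e-minion-group-rkfp", "kubelet", 70000000},
	}
	for _, u := range observed {
		if limit, ok := limits[u.container]; ok && u.rssBytes > limit {
			fmt.Printf("node %s: container %q: expected RSS memory < %d; got %d\n",
				u.node, u.container, limit, u.rssBytes)
		}
	}
}
```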

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 15 17:50:26.335: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-rkfp:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73863168
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241
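
The GCL test has a synthetic pod emit a fixed number of numbered log lines and then queries Google Cloud Logging until none of them are missing; "Expected <int>: 100 to equal <int>: 0" means all 100 expected lines were still absent when the timeout hit. A sketch of the missing-line bookkeeping, with the actual GCL query replaced by a placeholder:

```go
// Sketch of the bookkeeping behind "Some log lines are still missing": emit
// lines numbered 0..N-1, then check which of them showed up in Cloud Logging.
// fetchIngestedLineNumbers is a placeholder for the real GCL query.
package main

import "fmt"

const expectedLines = 100

// Placeholder: in the real test this queries Google Cloud Logging for the
// synthetic pod's entries and returns the line numbers it finds.
func fetchIngestedLineNumbers() []int {
	return nil // pretend nothing was ingested, as in the failures above
}

func main() {
	missing := map[int]bool{}
	for i := 0; i < expectedLines; i++ {
		missing[i] = true
	}
	for _, n := range fetchIngestedLineNumbers() {
		delete(missing, n)
	}
	if len(missing) > 0 {
		fmt.Printf("Some log lines are still missing: %d of %d\n", len(missing), expectedLines)
		return
	}
	fmt.Println("all log lines ingested")
}
```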

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/207/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 15 23:09:53.814: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-rkfp:
 container "runtime": expected RSS memory (MB) < 314572800; got 327069696
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 326635520
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 15 23:35:30.997: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-rkfp:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74698752
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/208/

Multiple broken tests:

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 16 08:19:00.944: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-rkfp:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77242368
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 16 06:54:35.815: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-rkfp:
 container "runtime": expected RSS memory (MB) < 314572800; got 330637312
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 329551872
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/212/

Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc4219bd880>: {
        s: "timeout waiting 5m0s for appropriate cluster size",
    }
    timeout waiting 5m0s for appropriate cluster size
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:118

Issues about this test specifically: #36457
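
This test changes the cluster size and, judging by the error, waits up to five minutes for the node count to actually reach the requested size before checking the kube-dns-autoscaler's behavior; that wait is what timed out here. A generic sketch of such a bounded poll (the size function is a placeholder, and the timeout is shortened so the example finishes quickly):

```go
// Sketch of the wait that timed out above: poll a cluster-size function until
// it reports the desired node count or a deadline passes. The real test counts
// ready nodes and uses a 5m timeout; both are stubbed/shortened here.
package main

import (
	"fmt"
	"time"
)

func waitForClusterSize(size func() int, want int, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if size() == want {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timeout waiting %v for appropriate cluster size", timeout)
}

func main() {
	size := func() int { return 3 } // placeholder: pretend the cluster is stuck at 3 nodes
	if err := waitForClusterSize(size, 4, 2*time.Second, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```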

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 17 09:20:29.457: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73924608
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 17 11:24:00.215: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 325632000
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 337178624
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/213/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 17 19:27:59.532: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 17 14:12:04.018: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74252288
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 17 16:18:36.020: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 328675328
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 338259968
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/214/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 18 01:09:45.158: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77094912
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 18 01:47:34.529: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 18 03:26:10.381: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 327163904
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 337760256
node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 315961344
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/215/

Multiple broken tests:

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 18 05:17:50.881: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 339922944
node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 321646592
node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 329584640
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 18 05:34:17.643: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 18 06:21:41.167: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78098432
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc421d7f950>: {
        s: "1 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                            PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-555hd bootstrap-e2e-minion-group-trh8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-18 09:33:21 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  }]\n",
    }
    1 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                            PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-555hd bootstrap-e2e-minion-group-trh8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-18 09:33:21 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/216/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 18 14:18:55.714: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78434304
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 18 18:29:06.876: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc4228283d0>: {
        s: "1 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                            PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-555hd bootstrap-e2e-minion-group-trh8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-18 18:44:13 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  }]\n",
    }
    1 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                            PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-555hd bootstrap-e2e-minion-group-trh8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-18 18:44:13 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-16 11:52:39 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Dec 18 13:14:08.907: Number of replicas has changed: expected 3, got 4
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:294

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
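
This HPA case scales a ReplicationController from 1 to 3 pods and then verifies decision stability: with the load held steady, the replica count is sampled for a while and must stay at the expected value, and here it drifted from 3 to 4. A sketch of that kind of stability assertion (the replica reader and timings below are placeholders):

```go
// Sketch of a "decision stability" check like the HPA e2e case performs: once
// the target replica count is reached, keep sampling for a window and fail if
// the count ever changes. getReplicas stands in for reading the
// ReplicationController's status; the window and interval are shortened.
package main

import (
	"fmt"
	"time"
)

func ensureStableReplicas(getReplicas func() int, want int, window, interval time.Duration) error {
	deadline := time.Now().Add(window)
	for time.Now().Before(deadline) {
		if got := getReplicas(); got != want {
			return fmt.Errorf("Number of replicas has changed: expected %d, got %d", want, got)
		}
		time.Sleep(interval)
	}
	return nil
}

func main() {
	// Placeholder that pretends the controller briefly scaled to 4 replicas.
	samples := []int{3, 3, 4}
	i := 0
	getReplicas := func() int { n := samples[i%len(samples)]; i++; return n }

	if err := ensureStableReplicas(getReplicas, 3, time.Second, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```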

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 18 13:43:21.725: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 332492800
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 341393408
node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 326086656
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/217/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 18 21:55:01.455: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78278656
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 18 23:37:12.975: Couldn't delete ns: "e2e-tests-sched-pred-wwvl1": namespace e2e-tests-sched-pred-wwvl1 was not deleted with limit: timed out waiting for the condition, pods remaining: 24, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-sched-pred-wwvl1 was not deleted with limit: timed out waiting for the condition, pods remaining: 24, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 19 01:52:39.323: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 339881984
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 344694784
node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 327073792
node bootstrap-e2e-minion-group-yrv3:
 container "runtime": expected RSS memory (MB) < 314572800; got 317771776
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 18 19:45:28.715: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/218/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 19 08:18:41.453: Couldn't delete ns: "e2e-tests-sched-pred-nmxwm": namespace e2e-tests-sched-pred-nmxwm was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-sched-pred-nmxwm was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 19 03:39:06.643: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79462400
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 19 05:52:31.438: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 337530880
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 344424448
node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 335179776
node bootstrap-e2e-minion-group-yrv3:
 container "runtime": expected RSS memory (MB) < 314572800; got 316203008
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 19 07:46:40.958: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/219/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 19 10:59:55.162: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xjny:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73699328
node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80805888
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 19 12:18:31.330: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 347156480
node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 337633280
node bootstrap-e2e-minion-group-yrv3:
 container "runtime": expected RSS memory (MB) < 314572800; got 320073728
node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 340979712
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 19 15:07:34.690: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/220/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 19 20:41:04.568: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80842752
node bootstrap-e2e-minion-group-xjny:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73891840
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 19 22:02:56.384: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 20 00:05:27.072: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 343146496
node bootstrap-e2e-minion-group-yrv3:
 container "runtime": expected RSS memory (MB) < 314572800; got 324583424
node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 343597056
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 353636352
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/221/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 20 04:17:05.594: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-trh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 345513984
node bootstrap-e2e-minion-group-xjny:
 container "runtime": expected RSS memory (MB) < 314572800; got 349945856
node bootstrap-e2e-minion-group-y9se:
 container "runtime": expected RSS memory (MB) < 314572800; got 344170496
node bootstrap-e2e-minion-group-yrv3:
 container "runtime": expected RSS memory (MB) < 314572800; got 328843264
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 20 05:35:19.875: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 20 01:29:38.101: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xjny:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74592256
node bootstrap-e2e-minion-group-trh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81297408
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/230/
Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 22 16:39:25.933: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 22 20:48:33.783: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-spgl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76742656
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 22 13:59:42.771: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-svn3:
 container "runtime": expected RSS memory (MB) < 314572800; got 316792832
node bootstrap-e2e-minion-group-spgl:
 container "runtime": expected RSS memory (MB) < 314572800; got 319627264
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 22 14:47:42.717: Couldn't delete ns: "e2e-tests-sched-pred-cg6qx": namespace e2e-tests-sched-pred-cg6qx was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-sched-pred-cg6qx was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/231/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 23 00:35:05.773: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4212fb090>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                            PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-7dc5p bootstrap-e2e-minion-group-svn3 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 04:01:07 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                            PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-7dc5p bootstrap-e2e-minion-group-svn3 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 04:01:07 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #36914

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 23 04:35:39.977: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-spgl:
 container "runtime": expected RSS memory (MB) < 314572800; got 330977280
node bootstrap-e2e-minion-group-svn3:
 container "runtime": expected RSS memory (MB) < 314572800; got 319598592
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 22 23:27:24.952: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-spgl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77271040
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/232/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 23 05:11:04.613: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-svn3:
 container "runtime": expected RSS memory (MB) < 314572800; got 320319488
node bootstrap-e2e-minion-group-spgl:
 container "runtime": expected RSS memory (MB) < 314572800; got 328732672
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 23 07:00:18.223: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 23 11:47:40.591: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-spgl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79532032
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4217c34c0>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                            PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-7dc5p bootstrap-e2e-minion-group-svn3 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 11:56:43 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                            PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-7dc5p bootstrap-e2e-minion-group-svn3 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 11:56:43 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-21 03:54:03 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/233/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 23 16:41:24.003: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-spgl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80388096
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:73
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Dec 23 13:52:11.435: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-spgl:
 container "runtime": expected RSS memory (MB) < 314572800; got 335147008
node bootstrap-e2e-minion-group-svn3:
 container "runtime": expected RSS memory (MB) < 314572800; got 326995968
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Dec 23 15:39:59.095: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:280

Issues about this test specifically: #29647 #35627 #38293

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1621/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-vx63
not to equal
    <string>: bootstrap-e2e-minion-group-vx63
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307
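
For context on what this priorities spec asserts: the test schedules a pod carrying a pod-anti-affinity term and expects it to land on a node other than the one hosting the pods it is told to avoid; the "not to equal" failure above means both ended up on the same node. A rough sketch of such an anti-affinity constraint, using the current upstream API types (illustrative only; the label, weight, and exact spec used by the test are assumptions here):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A soft (preferred) anti-affinity term: the scheduler should try to place
    // the pod, keyed by hostname, away from nodes already running pods labeled
    // security=s1. The label and weight are made up for illustration.
    affinity := &v1.Affinity{
        PodAntiAffinity: &v1.PodAntiAffinity{
            PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
                Weight: 100,
                PodAffinityTerm: v1.PodAffinityTerm{
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"security": "s1"},
                    },
                    TopologyKey: "kubernetes.io/hostname",
                },
            }},
        },
    }
    fmt.Println(affinity.PodAntiAffinity.PreferredDuringSchedulingIgnoredDuringExecution[0].PodAffinityTerm.TopologyKey)
}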

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 13 10:29:49.121: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
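
For readers skimming the Test {e2e.go} entries: the soak job drives hack/ginkgo-e2e.sh with the Ginkgo skip regex quoted above, so Disruptive, Flaky, and Feature-gated specs never run here, and the "exit status 1" generally just means some of the specs listed in the same comment failed. A small self-contained check of what that pattern excludes (the spec names marked hypothetical below are made up for illustration):

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Skip pattern from the ginkgo-e2e.sh invocation quoted above.
    skip := regexp.MustCompile(`\[Disruptive\]|\[Flaky\]|\[Feature:.+\]`)
    specs := []string{
        // Real spec name taken from the failures in this thread (not skipped):
        "[k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.",
        // Hypothetical names, just to show what the pattern catches:
        "[k8s.io] Example [Disruptive] spec",
        "[k8s.io] Example [Feature:Example] spec",
    }
    for _, s := range specs {
        fmt.Printf("skipped=%v  %s\n", skip.MatchString(s), s)
    }
}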

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 13 04:50:37.203: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-mrbz:
 container "kubelet": expected RSS memory (MB) < 125829120; got 127258624
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 13 06:40:37.477: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-mrbz:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88489984
node bootstrap-e2e-minion-group-vx63:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77750272
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 13 08:08:39.756: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1622/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 13 16:38:05.576: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-vx63:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79650816
node bootstrap-e2e-minion-group-mrbz:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92643328
node bootstrap-e2e-minion-group-pvrl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78659584
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 13 12:23:40.381: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 13 15:26:55.332: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 13 16:15:33.851: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-mrbz:
 container "kubelet": expected RSS memory (MB) < 125829120; got 130093056
node bootstrap-e2e-minion-group-vx63:
 container "runtime": expected RSS memory (MB) < 314572800; got 323018752
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1623/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 14 03:37:33.074: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-pvrl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83525632
node bootstrap-e2e-minion-group-vx63:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84520960
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 13 22:31:13.301: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-mrbz:
 container "runtime": expected RSS memory (MB) < 314572800; got 317927424, container "kubelet": expected RSS memory (MB) < 125829120; got 136327168
node bootstrap-e2e-minion-group-vx63:
 container "runtime": expected RSS memory (MB) < 314572800; got 327172096
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 13 23:00:22.480: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 14 01:07:13.053: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-twcg
not to equal
    <string>: bootstrap-e2e-minion-group-twcg
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1624/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 14 04:30:34.724: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-vx63:
 container "runtime": expected RSS memory (MB) < 314572800; got 333508608
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 14 06:45:58.618: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 14 07:43:17.874: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-pvrl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 86319104
node bootstrap-e2e-minion-group-vx63:
 container "kubelet": expected RSS memory (MB) < 73400320; got 86982656
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-twcg
not to equal
    <string>: bootstrap-e2e-minion-group-twcg
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1651/
Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 20 12:42:45.875: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 17:24:54.868: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-g4t6:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77926400
node bootstrap-e2e-minion-group-gl9h:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74829824
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-m99z
not to equal
    <string>: bootstrap-e2e-minion-group-m99z
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1652/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-s8lp
not to equal
    <string>: bootstrap-e2e-minion-group-s8lp
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 22:18:14.574: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-gl9h:
 container "runtime": expected RSS memory (MB) < 314572800; got 319918080
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 23:02:32.548: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-gl9h:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83836928
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1658/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:641
Apr 22 16:39:40.372: Unexpected kubectl exec output: %!(EXTRA string=I0422 23:39:40.301942      62 merged_client_builder.go:122] Using in-cluster configuration
I0422 23:39:40.302889      62 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/servergroups.json
I0422 23:39:40.303093      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.303202      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json
I0422 23:39:40.303331      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.303449      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json
I0422 23:39:40.303570      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.303773      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json
I0422 23:39:40.303888      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json
I0422 23:39:40.304018      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json
I0422 23:39:40.304134      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json
I0422 23:39:40.304269      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.304411      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.304545      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.304651      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.304782      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.305056      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json
I0422 23:39:40.305179      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json
I0422 23:39:40.305340      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json
I0422 23:39:40.305749      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json
I0422 23:39:40.306064      62 merged_client_builder.go:122] Using in-cluster configuration
I0422 23:39:40.306607      62 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/servergroups.json
I0422 23:39:40.306724      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.306833      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json
I0422 23:39:40.306927      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.307033      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json
I0422 23:39:40.307140      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.307268      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json
I0422 23:39:40.307389      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json
I0422 23:39:40.307506      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json
I0422 23:39:40.307611      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json
I0422 23:39:40.307751      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.307883      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.308011      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.308107      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.308242      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.308479      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json
I0422 23:39:40.308593      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json
I0422 23:39:40.308737      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json
I0422 23:39:40.309197      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json
I0422 23:39:40.309548      62 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/servergroups.json
I0422 23:39:40.309661      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.309782      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json
I0422 23:39:40.309887      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.310000      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json
I0422 23:39:40.310135      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.310261      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json
I0422 23:39:40.310373      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json
I0422 23:39:40.310517      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json
I0422 23:39:40.310628      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json
I0422 23:39:40.310762      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.310899      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.311045      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.311167      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.311295      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.311541      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json
I0422 23:39:40.311665      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json
I0422 23:39:40.311808      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json
I0422 23:39:40.312267      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json
I0422 23:39:40.312543      62 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/servergroups.json
I0422 23:39:40.312652      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.312757      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json
I0422 23:39:40.312860      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.312988      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json
I0422 23:39:40.313147      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.313308      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json
I0422 23:39:40.313457      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json
I0422 23:39:40.313597      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json
I0422 23:39:40.313721      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json
I0422 23:39:40.313825      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.313965      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.314101      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.314201      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json
I0422 23:39:40.314344      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json
I0422 23:39:40.314589      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json
I0422 23:39:40.314717      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json
I0422 23:39:40.314894      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json
I0422 23:39:40.315364      62 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json
I0422 23:39:40.316436      62 merged_client_builder.go:122] Using in-cluster configuration
I0422 23:39:40.317001      62 merged_client_builder.go:122] Using in-cluster configuration
I0422 23:39:40.355788      62 round_trippers.go:417] GET https://10.0.0.1:443/api/v1/namespaces/invalid/pods 200 OK in 38 milliseconds
No resources found.
)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:639

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-svrw
not to equal
    <string>: bootstrap-e2e-minion-group-svrw
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 22 19:27:33.110: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82026496
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412
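
Note that the limits quoted in these kubelet_perf reports are byte counts despite the "(MB)" label: 73400320 is 70 MiB (the kubelet limit at 0 pods), 125829120 is 120 MiB (at 100 pods), and 314572800 is 300 MiB (the runtime limit). A minimal sketch of that comparison, assuming nothing more than a simple RSS-versus-threshold check:

```go
// Minimal sketch, not the e2e suite's code: the limits in these reports are byte
// counts even though the message says "(MB)" -- 73400320 B = 70 MiB,
// 125829120 B = 120 MiB, 314572800 B = 300 MiB.
package main

import "fmt"

const miB = 1 << 20

// checkRSS mirrors the shape of the failure message above: error out when the
// observed RSS reaches the per-container limit.
func checkRSS(container string, gotBytes, limitBytes uint64) error {
	if gotBytes >= limitBytes {
		return fmt.Errorf("container %q: expected RSS memory (MB) < %d; got %d",
			container, limitBytes, gotBytes)
	}
	return nil
}

func main() {
	// Numbers taken from the report above.
	fmt.Println(checkRSS("kubelet", 82026496, 70*miB))
}
```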

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
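
The `--ginkgo.skip` value in that command line is a regular expression matched against the full spec text, so anything tagged [Disruptive], [Flaky] or [Feature:...] is excluded from the soak run; exit status 1 only says that some non-skipped spec failed. A small illustration of how that pattern behaves, shown with Go's regexp package purely for reference:

```go
// Illustration only: ginkgo's --ginkgo.skip flag is a regexp applied to the full
// spec text, so the pattern below drops [Disruptive], [Flaky] and [Feature:...] specs.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	skip := regexp.MustCompile(`\[Disruptive\]|\[Flaky\]|\[Feature:.+\]`)
	for _, name := range []string{
		"[k8s.io] Kubelet [Serial] [Slow] regular resource usage tracking for 0 pods per node",
		"[k8s.io] Reboot [Disruptive] each node by ordering clean reboot",
		"[k8s.io] Cluster size autoscaling [Feature:ClusterSizeAutoscalingScaleUp] example",
	} {
		fmt.Printf("skip=%v  %s\n", skip.MatchString(name), name)
	}
}
```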

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1660/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 10:23:27.426: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-wnvg:
 container "runtime": expected RSS memory (MB) < 314572800; got 316145664
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-w8hf
not to equal
    <string>: bootstrap-e2e-minion-group-w8hf
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 12:34:27.060: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xgrr:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80764928
node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87818240
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1661/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 17:38:42.798: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88100864
node bootstrap-e2e-minion-group-xgrr:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82255872
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 20:34:08.357: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 125829120; got 127598592, container "runtime": expected RSS memory (MB) < 314572800; got 321343488
node bootstrap-e2e-minion-group-xgrr:
 container "runtime": expected RSS memory (MB) < 314572800; got 314707968
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-zdsz
not to equal
    <string>: bootstrap-e2e-minion-group-zdsz
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1662/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 22:12:39.916: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 125829120; got 130990080, container "runtime": expected RSS memory (MB) < 314572800; got 317964288
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-zdsz
not to equal
    <string>: bootstrap-e2e-minion-group-zdsz
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 03:51:35.225: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zdsz:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76255232
node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 73400320; got 91721728
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1664/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 19:48:01.292: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zdsz:
 container "runtime": expected RSS memory (MB) < 314572800; got 322691072
node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 125829120; got 130113536, container "runtime": expected RSS memory (MB) < 314572800; got 326152192
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 15:18:40.151: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 73400320; got 94556160
node bootstrap-e2e-minion-group-xgrr:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75362304
node bootstrap-e2e-minion-group-zdsz:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79495168
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1665/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 25 03:42:52.019: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xgrr:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80252928
node bootstrap-e2e-minion-group-zdsz:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83025920
node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 73400320; got 95219712
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421467a70>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                            PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-8vtkk bootstrap-e2e-minion-group-wnvg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 20:00:45 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 03:58:15 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 20:00:45 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                            PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-8vtkk bootstrap-e2e-minion-group-wnvg Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 20:00:45 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 03:58:15 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 20:00:45 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #28071
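
Several of the scheduling failures in these runs are really the same precondition timing out: before exercising predicates, the suite waits up to 5m for every kube-system pod to be Running and Ready, and here monitoring-influxdb-grafana-v4 never went Ready. A minimal sketch of that readiness test (illustrative, not the framework's helper; the import path assumes the current k8s.io/api layout):

```go
// Minimal sketch, not the framework's helper: a pod only counts as healthy for the
// "RUNNING and READY" wait if its phase is Running and its PodReady condition is True.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func runningAndReady(p *v1.Pod) bool {
	if p.Status.Phase != v1.PodRunning {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Mirrors the influxdb pod above: Running, but Ready=False (ContainersNotReady).
	p := &v1.Pod{Status: v1.PodStatus{
		Phase:      v1.PodRunning,
		Conditions: []v1.PodCondition{{Type: v1.PodReady, Status: v1.ConditionFalse}},
	}}
	fmt.Println(runningAndReady(p)) // false -> the 5m wait eventually times out
}
```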

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 25 00:26:41.129: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-wnvg:
 container "kubelet": expected RSS memory (MB) < 125829120; got 131055616, container "runtime": expected RSS memory (MB) < 314572800; got 329932800
node bootstrap-e2e-minion-group-zdsz:
 container "runtime": expected RSS memory (MB) < 314572800; got 325185536
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 25 02:17:50.947: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1666/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:194
Apr 25 06:07:17.450: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:190
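
For context, the taint-manager test places a NoExecute taint on the node hosting its pod and then expects that pod (which has no matching toleration) to be evicted within the timeout; "Failed to evict Pod" means the eviction never happened. A minimal sketch of such a taint, with a hypothetical key and value (the NoExecute effect is what triggers eviction):

```go
// Minimal sketch with a hypothetical key/value: any pod without a toleration for
// this taint is evicted once the taint lands on its node (effect NoExecute).
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	t := v1.Taint{
		Key:    "example.com/e2e-evict", // hypothetical key, not the test's literal value
		Value:  "evict",
		Effect: v1.TaintEffectNoExecute,
	}
	fmt.Printf("%s=%s:%s\n", t.Key, t.Value, t.Effect)
}
```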

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:207
Expected error:
    <*errors.errorString | 0xc4214dd520>: {
        s: "err waiting for DNS replicas to satisfy 3, got 2: timed out waiting for the condition",
    }
    err waiting for DNS replicas to satisfy 3, got 2: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:171

Issues about this test specifically: #36569 #38446

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:473
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc422f5a000>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                        NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-7ddd1                   bootstrap-e2e-minion-group-tjl7 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:10 -0700 PDT ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  }]\nkube-proxy-bootstrap-e2e-minion-group-tjl7 bootstrap-e2e-minion-group-tjl7 Failed        []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                        NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-7ddd1                   bootstrap-e2e-minion-group-tjl7 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:10 -0700 PDT ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  }]
    kube-proxy-bootstrap-e2e-minion-group-tjl7 bootstrap-e2e-minion-group-tjl7 Failed        []
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:65
Apr 25 11:00:54.353: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #28657 #30519 #33878
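
These HPA cases drive a ReplicationController or ReplicaSet between 1 and 5 replicas around a CPU utilization target and then check that the replica count settles where expected; "timeout waiting 15m0s for pods size to be 3" is that stability check failing. A minimal sketch of an autoscaling/v1 object of the shape involved (names and numbers are illustrative, not the e2e fixture's values):

```go
// Minimal sketch, not the e2e fixture: an autoscaling/v1 HPA of the shape these
// tests create -- scale a ReplicationController between 1 and 5 replicas around a
// 50% CPU target. Names and numbers here are illustrative.
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(1)
	targetCPU := int32(50)
	hpa := autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rc", Namespace: "e2e-tests-horizontal-pod-autoscaling"},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "v1", Kind: "ReplicationController", Name: "rc",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
	fmt.Printf("%s: %d-%d replicas @ %d%% CPU\n",
		hpa.Name, *hpa.Spec.MinReplicas, hpa.Spec.MaxReplicas, *hpa.Spec.TargetCPUUtilizationPercentage)
}
```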

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:144
Expected error:
    <*errors.errorString | 0xc4225636f0>: {
        s: "Error waiting for 314 pods to be running - probably a timeout: Timeout while waiting for pods with labels \"startPodsID=6205fd00-29dc-11e7-aaba-0242ac11000b\" to be running",
    }
    Error waiting for 314 pods to be running - probably a timeout: Timeout while waiting for pods with labels "startPodsID=6205fd00-29dc-11e7-aaba-0242ac11000b" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:136

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:54
Apr 25 12:08:07.384: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc422a41a50>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                        NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-7ddd1                   bootstrap-e2e-minion-group-tjl7 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:10 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  }]\nkube-proxy-bootstrap-e2e-minion-group-tjl7 bootstrap-e2e-minion-group-tjl7 Failed        []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                        NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-7ddd1                   bootstrap-e2e-minion-group-tjl7 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:10 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  }]
    kube-proxy-bootstrap-e2e-minion-group-tjl7 bootstrap-e2e-minion-group-tjl7 Failed        []
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc420a49600>: {
        s: "2 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                        NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-7ddd1                   bootstrap-e2e-minion-group-tjl7 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:10 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  }]\nkube-proxy-bootstrap-e2e-minion-group-tjl7 bootstrap-e2e-minion-group-tjl7 Failed        []\n",
    }
    2 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                        NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-7ddd1                   bootstrap-e2e-minion-group-tjl7 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:10 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 08:27:00 -0700 PDT  }]
    kube-proxy-bootstrap-e2e-minion-group-tjl7 bootstrap-e2e-minion-group-tjl7 Failed        []
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:98

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1670/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 26 14:16:18.262: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-f4tt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83439616
node bootstrap-e2e-minion-group-rjq6:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83382272
node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79069184
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 26 16:29:10.910: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-f4tt:
 container "runtime": expected RSS memory (MB) < 314572800; got 321269760
node bootstrap-e2e-minion-group-rjq6:
 container "kubelet": expected RSS memory (MB) < 125829120; got 129486848
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:176
Apr 26 17:16:58.959: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://104.197.248.244:32344/hostName
retrieved map[netserver-2:{} netserver-1:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 26 21:36:08.540: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1671/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 26 23:38:23.645: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81403904
node bootstrap-e2e-minion-group-f4tt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84897792
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 04:46:06.844: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-f4tt:
 container "runtime": expected RSS memory (MB) < 314572800; got 335302656
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 27 07:28:31.930: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1672/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc421cc8400>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                            PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-n87ll bootstrap-e2e-minion-group-rjq6 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 06:06:12 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 12:47:46 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 06:06:12 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                            PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-n87ll bootstrap-e2e-minion-group-rjq6 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 06:06:12 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 12:47:46 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-25 06:06:12 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:84

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 13:54:37.326: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-rjq6:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79306752
node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84369408
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 27 09:04:59.583: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 10:56:56.822: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-f4tt:
 container "runtime": expected RSS memory (MB) < 314572800; got 336416768
node bootstrap-e2e-minion-group-tjl7:
 container "runtime": expected RSS memory (MB) < 314572800; got 316657664
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1673/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-snh8
not to equal
    <string>: bootstrap-e2e-minion-group-snh8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 19:59:13.280: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83714048
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 21:23:37.034: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-tjl7:
 container "runtime": expected RSS memory (MB) < 314572800; got 318533632
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1674/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-snh8
not to equal
    <string>: bootstrap-e2e-minion-group-snh8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 28 02:53:07.271: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 86315008
node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79847424
node bootstrap-e2e-minion-group-snh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73527296
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 28 01:12:23.591: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-tjl7:
 container "runtime": expected RSS memory (MB) < 314572800; got 316997632
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1675/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 28 13:31:34.355: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-snh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79609856
node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84082688
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-snh8
not to equal
    <string>: bootstrap-e2e-minion-group-snh8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 28 13:08:30.063: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-snh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 315469824
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1676/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 28 15:52:43.353: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84578304
node bootstrap-e2e-minion-group-snh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78753792
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 28 20:29:33.331: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-snh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 318812160
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:54
Expected error:
    <*errors.errorString | 0xc42039f180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:246

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1677/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-snh8
not to equal
    <string>: bootstrap-e2e-minion-group-snh8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Secrets should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 29 01:06:31.224: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85204992
node bootstrap-e2e-minion-group-snh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81616896
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 29 01:49:17.343: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-snh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 320004096
node bootstrap-e2e-minion-group-zls2:
 container "runtime": expected RSS memory (MB) < 314572800; got 318283776
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1681/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 30 02:08:53.133: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-snh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 348618752
node bootstrap-e2e-minion-group-zls2:
 container "runtime": expected RSS memory (MB) < 314572800; got 324227072
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 30 04:27:58.970: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-snh8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 89260032
node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80732160
node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88489984
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-snh8
not to equal
    <string>: bootstrap-e2e-minion-group-snh8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 30 04:59:57.914: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1682/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:447
Expected
    <bool>: false
to equal
    <bool>: true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:5170

Issues about this test specifically: #31918

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Expected error:
    <*errors.StatusError | 0xc4229fc300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.128.0.2:51026->10.128.0.3:4194: read: connection reset by peer'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-snh8:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.128.0.2:51026->10.128.0.3:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-snh8:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.128.0.2:51026->10.128.0.3:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-snh8:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 30 11:19:07.252: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88965120
node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80322560
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1683/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 30 17:27:31.558: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80830464
node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88391680
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 30 21:49:34.117: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 125829120; got 128356352, container "runtime": expected RSS memory (MB) < 314572800; got 324743168
node bootstrap-e2e-minion-group-tjl7:
 container "runtime": expected RSS memory (MB) < 314572800; got 317001728
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 30 22:46:58.691: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1684/
Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
May  1 04:25:30.525: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  1 07:57:42.818: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zls2:
 container "runtime": expected RSS memory (MB) < 314572800; got 328146944
node bootstrap-e2e-minion-group-tjl7:
 container "runtime": expected RSS memory (MB) < 314572800; got 316710912
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-snh8
not to equal
    <string>: bootstrap-e2e-minion-group-snh8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  1 04:16:34.699: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-zls2:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88948736
node bootstrap-e2e-minion-group-tjl7:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81555456
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1692/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:65
May  3 16:19:14.052: Number of replicas has changed: expected 3, got 4
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:351

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  3 16:48:43.633: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xs0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80486400
node bootstrap-e2e-minion-group-xwhn:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77516800
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:62
May  3 17:09:11.076: Number of replicas has changed: expected 3, got 4
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:351

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gce-test/1693/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  4 03:15:39.905: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xs0c:
 container "runtime": expected RSS memory (MB) < 314572800; got 314744832
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: bootstrap-e2e-minion-group-z33d
not to equal
    <string>: bootstrap-e2e-minion-group-z33d
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  4 04:18:34.658: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xs0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87900160
node bootstrap-e2e-minion-group-xwhn:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83898368
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@thockin thockin removed this from the v1.6.1 milestone May 27, 2017
@k8s-github-robot

This Issue hasn't been active in 93 days. Closing this Issue. Please reopen if you would like to work towards merging this change, if/when the Issue is ready for the next round of review.

cc @k8s-merge-robot @spxtr

You can add 'keep-open' label to prevent this from happening again, or add a comment to keep it open another 90 days
