
kubernetes-soak-continuous-e2e-gci-gce-1.4: broken test run #34875

Closed
k8s-github-robot opened this issue Oct 15, 2016 · 59 comments
Labels:
kind/flake (Categorizes issue or PR as related to a flaky test.)
priority/important-soon (Must be staffed and worked on either currently, or very soon, ideally in time for the next release.)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/230/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 14 17:22:54.634: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 97832960
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 101064704
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 96669696
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 14 17:55:36.018: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 89382912
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 90091520
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87027712
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 14 20:09:15.683: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79515648
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77168640
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79380480
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:270

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 14 16:18:56.001: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Previous issues for this suite: #34121 #34736
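Because the same "expected RSS memory" lines recur across every build below, triage is easier with a small parser. A minimal sketch (the regex assumes the exact phrasing of the log lines quoted in this issue):

```python
import re

# Parse gubernator log excerpts for kubelet RSS overruns.
# Pattern matches the exact failure phrasing quoted above; the "(MB)" label
# is literal text in the log even though the values are bytes.
PATTERN = re.compile(
    r'container "kubelet": expected RSS memory \(MB\) < (\d+); got (\d+)'
)

def overruns(log_text: str):
    """Yield (limit_bytes, actual_bytes, pct_over) for each failing sample."""
    for m in PATTERN.finditer(log_text):
        limit, actual = int(m.group(1)), int(m.group(2))
        yield limit, actual, 100.0 * (actual - limit) / limit

sample = '''
 container "kubelet": expected RSS memory (MB) < 83886080; got 97832960
 container "kubelet": expected RSS memory (MB) < 73400320; got 89382912
'''
for limit, actual, pct in overruns(sample):
    print(f"limit={limit} actual={actual} over by {pct:.1f}%")
```

Run over all the build logs linked below, this makes it easy to see that the overrun percentage is stable (not a one-off spike), which points at a real memory-usage regression or a stale threshold rather than ordinary flakiness.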

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/backlog Higher priority than priority/awaiting-more-evidence. area/test-infra labels Oct 15, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/231/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 14 21:48:00.417: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77017088
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80015360
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79372288
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 14 22:49:26.163: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 14 23:48:01.103: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 98287616
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 101322752
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 96808960
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 03:12:52.382: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 89796608
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 90140672
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85090304
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/232/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 06:55:43.859: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 95838208
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 99348480
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102490112
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 07:36:16.180: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92377088
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 91148288
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87375872
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 07:56:59.243: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77807616
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81432576
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81440768
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 15 03:22:31.800: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/233/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 11:52:39.564: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 90824704
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 91840512
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87019520
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 15 12:34:53.425: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 14:52:32.957: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78516224
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82010112
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81440768
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 15:26:13.620: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102535168
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 99348480
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 98148352
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820b3e3b0>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-15 10:50:38 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-15 10:50:38 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8201cf740>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-15 11:04:28 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-15 11:04:28 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/234/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 21:56:40.685: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92606464
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92434432
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88301568
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 18:29:15.063: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82395136
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81768448
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77828096
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 19:13:49.982: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 98897920
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102572032
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 99667968
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 15 19:41:06.162: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/235/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 01:10:20.209: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83099648
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81715200
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78438400
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 16 01:46:32.058: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 03:33:03.237: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 100102144
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102699008
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 98025472
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 15 23:37:00.118: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87863296
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92037120
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92119040
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/236/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 16 04:51:17.493: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 06:18:31.132: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81809408
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79491072
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83324928
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 07:13:37.304: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102768640
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 99831808
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102436864
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 09:55:32.303: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92831744
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92782592
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 90824704
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/237/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 11:45:43.867: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92508160
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88752128
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 93765632
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 12:55:15.355: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80068608
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83861504
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81772544
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82158f9c0>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-16 15:15:05 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-16 15:15:05 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #27115 #28070 #30747 #31341

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 16:20:07.704: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 100450304
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 105644032
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 99733504
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 16 16:36:03.384: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/238/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 16 19:55:11.699: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 21:11:49.270: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92651520
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 93663232
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88543232
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8214d0a90>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-16 21:27:52 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-16 21:27:52 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821554c90>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-16 21:34:39 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-r3j4x jenkins-e2e-minion-group-ahr9 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-16 21:34:39 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-13 18:07:30 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28019

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 16 22:22:58.137: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83869696
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82776064
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80711680
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 00:17:11.293: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 106008576
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 99753984
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102473728
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/239/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 04:31:26.388: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 100270080
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 104849408
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102121472
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 05:36:02.456: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85995520
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83689472
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81727488
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 17 06:23:20.520: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 01:09:45.541: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92893184
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 91549696
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 93597696
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/240/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 07:38:30.404: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85381120
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84398080
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80494592
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 17 07:53:10.662: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 09:43:34.255: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 90238976
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 93429760
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 95285248
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 10:15:21.523: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 106463232
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 102285312
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 101830656
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot added priority/important-soon (Must be staffed and worked on either currently, or very soon, ideally in time for the next release.) and removed priority/backlog (Higher priority than priority/awaiting-more-evidence.) labels on Oct 17, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/241/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 13:55:15.165: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 83886080; got 101875712
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 108662784
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 83886080; got 103342080
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 16:28:44.094: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 92962816
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 93282304
node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 95100928
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 17 18:28:52.228: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-ahr9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85475328
node jenkins-e2e-minion-group-hy7r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85319680
node jenkins-e2e-minion-group-m5s1:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82489344
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 1590273489230082302
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:458

Issues about this test specifically: #26127 #28081

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/250/

Multiple broken tests:

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Expected
    <string>: 
to equal
    <string>: 817365411733481964
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:458

Issues about this test specifically: #28010 #28427 #33997

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 19 22:16:10.389: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73678848
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 19 22:42:00.274: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81334272
node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82534400
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82669568
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 19 23:31:11.901: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 83886080; got 90345472
node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 83886080; got 92893184
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 83886080; got 92450816
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/252/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 20 10:39:17.923: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 06:16:46.689: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74756096
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75513856
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73469952
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 08:35:46.525: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81883136
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83709952
node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85463040
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 09:45:16.421: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 83886080; got 95002624
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 83886080; got 96473088
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 83886080; got 92790784
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035
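To quantify how far each node is over its limit, the expected/got pairs can be parsed out of the failure text. A minimal sketch; `LINE_RE` and `overage` are hypothetical helpers for reading these logs, not part of the e2e suite:

```python
import re

# Matches one per-container line from the kubelet_perf failure output.
LINE_RE = re.compile(r"expected RSS memory \(MB\) < (\d+); got (\d+)")

def overage(line):
    """Return how far 'got' exceeds 'expected', as a percentage."""
    m = LINE_RE.search(line)
    expected, got = int(m.group(1)), int(m.group(2))
    return (got - expected) / expected * 100

sample = 'container "kubelet": expected RSS memory (MB) < 73400320; got 92651520'
print(f"{overage(sample):.1f}% over the limit")
```

Run against the reports above, the 35-pod failures cluster around 10-30% over the limit rather than marginal flake-level noise, which is consistent with the bot re-filing the same three tests on every run.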

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/253/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 20 15:31:39.218: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 16:43:44.214: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77303808
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76349440
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73596928
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 17:13:03.858: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 83886080; got 96567296
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 83886080; got 95358976
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 83886080; got 93147136
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 13:57:01.178: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85884928
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84402176
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82862080
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/254/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 18:40:55.642: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88469504
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85467136
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85753856
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820f2fae0>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-v1r95 jenkins-e2e-minion-group-bt0c Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-19 07:45:46 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-20 19:23:26 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-19 07:45:46 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-v1r95 jenkins-e2e-minion-group-bt0c Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-19 07:45:46 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-20 19:23:26 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-19 07:45:46 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #34223

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 22:44:17.988: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76308480
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74678272
node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79466496
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 20 23:07:47.769: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-52oj:
 container "kubelet": expected RSS memory (MB) < 83886080; got 97558528
node jenkins-e2e-minion-group-bt0c:
 container "kubelet": expected RSS memory (MB) < 83886080; got 101404672
node jenkins-e2e-minion-group-zi7i:
 container "kubelet": expected RSS memory (MB) < 83886080; got 94068736
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 4236232223139004168
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:458

Issues about this test specifically: #26127 #28081

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/279/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210a7990>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-axfdn jenkins-e2e-minion-group-yh2o Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-26 08:37:48 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-axfdn jenkins-e2e-minion-group-yh2o Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-26 08:37:48 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82023b2b0>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-axfdn jenkins-e2e-minion-group-yh2o Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-26 09:19:30 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-axfdn jenkins-e2e-minion-group-yh2o Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-10-26 09:19:30 -0700 PDT ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-10-25 05:32:58 -0700 PDT  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 26 06:46:23.515: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-dgbt:
 container "kubelet": expected RSS memory (MB) < 83886080; got 91234304
node jenkins-e2e-minion-group-io1r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 90877952
node jenkins-e2e-minion-group-yh2o:
 container "kubelet": expected RSS memory (MB) < 83886080; got 84643840
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 26 07:41:15.873: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-yh2o:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73515008
node jenkins-e2e-minion-group-dgbt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79507456
node jenkins-e2e-minion-group-io1r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78528512
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/281/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 26 18:03:52.459: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-dgbt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82841600
node jenkins-e2e-minion-group-io1r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80867328
node jenkins-e2e-minion-group-yh2o:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76980224
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 3596759788632626710
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:458

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 26 20:46:24.527: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-yh2o:
 container "kubelet": expected RSS memory (MB) < 83886080; got 88993792
node jenkins-e2e-minion-group-dgbt:
 container "kubelet": expected RSS memory (MB) < 83886080; got 93941760
node jenkins-e2e-minion-group-io1r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 91394048
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 26 17:37:21.937: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/282/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 27 05:28:20.046: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-dgbt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74231808
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 27 01:53:24.655: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-dgbt:
 container "kubelet": expected RSS memory (MB) < 83886080; got 94261248
node jenkins-e2e-minion-group-io1r:
 container "kubelet": expected RSS memory (MB) < 83886080; got 93089792
node jenkins-e2e-minion-group-yh2o:
 container "kubelet": expected RSS memory (MB) < 83886080; got 91152384
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Oct 27 02:35:33.227: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-dgbt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84316160
node jenkins-e2e-minion-group-io1r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83509248
node jenkins-e2e-minion-group-yh2o:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79110144
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Oct 27 04:40:14.631: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/311/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Expected
    <string>: 
to equal
    <string>: 1685366230701033889
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:458

Issues about this test specifically: #28010 #28427 #33997

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/313/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov  3 21:21:01.286: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/314/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov  4 04:41:18.119: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/315/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov  4 14:01:47.544: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/316/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov  4 14:01:47.544: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/326/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-8zzf\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-bmzm\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-v2xc\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-8zzf" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-bmzm" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-v2xc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Expected
    <string>: 
to equal
    <string>: 4180067776253417990
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:458

Issues about this test specifically: #28010 #28427 #33997

@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/440/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 12 01:14:44.200: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/441/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 12 03:03:05.343: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/442/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 12 14:03:47.757: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/443/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 12 17:27:02.024: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821199400>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-u6cva jenkins-e2e-minion-group-2my8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-10 10:55:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-11-12 17:32:33 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-10 10:55:44 -0800 PST  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-u6cva jenkins-e2e-minion-group-2my8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-10 10:55:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-11-12 17:32:33 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-10 10:55:44 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/444/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 12 22:43:24.217: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/445/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 13 09:31:18.526: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/446/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 13 14:51:46.673: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/448/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1342
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.190.12 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/.kube/config taint nodes jenkins-e2e-master kubernetes.io/e2e-taint-key-e2c10506-aa55-11e6-ad52-42010af00025=testing-taint-value:NoSchedule] []  <nil>  Unable to connect to the server: dial tcp 104.155.190.12:443: i/o timeout\n [] <nil> 0xc820deb0a0 exit status 1 <nil> true [0xc8202d04a0 0xc8202d04b8 0xc8202d0600] [0xc8202d04a0 0xc8202d04b8 0xc8202d0600] [0xc8202d04b0 0xc8202d05c0] [0xafab30 0xafab30] 0xc8210e4d80}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.155.190.12:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.190.12 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/.kube/config taint nodes jenkins-e2e-master kubernetes.io/e2e-taint-key-e2c10506-aa55-11e6-ad52-42010af00025=testing-taint-value:NoSchedule] []  <nil>  Unable to connect to the server: dial tcp 104.155.190.12:443: i/o timeout
     [] <nil> 0xc820deb0a0 exit status 1 <nil> true [0xc8202d04a0 0xc8202d04b8 0xc8202d0600] [0xc8202d04a0 0xc8202d04b8 0xc8202d0600] [0xc8202d04b0 0xc8202d05c0] [0xafab30 0xafab30] 0xc8210e4d80}:
    Command stdout:

    stderr:
    Unable to connect to the server: dial tcp 104.155.190.12:443: i/o timeout

    error:
    exit status 1

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31066 #31967 #32219 #32535

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/452/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 14 22:48:25.134: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8216c5e90>: {
        s: "1 / 30 pods in namespace \"kube-system\" are NOT in the desired state in 5m0s\nPOD                                  NODE                          PHASE   GRACE CONDITIONS\nmonitoring-influxdb-grafana-v4-dh5n6 jenkins-e2e-minion-group-2my8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-13 16:02:50 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-11-14 23:43:02 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-13 16:02:50 -0800 PST  }]\n",
    }
    1 / 30 pods in namespace "kube-system" are NOT in the desired state in 5m0s
    POD                                  NODE                          PHASE   GRACE CONDITIONS
    monitoring-influxdb-grafana-v4-dh5n6 jenkins-e2e-minion-group-2my8 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-13 16:02:50 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-11-14 23:43:02 -0800 PST ContainersNotReady containers with unready status: [influxdb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-13 16:02:50 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:226

Issues about this test specifically: #28071

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Expected error:
    <errors.aggregate | len:4, cap:4>: [
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-zpmx\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-master\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-2my8\" is not ready yet",
        },
        {
            s: "Resource usage on node \"jenkins-e2e-minion-group-76v7\" is not ready yet",
        },
    ]
    [Resource usage on node "jenkins-e2e-minion-group-zpmx" is not ready yet, Resource usage on node "jenkins-e2e-master" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-2my8" is not ready yet, Resource usage on node "jenkins-e2e-minion-group-76v7" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

@calebamiles calebamiles added this to the v1.5 milestone Nov 15, 2016
@k8s-github-robot

[FLAKE-PING] @apelisse

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/460/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.245.58 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-3pm75 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ \"\\n\" }}{{ end }}{{ end }}] []  <nil>  Unable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout\n [] <nil> 0xc8211dfbc0 exit status 1 <nil> true [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2f18 0xc8200b2fc0] [0xafaaf0 0xafaaf0] 0xc8211ef080}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.245.58 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-3pm75 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}] []  <nil>  Unable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout
     [] <nil> 0xc8211dfbc0 exit status 1 <nil> true [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2f18 0xc8200b2fc0] [0xafaaf0 0xafaaf0] 0xc8211ef080}:
    Command stdout:

    stderr:
    Unable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout

    error:
    exit status 1

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 17 03:41:18.430: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Nov 16 22:35:50.512: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-29vx:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75513856
node jenkins-e2e-minion-group-td89:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78258176
node jenkins-e2e-minion-group-vjib:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75325440
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Nov 16 23:08:22.058: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-29vx:
 container "kubelet": expected RSS memory (MB) < 83886080; got 96628736
node jenkins-e2e-minion-group-td89:
 container "kubelet": expected RSS memory (MB) < 83886080; got 100352000
node jenkins-e2e-minion-group-vjib:
 container "kubelet": expected RSS memory (MB) < 83886080; got 95354880
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Nov 16 23:58:25.170: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-29vx:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85753856
node jenkins-e2e-minion-group-td89:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87687168
node jenkins-e2e-minion-group-vjib:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84897792
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942
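A note on reading these limits: despite the "(MB)" label in the kubelet_perf output, the thresholds and observed values are raw byte counts (73400320 is exactly 70 MiB, 83886080 is exactly 80 MiB). A minimal conversion sketch using the figures from the 35-pods-per-node failure above (the helper name is illustrative, not part of the e2e suite):

```python
def to_mib(n_bytes: int) -> float:
    """Convert a raw byte count to mebibytes."""
    return n_bytes / (1024 * 1024)

# Figures from the 35-pods-per-node failure above.
limit = 73400320   # reported as: expected RSS memory (MB) < 73400320
worst = 87687168   # highest observed RSS (jenkins-e2e-minion-group-td89)

print(to_mib(limit))                  # 70.0 (MiB)
print(to_mib(worst))                  # 83.625 (MiB)
print((worst - limit) / limit * 100)  # ~19.5% over the limit
```

So the worst node in that run was roughly 19.5% over its 70 MiB kubelet RSS budget, which matches the pattern in the other memory-limit failures in this issue.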

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gci-gce-1.4/461/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Nov 16 23:08:22.058: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-29vx:
 container "kubelet": expected RSS memory (MB) < 83886080; got 96628736
node jenkins-e2e-minion-group-td89:
 container "kubelet": expected RSS memory (MB) < 83886080; got 100352000
node jenkins-e2e-minion-group-vjib:
 container "kubelet": expected RSS memory (MB) < 83886080; got 95354880
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Nov 16 23:58:25.170: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-29vx:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85753856
node jenkins-e2e-minion-group-td89:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87687168
node jenkins-e2e-minion-group-vjib:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84897792
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #28220 #32942

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.245.58 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-3pm75 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ \"\\n\" }}{{ end }}{{ end }}] []  <nil>  Unable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout\n [] <nil> 0xc8211dfbc0 exit status 1 <nil> true [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2f18 0xc8200b2fc0] [0xafaaf0 0xafaaf0] 0xc8211ef080}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.245.58 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gci-gce-1.4/workspace/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-3pm75 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}] []  <nil>  Unable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout
     [] <nil> 0xc8211dfbc0 exit status 1 <nil> true [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2ef0 0xc8200b2f20 0xc8200b2fd8] [0xc8200b2f18 0xc8200b2fc0] [0xafaaf0 0xafaaf0] 0xc8211ef080}:
    Command stdout:

    stderr:
    Unable to connect to the server: dial tcp 104.198.245.58:443: i/o timeout

    error:
    exit status 1

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Nov 17 03:41:18.430: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:279

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Nov 16 22:35:50.512: Memory usage exceeding limits:
 node jenkins-e2e-minion-group-29vx:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75513856
node jenkins-e2e-minion-group-td89:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78258176
node jenkins-e2e-minion-group-vjib:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75325440
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153

Issues about this test specifically: #26784 #28384 #31935 #33023

@dims dims removed this from the v1.5 milestone Nov 17, 2016
@calebamiles

@jessfraz, you may want to pull in some people to triage this issue; all the failures look like they're on the 1.4 branch.

cc: @saad-ali, @dims

@calebamiles calebamiles modified the milestone: v1.6 Mar 3, 2017