ci-kubernetes-e2e-gce-serial: broken test run #39406

Closed
k8s-github-robot opened this issue Jan 4, 2017 · 213 comments

Labels: kind/flake (Categorizes issue or PR as related to a flaky test.)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/274/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  3 19:14:03.862: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc4210a7700>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457
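
The message "timeout waiting 15m0s for cluster size to be 3" suggests the node count never returned to the expected 3 during the wait that precedes the kube-dns-autoscaler assertions, likely fallout from an earlier disruptive test. As a rough, hypothetical sketch of the polling pattern behind that kind of message (not the actual e2e helper; `countReadyNodes` is an assumed callback):

```go
package main

import (
	"fmt"
	"time"
)

// waitForClusterSize polls countReadyNodes (a hypothetical callback, not the
// real e2e framework helper) until the cluster reports the expected number of
// Ready, schedulable nodes or the timeout expires.
func waitForClusterSize(countReadyNodes func() (int, error), size int, interval, timeout time.Duration) error {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(interval) {
		n, err := countReadyNodes()
		if err != nil {
			return err
		}
		if n == size {
			return nil
		}
		fmt.Printf("waiting for cluster size %d, current size %d\n", size, n)
	}
	return fmt.Errorf("timeout waiting %v for cluster size to be %d", timeout, size)
}

func main() {
	// A counter that never reaches the target, reproducing the failure text.
	err := waitForClusterSize(func() (int, error) { return 2, nil }, 3,
		200*time.Millisecond, time.Second)
	fmt.Println(err)
}
```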

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421d3c450>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #36914
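
"Waiting for terminating namespaces to be deleted timed out" indicates namespaces left behind by earlier tests were still in the Terminating phase when the SchedulerPredicates setup gave up waiting for them. A hedged sketch of the core check only, using the `Namespace.Status.Phase` field from k8s.io/api/core/v1; the listing and the poll-until-empty loop around it are assumed and omitted:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// terminatingNamespaces returns the names of namespaces still stuck in the
// Terminating phase. The List() call and the surrounding poll loop are
// assumed, not shown.
func terminatingNamespaces(namespaces []v1.Namespace) []string {
	var stuck []string
	for _, ns := range namespaces {
		if ns.Status.Phase == v1.NamespaceTerminating {
			stuck = append(stuck, ns.Name)
		}
	}
	return stuck
}

func main() {
	// Fake data standing in for a real namespace listing.
	nss := []v1.Namespace{
		{Status: v1.NamespaceStatus{Phase: v1.NamespaceActive}},
		{Status: v1.NamespaceStatus{Phase: v1.NamespaceTerminating}},
	}
	nss[0].Name = "kube-system"
	nss[1].Name = "e2e-tests-rescheduler-s3clf" // example of a leftover test namespace
	fmt.Println("still terminating:", terminatingNamespaces(nss))
}
```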

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  3 20:17:46.137: Node bootstrap-e2e-minion-group-zs9l did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950
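
"Node ... did not become not-ready within 2m0s" means that, after the test cut the node off from the master, the node's NodeReady condition never left True in the apiserver's view before the timeout. A minimal sketch of that condition check, assuming the real test wraps something similar in a polling helper:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isNodeNotReady reports whether a node's NodeReady condition is anything
// other than True; this is only the core comparison, the real test polls a
// check like this via the framework's helpers.
func isNodeNotReady(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status != v1.ConditionTrue
		}
	}
	return true // no NodeReady condition reported at all
}

func main() {
	node := &v1.Node{}
	node.Name = "bootstrap-e2e-minion-group-zs9l"
	node.Status.Conditions = []v1.NodeCondition{{Type: v1.NodeReady, Status: v1.ConditionTrue}}
	// Still True: from the apiserver's point of view the partition was never observed.
	fmt.Println(node.Name, "not ready:", isNodeNotReady(node))
}
```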

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  3 16:49:48.073: Node bootstrap-e2e-minion-group-zs9l did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:161
Failed waiting for pod wrapped-volume-race-419da26b-d21d-11e6-b4ca-0242ac110009-0p2gg to enter running state
Expected error:
    <*errors.errorString | 0xc420346ee0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:388
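
"Failed waiting for pod wrapped-volume-race-... to enter running state" means the pod never reached Running (or never reported Ready) before the generic "timed out waiting for the condition" fired. A minimal sketch of the kind of per-poll predicate such a wait evaluates, with hypothetical names and the surrounding loop omitted:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// podRunningAndReady is the kind of per-poll predicate behind "enter running
// state" waits: the pod must be in the Running phase and report Ready=True.
// Not the actual e2e helper.
func podRunningAndReady(pod *v1.Pod) bool {
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &v1.Pod{}
	pod.Name = "wrapped-volume-race-example" // hypothetical pod name
	pod.Status.Phase = v1.PodPending         // e.g. still mounting volumes or pulling images
	fmt.Println(pod.Name, "running and ready:", podRunningAndReady(pod))
}
```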

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:344
Expected error:
    <*errors.errorString | 0xc421d84010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:329

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421c88810>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:459
Expected error:
    <*errors.errorString | 0xc421258850>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:425

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Previous issues for this suite: #37409 #37610 #37956

k8s-github-robot added the kind/flake and priority/P2 labels on Jan 4, 2017
@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/275/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc420dee810>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:355

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:215
Jan  4 01:52:42.926: Image puller didn't complete in 8m0s, not running resource usage test since the metrics might be adultrated
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:205

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:59
Expected error:
    <*errors.errorString | 0xc42043f770>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141

Issues about this test specifically: #29512

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4211a8ff0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420d5d540>: {
        s: "1 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                            PHASE   GRACE CONDITIONS\nnode-problem-detector-v0.1-1cwfj bootstrap-e2e-minion-group-d8qb Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:35 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:43:52 -0800 PST ContainersNotReady containers with unready status: [node-problem-detector]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:42 -0800 PST  }]\n",
    }
    1 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                            PHASE   GRACE CONDITIONS
    node-problem-detector-v0.1-1cwfj bootstrap-e2e-minion-group-d8qb Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:35 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:43:52 -0800 PST ContainersNotReady containers with unready status: [node-problem-detector]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:42 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421192000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420d5c1d0>: {
        s: "1 / 33 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                              NODE                            PHASE   GRACE CONDITIONS\nnode-problem-detector-v0.1-1cwfj bootstrap-e2e-minion-group-d8qb Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:35 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:43:52 -0800 PST ContainersNotReady containers with unready status: [node-problem-detector]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:42 -0800 PST  }]\n",
    }
    1 / 33 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                              NODE                            PHASE   GRACE CONDITIONS
    node-problem-detector-v0.1-1cwfj bootstrap-e2e-minion-group-d8qb Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:35 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:43:52 -0800 PST ContainersNotReady containers with unready status: [node-problem-detector]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-03 21:42:42 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #36914

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc420d5d670>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-fe476e72-d24a-11e6-9fbc-0242ac110002-293nh to enter running state
Expected error:
    <*errors.errorString | 0xc42043f770>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:388

Issues about this test specifically: #32945

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:374
Expected error:
    <*errors.errorString | 0xc4209b4050>: {
        s: "service verification failed for: 10.0.50.146\nexpected [service1-17qfp service1-7958t service1-rwz97]\nreceived []",
    }
    service verification failed for: 10.0.50.146
    expected [service1-17qfp service1-7958t service1-rwz97]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:353

Issues about this test specifically: #29514 #38288
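
The "service verification failed" output lists the backend pod names the test expected to answer through the service address and the empty set it actually received after the kube-proxy restart. The sketch below is only a hypothetical illustration of that style of check: it assumes backends echo their pod name at a `/hostname` path and that the ClusterIP is reachable from where the check runs, neither of which is taken from the real test.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sort"
	"strings"
	"time"
)

// verifyServiceBackends hits a service URL repeatedly, records which backend
// pods answer (assuming each backend echoes its pod name), and compares that
// set against the expected pod names.
func verifyServiceBackends(url string, expected []string, attempts int) error {
	client := &http.Client{Timeout: 2 * time.Second}
	seen := map[string]bool{}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			continue // an unreachable service counts as "no response"
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}
	received := make([]string, 0, len(seen))
	for name := range seen {
		received = append(received, name)
	}
	sort.Strings(received)
	sort.Strings(expected)
	if strings.Join(received, " ") != strings.Join(expected, " ") {
		return fmt.Errorf("service verification failed for: %s\nexpected %v\nreceived %v",
			url, expected, received)
	}
	return nil
}

func main() {
	// Shaped like the failure above; from outside the cluster the ClusterIP is
	// unreachable, so the received set stays empty and the error is returned.
	err := verifyServiceBackends("http://10.0.50.146/hostname",
		[]string{"service1-17qfp", "service1-7958t", "service1-rwz97"}, 3)
	fmt.Println(err)
}
```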

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:297
Expected error:
    <*errors.errorString | 0xc4211a8010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:282

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc421192b70>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421114010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:263

Issues about this test specifically: #37259

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  3 22:07:16.968: Node bootstrap-e2e-minion-group-p0lb did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/277/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  4 06:20:41.929: Node bootstrap-e2e-minion-group-t18l did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc42194c090>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc42194cd80>: {
        s: "Only 7 pods started out of 10",
    }
    Only 7 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  4 06:52:51.813: Node bootstrap-e2e-minion-group-m4dz did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42194c1d0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Expected error:
    <*errors.errorString | 0xc42140cc00>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  4 04:28:06.676: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/278/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan  4 12:54:21.158: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:374
Expected error:
    <*errors.errorString | 0xc421384450>: {
        s: "service verification failed for: 10.0.238.171\nexpected [service1-qxm4r service1-rjzrr service1-wz6z2]\nreceived []",
    }
    service verification failed for: 10.0.238.171
    expected [service1-qxm4r service1-rjzrr service1-wz6z2]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:353

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc420645df0>: {
        s: "err waiting for DNS replicas to satisfy 4, got 1: timed out waiting for the condition",
    }
    err waiting for DNS replicas to satisfy 4, got 1: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:97

Issues about this test specifically: #36457

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4213b61a0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28853 #31585

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan  4 13:25:01.669: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:531
Expected error:
    <*errors.errorString | 0xc4203a16c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:512

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc420fcc030>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a file written to the mount before kubelet restart is stat-able after restart. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc4203a16c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  4 10:47:26.861: Node bootstrap-e2e-minion-group-tht8 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Expected error:
    <*errors.errorString | 0xc4213b61b0>: {
        s: "service verification failed for: 10.0.184.156\nexpected [service1-7vg7z service1-dr42w service1-zmrlt]\nreceived []",
    }
    service verification failed for: 10.0.184.156
    expected [service1-7vg7z service1-dr42w service1-zmrlt]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:396

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:245
Jan  4 11:16:04.940: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:236

Issues about this test specifically: #27470 #30156 #34304 #37620

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/279/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4210585d0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #29516

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  4 14:41:30.689: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:893

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420901020>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #35279

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc4208d0b70>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:355

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/firewall.go:116
Expected error:
    <*errors.errorString | 0xc4203adb20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:627

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-0a23091a-d2d1-11e6-b6e7-0242ac110004-26jps to enter running state
Expected error:
    <*errors.errorString | 0xc4203adb20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:388

Issues about this test specifically: #32945

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42098aef0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28071

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:126
Expected error:
    <*errors.errorString | 0xc4206429e0>: {
        s: "error waiting for node bootstrap-e2e-minion-group-55ff boot ID to change: timed out waiting for the condition",
    }
    error waiting for node bootstrap-e2e-minion-group-55ff boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:100

Issues about this test specifically: #26744 #26929 #38552
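
The restart test records each node's boot ID (Node.Status.NodeInfo.BootID, which changes across a reboot) before restarting the nodes and then waits for every node to report a new value; "error waiting for node ... boot ID to change" means one node never did within the timeout. A minimal sketch of that comparison, assuming the reboot and the polling happen elsewhere:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// bootIDChanged reports whether a node now advertises a different boot ID
// than the one recorded before the restart. Only the comparison is shown.
func bootIDChanged(node *v1.Node, oldBootID string) bool {
	current := node.Status.NodeInfo.BootID
	return current != "" && current != oldBootID
}

func main() {
	node := &v1.Node{}
	node.Name = "bootstrap-e2e-minion-group-55ff"
	node.Status.NodeInfo.BootID = "11111111-1111-1111-1111-111111111111" // example value only
	fmt.Println(node.Name, "rebooted:", bootIDChanged(node, "11111111-1111-1111-1111-111111111111"))
}
```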

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan  4 19:03:31.161: Couldn't delete ns: "e2e-tests-rescheduler-s3clf": namespace e2e-tests-rescheduler-s3clf was not deleted with limit: timed out waiting for the condition, pods remaining: 5, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-rescheduler-s3clf was not deleted with limit: timed out waiting for the condition, pods remaining: 5, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:161
Failed waiting for pod wrapped-volume-race-d9b1760c-d2d3-11e6-b6e7-0242ac110004-6cvb2 to enter running state
Expected error:
    <*errors.errorString | 0xc4203adb20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:388

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/280/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42135a130>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #35279

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  4 21:02:51.987: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc4215d2b70>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:355

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Jan  5 00:25:30.348: Node bootstrap-e2e-minion-group-xczd did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:215
Jan  4 23:57:31.889: Image puller didn't complete in 8m0s, not running resource usage test since the metrics might be adultrated
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:205

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc420d4e010>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  4 20:23:34.411: Node bootstrap-e2e-minion-group-sg5j did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:279
error waiting for daemon pod to not be running on nodes
Expected error:
    <*errors.errorString | 0xc4203593e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:274

Issues about this test specifically: #30441

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc421a8a030>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421a00000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/284/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  5 11:53:34.904: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  5 11:52:03.722: Node bootstrap-e2e-minion-group-7q6c did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  5 11:22:01.620: Node bootstrap-e2e-minion-group-7q6c did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421859240>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #36914

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:321
Expected error:
    <*errors.errorString | 0xc4206ae370>: {
        s: "error getting SSH client to jenkins@104.198.186.67:22: 'dial tcp 104.198.186.67:22: getsockopt: connection timed out'",
    }
    error getting SSH client to jenkins@104.198.186.67:22: 'dial tcp 104.198.186.67:22: getsockopt: connection timed out'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:63

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc420c00f80>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:362

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  5 12:36:04.289: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc4203d3ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:301
Expected error:
    <*errors.errorString | 0xc420c000b0>: {
        s: "Error while waiting for replication controller daemonrestart10-013265ae-d37a-11e6-b4aa-0242ac110007 pods to be running: Timeout while waiting for pods with labels \"name=daemonrestart10-013265ae-d37a-11e6-b4aa-0242ac110007\" to be running",
    }
    Error while waiting for replication controller daemonrestart10-013265ae-d37a-11e6-b4aa-0242ac110007 pods to be running: Timeout while waiting for pods with labels "name=daemonrestart10-013265ae-d37a-11e6-b4aa-0242ac110007" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:300

Issues about this test specifically: #31407

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:206
error waiting for daemon pod to not be running on nodes
Expected error:
    <*errors.errorString | 0xc4203d3ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:201

Issues about this test specifically: #35277

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  5 12:29:35.727: Node bootstrap-e2e-minion-group-7q6c did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:150
Expected error:
    <*errors.errorString | 0xc4203d3ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:124

Issues about this test specifically: #31428

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:245
Jan  5 13:42:00.800: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:236

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Jan  5 11:56:02.015: Node bootstrap-e2e-minion-group-7q6c did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37259

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/285/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  5 19:41:12.877: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc421728420>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Jan  5 16:32:04.521: Node bootstrap-e2e-minion-group-3895 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37259

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  5 16:46:42.098: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421581d60>: {
        s: "1 / 5 pods in namespace \"kube-system\" are NOT in SUCCESS state in 5m0s\nPOD                                              NODE                            PHASE   GRACE CONDITIONS\ne2e-image-puller-bootstrap-e2e-minion-group-2cn5 bootstrap-e2e-minion-group-2cn5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:54 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:50 -0800 PST  }]\n",
    }
    1 / 5 pods in namespace "kube-system" are NOT in SUCCESS state in 5m0s
    POD                                              NODE                            PHASE   GRACE CONDITIONS
    e2e-image-puller-bootstrap-e2e-minion-group-2cn5 bootstrap-e2e-minion-group-2cn5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:54 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  5 19:01:00.175: Node bootstrap-e2e-minion-group-kfbl did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420fac310>: {
        s: "1 / 5 pods in namespace \"kube-system\" are NOT in SUCCESS state in 5m0s\nPOD                                              NODE                            PHASE   GRACE CONDITIONS\ne2e-image-puller-bootstrap-e2e-minion-group-2cn5 bootstrap-e2e-minion-group-2cn5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:54 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:50 -0800 PST  }]\n",
    }
    1 / 5 pods in namespace "kube-system" are NOT in SUCCESS state in 5m0s
    POD                                              NODE                            PHASE   GRACE CONDITIONS
    e2e-image-puller-bootstrap-e2e-minion-group-2cn5 bootstrap-e2e-minion-group-2cn5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:54 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421580ab0>: {
        s: "1 / 5 pods in namespace \"kube-system\" are NOT in SUCCESS state in 5m0s\nPOD                                              NODE                            PHASE   GRACE CONDITIONS\ne2e-image-puller-bootstrap-e2e-minion-group-2cn5 bootstrap-e2e-minion-group-2cn5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:54 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:50 -0800 PST  }]\n",
    }
    1 / 5 pods in namespace "kube-system" are NOT in SUCCESS state in 5m0s
    POD                                              NODE                            PHASE   GRACE CONDITIONS
    e2e-image-puller-bootstrap-e2e-minion-group-2cn5 bootstrap-e2e-minion-group-2cn5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:43 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:54 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:15:50 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  5 20:13:46.110: Node bootstrap-e2e-minion-group-kpbs did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc4215b75e0>: {
        s: "Only 7 pods started out of 10",
    }
    Only 7 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #31407

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  5 17:06:40.458: Node bootstrap-e2e-minion-group-j7kx did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:479
Expected error:
    <*errors.errorString | 0xc420315310>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:450

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc4212b42c0>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457
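
The "timeout waiting 15m0s for cluster size to be 3" errors (and the 10m0s variants from resize_nodes.go) mean a wait on the number of Ready nodes never converged to the expected count. A hedged sketch of the count such a wait evaluates on each poll, again using current client-go paths and an assumed helper name:

    package e2eutil

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // readyNodeCount counts nodes whose Ready condition is True; cluster-size
    // waits compare a count like this against the expected size until timeout.
    func readyNodeCount(cs kubernetes.Interface) (int, error) {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return 0, err
        }
        ready := 0
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
                    ready++
                }
            }
        }
        return ready, nil
    }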

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/286/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420a66360>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #35279
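
"Waiting for terminating namespaces to be deleted timed out" at scheduler_predicates.go:81 indicates namespaces left behind by earlier disruptive tests were still in the Terminating phase when this test's setup ran. A hedged sketch of how such leftovers can be listed, assuming current client-go paths (not the framework's own wait):

    package e2eutil

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // terminatingNamespaces returns the names of namespaces stuck in the
    // Terminating phase, which is what the predicate tests wait to drain.
    func terminatingNamespaces(cs kubernetes.Interface) ([]string, error) {
        nsList, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var stuck []string
        for _, ns := range nsList.Items {
            if ns.Status.Phase == v1.NamespaceTerminating {
                stuck = append(stuck, ns.Name)
            }
        }
        return stuck, nil
    }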

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420efe000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42129e1e0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:126
Expected error:
    <*errors.errorString | 0xc420f73f20>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.011990544s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.011990544s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:82

Issues about this test specifically: #26744 #26929 #38552

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:157
Expected
    <int>: 0
not to be zero-valued
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:198

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  5 22:35:00.080: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"bootstrap-e2e-minion-group-11dn\" is not ready yet",
        },
    ]
    Resource usage on node "bootstrap-e2e-minion-group-11dn" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc420d326d0>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan  5 22:31:28.932: Couldn't delete ns: "e2e-tests-rescheduler-jgp2q": namespace e2e-tests-rescheduler-jgp2q was not deleted with limit: timed out waiting for the condition, pods remaining: 16, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-rescheduler-jgp2q was not deleted with limit: timed out waiting for the condition, pods remaining: 16, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:150
Expected error:
    <*errors.errorString | 0xc420347240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:124

Issues about this test specifically: #31428

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/287/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420d9ecc0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  6 02:47:01.421: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan  6 03:16:17.481: Couldn't delete ns: "e2e-tests-rescheduler-3gx61": namespace e2e-tests-rescheduler-3gx61 was not deleted with limit: timed out waiting for the condition, pods remaining: 14, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-rescheduler-3gx61 was not deleted with limit: timed out waiting for the condition, pods remaining: 14, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc42037e580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4216592d0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421062000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #31918

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <*errors.errorString | 0xc4217edcb0>: {
        s: "Only 309 pods started out of 400",
    }
    Only 309 pods started out of 400
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:79

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  6 04:59:56.257: Node bootstrap-e2e-minion-group-w2b8 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan  6 03:31:42.963: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc420298600>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a file written to the mount before kubelet restart is stat-able after restart. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc42037e580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Jan  6 07:29:35.434: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/288/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421418000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #35279

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:150
Expected error:
    <*errors.errorString | 0xc4203a5620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:124

Issues about this test specifically: #31428

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Jan  6 08:51:43.390: Node bootstrap-e2e-minion-group-vv2h did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420ffe000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc421a30e90>: {
        s: "Only 9 pods started out of 10",
    }
    Only 9 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #31407

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/firewall.go:116
Jan  6 11:57:06.811: Timeout waiting for service "firewall-test-loadbalancer" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:453

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:245
Jan  6 09:14:01.144: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:236

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  6 08:49:15.737: Node bootstrap-e2e-minion-group-vv2h did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc420c37a00>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a file written to the mount before kubelet restart is stat-able after restart. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc4203a5620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/289/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  6 16:50:11.806: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421574010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:328

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Jan  6 14:14:31.655: Node bootstrap-e2e-minion-group-xm7c did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37259

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc421252be0>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:362

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc4216a93e0>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  6 14:08:41.995: Node bootstrap-e2e-minion-group-xm7c did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421252120>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  6 14:54:21.953: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc4203d1650>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  6 15:29:39.699: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:893

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc4211f6060>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/290/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4214a4000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420df2ca0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc420abeec0>: {
        s: "Only 9 pods started out of 10",
    }
    Only 9 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan  6 23:08:19.520: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  6 18:22:43.812: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"bootstrap-e2e-minion-group-2j8t\" is not ready yet",
        },
    ]
    Resource usage on node "bootstrap-e2e-minion-group-2j8t" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421277720>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #34223

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a file written to the mount before kubelet restart is stat-able after restart. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc4203a35f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan  6 22:52:07.142: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc420f912c0>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  6 21:24:34.682: Node bootstrap-e2e-minion-group-t2qx did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/291/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc42038d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan  7 01:01:13.328: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4212a6a30>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #31918

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  6 23:37:55.344: Node bootstrap-e2e-minion-group-k5tl did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:479
Expected error:
    <*errors.errorString | 0xc42038d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:450

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc420768030>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:404
Expected error:
    <*errors.errorString | 0xc42174d290>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.013958691s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.013958691s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:399

Issues about this test specifically: #37373

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-60170df9-d4b1-11e6-8f66-0242ac110008-2108k to enter running state
Expected error:
    <*errors.errorString | 0xc42038d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:388

Issues about this test specifically: #32945

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <*errors.errorString | 0xc420c2f310>: {
        s: "Only 302 pods started out of 400",
    }
    Only 302 pods started out of 400
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:79

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4212a7290>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan  7 01:33:19.523: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:206
error waiting for daemon pod to not be running on nodes
Expected error:
    <*errors.errorString | 0xc42038d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:201

Issues about this test specifically: #35277

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  6 23:58:39.227: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:279
error waiting for daemon pod to not be running on nodes
Expected error:
    <*errors.errorString | 0xc42038d420>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:274

Issues about this test specifically: #30441

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/292/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  7 05:44:18.185: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:221
Expected error:
    <*errors.errorString | 0xc420f1b620>: {
        s: "Pod name reserve-all-cpu: Gave up waiting 5m0s for 100 pods to come up",
    }
    Pod name reserve-all-cpu: Gave up waiting 5m0s for 100 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:54

Issues about this test specifically: #29933 #34111 #38765

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4206ead80>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  7 04:46:40.247: Node bootstrap-e2e-minion-group-5hqt did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc420d28ff0>: {
        s: "Only 9 pods started out of 10",
    }
    Only 9 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Expected error:
    <*errors.errorString | 0xc421072060>: {
        s: "service verification failed for: 10.0.56.6\nexpected [service1-dhd6j service1-dlmkd service1-vsc2p]\nreceived []",
    }
    service verification failed for: 10.0.56.6
    expected [service1-dhd6j service1-dlmkd service1-vsc2p]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:396

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:245
Jan  7 06:37:45.869: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:236

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"bootstrap-e2e-minion-group-5hqt\" is not ready yet",
        },
    ]
    Resource usage on node "bootstrap-e2e-minion-group-5hqt" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:479
Expected error:
    <*errors.errorString | 0xc4203c3500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:450

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc420de8cb0>: {
        s: "Only 7 pods started out of 10",
    }
    Only 7 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #31407

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:206
error waiting for daemon pod to not be running on nodes
Expected error:
    <*errors.errorString | 0xc4203c3500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:201

Issues about this test specifically: #35277

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan  7 09:16:31.295: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc42131f6d0>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4206eb520>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28071

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc4206eb170>: {
        s: "Only 9 pods started out of 10",
    }
    Only 9 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/293/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4218542d0>: {
        s: "1 / 4 pods in namespace \"kube-system\" are NOT in SUCCESS state in 5m0s\nPOD                                              NODE                            PHASE   GRACE CONDITIONS\ne2e-image-puller-bootstrap-e2e-minion-group-4knl bootstrap-e2e-minion-group-4knl Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 10:20:31 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 10:20:31 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 10:20:38 -0800 PST  }]\n",
    }
    1 / 4 pods in namespace "kube-system" are NOT in SUCCESS state in 5m0s
    POD                                              NODE                            PHASE   GRACE CONDITIONS
    e2e-image-puller-bootstrap-e2e-minion-group-4knl bootstrap-e2e-minion-group-4knl Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 10:20:31 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-07 10:20:31 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-07 10:20:38 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:96

Issues about this test specifically: #35279

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  7 12:46:30.798: Node bootstrap-e2e-minion-group-bgdh did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950
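
"Node ... did not become not-ready within 2m0s" (network_partition.go:66) and the related "Failed to observe node ready status change to false" both mean the test blocked the node's traffic but never saw the control plane flip the node's Ready condition. A minimal polling sketch of that observation, assuming current client-go signatures (the real test watches node events; names here are illustrative):

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeNotReady polls a node until its Ready condition is no longer
// True, i.e. until the control plane has noticed the partition.
func waitForNodeNotReady(ctx context.Context, c kubernetes.Interface, nodeName string, timeout time.Duration) error {
	err := wait.PollUntilContextTimeout(ctx, 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry transient get errors
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status != corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
	if err != nil {
		return fmt.Errorf("node %s did not become not-ready within %v", nodeName, timeout)
	}
	return nil
}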

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan  7 12:24:10.554: Couldn't delete ns: "e2e-tests-rescheduler-g3jr3": namespace e2e-tests-rescheduler-g3jr3 was not deleted with limit: timed out waiting for the condition, pods remaining: 15, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-rescheduler-g3jr3 was not deleted with limit: timed out waiting for the condition, pods remaining: 15, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc4218a6060>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:404
Expected error:
    <*errors.errorString | 0xc42193a950>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.009824623s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.009824623s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:399

Issues about this test specifically: #37373

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:215
Jan  7 12:40:21.695: Image puller didn't complete in 8m0s, not running resource usage test since the metrics might be adultrated
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:205

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:531
Expected error:
    <*errors.errorString | 0xc4203793d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:512

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  7 09:53:34.808: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  7 12:43:59.417: Node bootstrap-e2e-minion-group-nwr1 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42111ca10>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  7 09:50:12.576: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420fee010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:263

Issues about this test specifically: #37259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421968080>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:215
Jan  7 12:32:10.820: Image puller didn't complete in 8m0s, not running resource usage test since the metrics might be adultrated
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:205

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/294/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Jan  7 17:45:43.364: Node bootstrap-e2e-minion-group-jc76 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37259

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:161
Failed waiting for pod wrapped-volume-race-a09bcc34-d551-11e6-af41-0242ac110009-47532 to enter running state
Expected error:
    <*errors.errorString | 0xc42038e120>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:388

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc420dffd20>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  7 15:20:39.576: Node bootstrap-e2e-minion-group-jc76 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  7 16:43:41.921: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:404
Expected error:
    <*errors.errorString | 0xc42195e610>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.008965537s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.008965537s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:399

Issues about this test specifically: #37373

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  7 19:44:18.631: Node bootstrap-e2e-minion-group-vqrx did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42160c9c0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #31918

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:479
Expected error:
    <*errors.errorString | 0xc42038e120>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:450

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:531
Expected error:
    <*errors.errorString | 0xc42038e120>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:512

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  7 19:47:09.777: Node bootstrap-e2e-minion-group-cs6d did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/295/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:404
Expected error:
    <*errors.errorString | 0xc421970550>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.00802731s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.00802731s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:399

Issues about this test specifically: #37373

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  7 23:22:32.772: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:316
Jan  7 23:57:03.960: Node bootstrap-e2e-minion-group-bxcw did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37259

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:245
Jan  7 21:47:35.321: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:236

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  7 20:47:40.046: Node bootstrap-e2e-minion-group-bxcw did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:206
error waiting for daemon pod to not be running on nodes
Expected error:
    <*errors.errorString | 0xc4203ab3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:201

Issues about this test specifically: #35277

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc421a84ed0>: {
        s: "Only 8 pods started out of 10",
    }
    Only 8 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/firewall.go:116
Jan  7 23:53:26.611: Timeout waiting for service "firewall-test-loadbalancer" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:453
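
"Timeout waiting for service "firewall-test-loadbalancer" to have a load balancer" (service_util.go:453) means the cloud provider never populated the Service's status.loadBalancer.ingress. A minimal sketch of that wait, again with illustrative names and current client-go signatures rather than the framework's actual helper:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIngress polls a Service until the cloud provider has
// filled in status.loadBalancer.ingress, returning the populated object.
func waitForLoadBalancerIngress(ctx context.Context, c kubernetes.Interface, namespace, name string, timeout time.Duration) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollUntilContextTimeout(ctx, 10*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			s, err := c.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry transient get errors
			}
			if len(s.Status.LoadBalancer.Ingress) == 0 {
				return false, nil // no external IP / forwarding rule yet
			}
			svc = s
			return true, nil
		})
	return svc, err
}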

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  7 20:36:52.652: Node bootstrap-e2e-minion-group-mqvw did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:374
Expected error:
    <*errors.errorString | 0xc4219d7030>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:334

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42115a390>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421d82670>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28853 #31585

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/296/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  8 04:13:39.995: Node bootstrap-e2e-minion-group-1sr9 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/etcd_failure.go:67
Expected error:
    <*errors.errorString | 0xc4203a5450>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141

Issues about this test specifically: #29444

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc420aeee30>: {
        s: "Only 8 pods started out of 10",
    }
    Only 8 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #31407

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.errorString | 0xc42187a910>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:362

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:297
Expected error:
    <*errors.errorString | 0xc4219b6020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:282

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/firewall.go:116
Jan  8 03:04:27.637: Timeout waiting for service "firewall-test-loadbalancer" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:453

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:215
Jan  8 02:18:13.876: Image puller didn't complete in 8m0s, not running resource usage test since the metrics might be adultrated
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:205

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421618050>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc420843560>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc4219ea400>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4217f0000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:215
Jan  8 04:11:18.634: Image puller didn't complete in 8m0s, not running resource usage test since the metrics might be adultrated
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:205

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  8 03:55:21.330: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc4219eaac0>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #28657 #30519 #33878

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/297/
Multiple broken tests:

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc421aeae50>: {
        s: "Only 7 pods started out of 10",
    }
    Only 7 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc4203ab520>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  8 07:00:48.170: Node bootstrap-e2e-minion-group-105b did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:404
Jan  8 07:31:06.212: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:893

Issues about this test specifically: #37373

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan  8 08:11:33.502: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421aeb290>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4219003b0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/298/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421a20300>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420d04050>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:248
Jan  8 12:00:38.186: Failed to observe node ready status change to false
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:90

Issues about this test specifically: #36794

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  8 12:08:10.209: Node bootstrap-e2e-minion-group-sdmf did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4211a4000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #36914

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc4216fa380>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc421878a50>: {
        s: "Only 5 pods started out of 10",
    }
    Only 5 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Expected error:
    <*errors.errorString | 0xc420d04070>: {
        s: "error while waiting for pods gone rc: timed out waiting for the condition",
    }
    error while waiting for pods gone rc: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:308

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/299/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:360
Jan  8 17:04:36.106: Node bootstrap-e2e-minion-group-8q2n did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #37479

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:150
Expected error:
    <*errors.errorString | 0xc420348cc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:124

Issues about this test specifically: #31428

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc420f32080>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:479
Expected error:
    <*errors.errorString | 0xc420348cc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:450

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan  8 20:52:39.060: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc420c20030>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc420a2eab0>: {
        s: "Only 3 pods started out of 5",
    }
    Only 3 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:349

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421260000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "Resource usage on node \"bootstrap-e2e-minion-group-4grh\" is not ready yet",
        },
        {
            s: "Resource usage on node \"bootstrap-e2e-minion-group-8q2n\" is not ready yet",
        },
    ]
    [Resource usage on node "bootstrap-e2e-minion-group-4grh" is not ready yet, Resource usage on node "bootstrap-e2e-minion-group-8q2n" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:106

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:404
Expected error:
    <*errors.errorString | 0xc4206745f0>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.007636367s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.007636367s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:399

Issues about this test specifically: #37373

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421260000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:434
Jan  8 17:02:09.033: Node bootstrap-e2e-minion-group-8q2n did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/300/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc4212f4790>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc42128ccf0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:245
Jan  8 22:35:50.531: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:236

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] PersistentVolumes [Disruptive] when kubelet restarts Should test that a file written to the mount before kubelet restart is stat-able after restart. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:129
Expected error:
    <*errors.errorString | 0xc4203ad330>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:212

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:161
Failed waiting for pod wrapped-volume-race-574907a6-d638-11e6-8958-0242ac110003-1dqtl to enter running state
Expected error:
    <*errors.errorString | 0xc4203ad330>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:388

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420176000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:479
Expected error:
    <*errors.errorString | 0xc4203ad330>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:450

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc421524c50>: {
        s: "Only 8 pods started out of 10",
    }
    Only 8 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #31407

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan  8 22:52:52.080: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  9 00:27:11.925: Node bootstrap-e2e-minion-group-jq9c did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/301/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:477
Jan  9 08:01:26.061: Node bootstrap-e2e-minion-group-4s11 did not become not-ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:66

Issues about this test specifically: #36950

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420d04360>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:321
Expected error:
    <*errors.errorString | 0xc4211140f0>: {
        s: "error getting SSH client to jenkins@104.198.161.165:22: 'dial tcp 104.198.161.165:22: getsockopt: connection timed out'",
    }
    error getting SSH client to jenkins@104.198.161.165:22: 'dial tcp 104.198.161.165:22: getsockopt: connection timed out'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:63

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420e8aa80>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420d04000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #29516

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc420d3a050>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Jan  9 05:54:57.967: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  9 03:56:11.867: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
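
The trailing "<nil>" in these apiserver-restart failures is what Go's fmt verbs print for a nil error value, which means the inner restart error was empty when the message was composed. A minimal sketch of how that exact text arises, assuming a hypothetical restartApiserver helper rather than the e2e framework's own:

    package main

    import "fmt"

    // restartApiserver is a hypothetical stand-in that fails without capturing
    // an underlying error, mirroring the shape of the log line above.
    func restartApiserver() error {
        var inner error // nil: no detail was recorded
        return fmt.Errorf("couldn't restart apiserver: %v", inner)
    }

    func main() {
        if err := restartApiserver(); err != nil {
            // Prints: error restarting apiserver: couldn't restart apiserver: <nil>
            fmt.Printf("error restarting apiserver: %v\n", err)
        }
    }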

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:143
Expected error:
    <*errors.errorString | 0xc420648260>: {
        s: "Error waiting for 422 pods to be running - probably a timeout: Timeout while waiting for pods with labels \"startPodsID=ce522787-d662-11e6-a5a1-0242ac110006\" to be running",
    }
    Error waiting for 422 pods to be running - probably a timeout: Timeout while waiting for pods with labels "startPodsID=ce522787-d662-11e6-a5a1-0242ac110006" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:134

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan  9 05:38:45.687: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27397 #27917 #31592

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/302/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:404
Expected error:
    <*errors.errorString | 0xc420efb510>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.009158296s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.009158296s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:399

Issues about this test specifically: #37373

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Jan  9 09:13:33.569: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:271
Expected error:
    <*errors.errorString | 0xc4212d3470>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:262

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420b22000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc420930000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:531
Expected error:
    <*errors.errorString | 0xc420327560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:512

Failed: [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:248
Expected error:
    <*errors.errorString | 0xc4208e6a90>: {
        s: "Only 9 pods started out of 10",
    }
    Only 9 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:216

Issues about this test specifically: #31407

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan  9 10:22:48.408: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan  9 11:51:24.839: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:287

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/743/
Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:244
Expected error:
    <*errors.errorString | 0xc420c4e160>: {
        s: "1 / 3 pods in namespace \"kube-system\" are NOT in SUCCESS state in 5m0s\nPOD                                              NODE                            PHASE   GRACE CONDITIONS\ne2e-image-puller-bootstrap-e2e-minion-group-pp41 bootstrap-e2e-minion-group-pp41 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-03 19:26:37 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-03 19:26:40 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-03 19:26:37 -0800 PST  }]\n",
    }
    1 / 3 pods in namespace "kube-system" are NOT in SUCCESS state in 5m0s
    POD                                              NODE                            PHASE   GRACE CONDITIONS
    e2e-image-puller-bootstrap-e2e-minion-group-pp41 bootstrap-e2e-minion-group-pp41 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-03 19:26:37 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-03 19:26:40 -0800 PST ContainersNotReady containers with unready status: [nethealth-check]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-03 19:26:37 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:241

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:129
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203be4e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:118

Issues about this test specifically: #31428
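
The generic "timed out waiting for the condition" text in these daemon-set failures is the standard error returned by the wait helpers in k8s.io/apimachinery when a polling deadline expires, so the message itself says nothing about which condition stayed false. A minimal sketch of that pattern, assuming a hypothetical readiness check passed in as a plain function:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForCondition polls check until it reports done or the timeout expires.
    // On expiry wait.Poll returns wait.ErrWaitTimeout, whose text is exactly
    // "timed out waiting for the condition", the message seen in the failures above.
    func waitForCondition(interval, timeout time.Duration, check func() (done bool, err error)) error {
        return wait.Poll(interval, timeout, check)
    }

    func main() {
        // A condition that never becomes true, just to surface the error text.
        err := waitForCondition(10*time.Millisecond, 50*time.Millisecond,
            func() (bool, error) { return false, nil })
        fmt.Println(err) // timed out waiting for the condition
    }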

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <*errors.errorString | 0xc421ebd370>: {
        s: "4 containers failed which is more than allowed 3",
    }
    4 containers failed which is more than allowed 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:79

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:234
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203be4e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:221

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc421eac650>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:262
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203be4e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:246

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  3 19:10:59.226: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/744/
Multiple broken tests:

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:262
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203a1250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:246

Failed: [k8s.io] PersistentVolumes [Volume][Serial] [k8s.io] PersistentVolumes:GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:394
Expected error:
    <*errors.errorString | 0xc4203a1250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:389

Failed: [k8s.io] Daemon set [Serial] should retry creating failed daemon pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:234
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203a1250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:221

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:291
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203a1250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:274

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:54
Expected error:
    <*errors.errorString | 0xc421a9c7f0>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:409

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  4 02:18:51.893: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-m4hm:
 container "kubelet": expected RSS memory (MB) < 104857600; got 108228608
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
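
The numbers in this memory check are raw bytes despite the "(MB)" label: 104857600 is 100 * 1024 * 1024, so the assertion is effectively a 100 MiB RSS cap on the kubelet container. A small sketch of that comparison with the values from the failure above (the names are illustrative, not the e2e framework's own):

    package main

    import "fmt"

    // memoryLimitBytes mirrors the cap seen in the failure text:
    // 100 * 1024 * 1024 = 104857600 bytes, printed without a unit.
    const memoryLimitBytes = 100 * 1024 * 1024

    func exceedsLimit(rssBytes uint64) bool {
        // The log says "expected RSS memory (MB) < limit", so anything at or
        // above the limit counts as a failure.
        return rssBytes >= memoryLimitBytes
    }

    func main() {
        got := uint64(108228608) // value reported for the "kubelet" container above
        fmt.Printf("expected RSS memory (MB) < %d; got %d (over by %d bytes)\n",
            memoryLimitBytes, got, got-memoryLimitBytes)
        fmt.Println("exceeds limit:", exceedsLimit(got)) // true
    }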

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203a1250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:65
Expected error:
    <*errors.errorString | 0xc421aa07f0>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:396

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:296
Expected error:
    <*errors.errorString | 0xc42121a030>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:289

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
Expected error:
    <*errors.errorString | 0xc420b70dc0>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.008786296s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.008786296s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:81

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:129
error waiting for daemon pod to start
Expected error:
    <*errors.errorString | 0xc4203a1250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:118

Issues about this test specifically: #31428

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:159
Expected error:
    <*errors.errorString | 0xc4214c2110>: {
        s: "timeout waiting 5m0s for appropriate cluster size",
    }
    timeout waiting 5m0s for appropriate cluster size
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:139

Issues about this test specifically: #36457

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc421a9cdb0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  4 01:47:59.254: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:44
Expected error:
    <*errors.errorString | 0xc420d50f60>: {
        s: "Only 4 pods started out of 5",
    }
    Only 4 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:402

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: DiffResources {e2e.go}

Error: 3 leaked resources
[ firewall-rules ]
+k8s-fw-abc74b78f00db11e7be6c42010af0000  bootstrap-e2e  0.0.0.0/1,128.0.0.0/1  tcp:29999                               bootstrap-e2e-minion
[ forwarding-rules ]
+abc74b78f00db11e7be6c42010af0000  us-central1  146.148.101.203  TCP          us-central1/targetPools/abc74b78f00db11e7be6c42010af0000
[ target-pools ]
+abc74b78f00db11e7be6c42010af0000  us-central1                            abc74b78f00db11e7be6c42010af0000

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/745/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [Volume][Serial] [k8s.io] PersistentVolumes:GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:394
Expected error:
    <*errors.errorString | 0xc4203bd1e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:389

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  4 08:44:26.029: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  4 09:17:16.009: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-f4k8:
 container "kubelet": expected RSS memory (MB) < 104857600; got 107503616
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421962060>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc421989050>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  4 07:17:51.501: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/746/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc420e32e30>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421888130>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:451
Expected
    <bool>: false
to equal
    <bool>: true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:5082

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  4 14:26:59.004: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  4 14:58:40.760: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-jwc6:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105877504
node bootstrap-e2e-minion-group-f9dk:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105824256
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/747/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc42124a1b0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc420f58070>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  4 17:45:28.788: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/748/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc421729410>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  4 23:57:50.154: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-lhg5:
 container "kubelet": expected RSS memory (MB) < 104857600; got 104878080
node bootstrap-e2e-minion-group-mbfj:
 container "kubelet": expected RSS memory (MB) < 104857600; got 107655168
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421b5e060>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  5 00:41:51.775: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc421b5e500>: {
        s: "2 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                            PHASE   GRACE CONDITIONS\nkube-dns-2233971047-bc63p bootstrap-e2e-minion-group-lhg5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:04:21 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-05 01:45:47 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:04:21 -0800 PST  }]\nkube-dns-2233971047-g8zw1 bootstrap-e2e-minion-group-mbfj Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:26:59 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-05 01:45:49 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:26:59 -0800 PST  }]\n",
    }
    2 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                            PHASE   GRACE CONDITIONS
    kube-dns-2233971047-bc63p bootstrap-e2e-minion-group-lhg5 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:04:21 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-05 01:45:47 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:04:21 -0800 PST  }]
    kube-dns-2233971047-g8zw1 bootstrap-e2e-minion-group-mbfj Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:26:59 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-05 01:45:49 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-04 23:26:59 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  4 21:23:12.670: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/749/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <*errors.errorString | 0xc422dc5ac0>: {
        s: "4 containers failed which is more than allowed 3",
    }
    4 containers failed which is more than allowed 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:79

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421dc6050>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc42165b3b0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc4207e4730>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                            PHASE   GRACE CONDITIONS\nkube-dns-2233971047-l4htd bootstrap-e2e-minion-group-zbxs Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-05 03:49:34 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-05 04:15:21 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-05 03:49:34 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                            PHASE   GRACE CONDITIONS
    kube-dns-2233971047-l4htd bootstrap-e2e-minion-group-zbxs Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-05 03:49:34 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-05 04:15:21 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-05 03:49:34 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  5 04:24:06.490: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259
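
The "eventually evict pod with finite tolerations" case exercises NoExecute taints combined with tolerationSeconds: a pod that tolerates the taint only for a bounded time should still be evicted once that window passes, and "Pod wasn't evicted" means that never happened. A hedged sketch of such a toleration using the current k8s.io/api/core/v1 types (the key, value, and 60-second window are illustrative, not necessarily what the test configures):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // Tolerate a NoExecute taint for 60 seconds; after that the taint
        // manager is expected to evict the pod.
        seconds := int64(60)
        tol := v1.Toleration{
            Key:               "kubernetes.io/e2e-evict-taint-key", // illustrative key
            Operator:          v1.TolerationOpEqual,
            Value:             "evictTaintVal",
            Effect:            v1.TaintEffectNoExecute,
            TolerationSeconds: &seconds,
        }
        fmt.Printf("%+v\n", tol)
    }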

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  5 04:39:07.478: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  5 06:27:17.769: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/750/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  5 09:23:35.578: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421a020a0>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
Mar  5 09:51:59.799: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:94

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc421fe5320>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  5 11:59:20.187: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-gcbm:
 container "kubelet": expected RSS memory (MB) < 104857600; got 106299392
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/751/
Multiple broken tests:

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc42087e0d0>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc42131e170>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Expected error:
    <*errors.errorString | 0xc421659440>: {
        s: "4 containers failed which is more than allowed 3",
    }
    4 containers failed which is more than allowed 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:79

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  5 17:25:57.172: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/752/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421d16120>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203adbb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  5 19:35:09.514: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc421d2d090>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  5 20:52:30.207: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  5 21:02:01.554: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  5 22:19:59.676: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-84s1:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105283584
node bootstrap-e2e-minion-group-w60b:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105783296
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/753/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc420b6ee30>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                            PHASE   GRACE CONDITIONS\nkube-dns-2233971047-dbgg3 bootstrap-e2e-minion-group-sh51 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-06 02:39:51 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                            PHASE   GRACE CONDITIONS
    kube-dns-2233971047-dbgg3 bootstrap-e2e-minion-group-sh51 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-06 02:39:51 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc420f8a630>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                            PHASE   GRACE CONDITIONS\nkube-dns-2233971047-dbgg3 bootstrap-e2e-minion-group-sh51 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-06 02:39:51 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                            PHASE   GRACE CONDITIONS
    kube-dns-2233971047-dbgg3 bootstrap-e2e-minion-group-sh51 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-06 02:39:51 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 00:01:29 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #34223
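
The two scheduler-predicate failures above are precondition failures rather than scheduling failures: before the test runs, every pod in kube-system must be Running and Ready, and the kube-dns pod is stuck with an unready kubedns container. A rough client-go sketch of that readiness scan (illustrative only, not the e2e framework's own helper; assumes a reachable cluster via ~/.kube/config):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List kube-system pods and report any whose PodReady condition is not True,
	// which is the state the "NOT in RUNNING and READY state" message describes.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == v1.PodReady && c.Status != v1.ConditionTrue {
				fmt.Printf("%s on %s: Ready=%s (%s)\n", p.Name, p.Spec.NodeName, c.Status, c.Reason)
			}
		}
	}
}
```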

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  6 03:01:29.401: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  6 03:58:10.523: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259
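
This test applies a NoExecute taint and expects a pod whose toleration has a finite tolerationSeconds to be evicted once that window expires; "Pod wasn't evicted" means the eviction never happened. A minimal sketch of what such a finite toleration looks like (the taint key, value, and 60-second window below are invented for illustration):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// A "finite" toleration: the pod tolerates the NoExecute taint only for
	// TolerationSeconds, after which the taint manager evicts it.
	seconds := int64(60) // hypothetical window
	tol := v1.Toleration{
		Key:               "example.com/e2e-evict", // hypothetical taint key
		Operator:          v1.TolerationOpEqual,
		Value:             "evict",
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	fmt.Printf("%+v\n", tol)
}
```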

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  5 23:55:08.018: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-xlj8:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105816064
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203fd900>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc420fda550>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
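
This rescheduler failure is a timeout while waiting for the kubernetes-dashboard replication controller's pods to return to Running. A rough client-go sketch of that kind of labeled-pod wait (waitForLabeledPodsRunning is a hypothetical helper, not the test's own code):

```go
package main

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPodsRunning polls until every pod matching the selector is
// Running, or the timeout expires; the "Timeout while waiting for pods with
// labels" error above is this kind of wait giving up.
func waitForLabeledPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != v1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	// Clientset construction omitted; a call would look like:
	// err := waitForLabeledPodsRunning(cs, "kube-system", "k8s-app=kubernetes-dashboard", 5*time.Minute)
	_ = waitForLabeledPodsRunning
}
```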

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/754/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  6 09:30:34.427: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-kh1g:
 container "kubelet": expected RSS memory (MB) < 104857600; got 111284224
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc42019b760>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                            PHASE   GRACE CONDITIONS\nkube-dns-2233971047-c11wv bootstrap-e2e-minion-group-kh1g Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 04:42:33 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-06 05:18:20 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 04:42:33 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                            PHASE   GRACE CONDITIONS
    kube-dns-2233971047-c11wv bootstrap-e2e-minion-group-kh1g Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 04:42:33 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-06 05:18:20 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-06 04:42:33 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #28019

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  6 05:30:36.497: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc4223292d0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421b8c080>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  6 08:50:33.060: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
Expected error:
    <*errors.errorString | 0xc421a39cc0>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.012920112s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 2 (20.012920112s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:81

Issues about this test specifically: #26744 #26929 #38552
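
For the restart failure above, the suite polls until the full node count reappears after the restart; "expected to find 3 nodes but found only 2" means one node never came back within the 20s window. A minimal client-go sketch of counting Ready nodes (illustrative only, not the framework's node-readiness helper):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countReadyNodes returns how many nodes currently report NodeReady=True.
// A recovery wait like the one that failed above would poll a check of this
// kind until the count reaches the expected node count or the deadline passes.
func countReadyNodes(cs kubernetes.Interface) (int, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	ready := 0
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
				ready++
			}
		}
	}
	return ready, nil
}

func main() {
	fmt.Println("build a clientset, then compare countReadyNodes(cs) against the expected node count")
}
```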

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/755/
Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+NAME                                                ZONE           SIZE_GB  TYPE    STATUS
+bootstrap-e2e-c53a6235-02be-11e7-8a92-0242ac110009  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc421ef3120>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421818700>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  6 10:01:23.144: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  6 10:51:57.784: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  6 11:28:47.598: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-c6d2:
 container "kubelet": expected RSS memory (MB) < 104857600; got 104943616
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/756/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc420375a50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125
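
This namespace test deletes a namespace and then times out waiting for its contents to be cleaned up. One common way to express "the namespace and everything in it is gone" is to poll the namespace itself until the API returns NotFound; a minimal client-go sketch (waitForNamespaceGone is a hypothetical helper, not the test's own check):

```go
package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceGone polls until Get on the namespace returns NotFound,
// meaning termination (including pod cleanup) has fully completed. On timeout
// PollImmediate returns the same "timed out waiting for the condition" error
// seen in the failure above.
func waitForNamespaceGone(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully deleted
		}
		return false, err // nil err: still terminating; other err: abort
	})
}

func main() {
	// Clientset construction omitted; a call would look like:
	// err := waitForNamespaceGone(cs, "nsdeletetest", 5*time.Minute)
	_ = waitForNamespaceGone
}
```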

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  6 18:08:49.645: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-6zv6:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105861120
node bootstrap-e2e-minion-group-rb7c:
 container "kubelet": expected RSS memory (MB) < 104857600; got 106082304
node bootstrap-e2e-minion-group-xj7m:
 container "kubelet": expected RSS memory (MB) < 104857600; got 111169536
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc42256bd10>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  6 19:46:31.542: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/757/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:334
Expected
    <bool>: false
to equal
    <bool>: true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:5082

Issues about this test specifically: #28019

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  7 00:19:44.424: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-ldcg:
 container "kubelet": expected RSS memory (MB) < 104857600; got 104894464
node bootstrap-e2e-minion-group-tlrf:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105037824
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  7 00:35:23.591: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:363
Expected error:
    <*errors.errorString | 0xc422418920>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:356

Issues about this test specifically: #37479

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  6 21:40:55.247: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/758/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  7 06:00:48.185: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:236
Expected
    <bool>: false
to equal
    <bool>: true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:5082

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203acc70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  7 04:21:14.355: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-qqc2:
 container "kubelet": expected RSS memory (MB) < 104857600; got 109047808
node bootstrap-e2e-minion-group-zczf:
 container "kubelet": expected RSS memory (MB) < 104857600; got 106668032
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  7 04:34:07.792: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/759/
Multiple broken tests:

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421ba6100>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  7 11:20:34.636: Memory usage exceeding limits:
 node bootstrap-e2e-minion-group-qsk9:
 container "kubelet": expected RSS memory (MB) < 104857600; got 110514176
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc420407fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  7 10:28:09.364: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/760/
Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:125
Mar  7 13:00:02.519: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:94

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  7 16:31:00.617: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  7 16:49:00.452: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Expected
    <bool>: false
to equal
    <bool>: true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:5076

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/763/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  8 04:19:44.070: CPU usage exceeding limits:
 node bootstrap-e2e-minion-group-zzp8:
 container "kubelet": expected 95th% usage < 0.200; got 0.255
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:189

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Mar  8 05:00:20.679: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  8 02:00:47.862: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/764/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  8 09:30:01.001: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc4211f6860>: {
        s: "1 / 28 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                      NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-3blnz bootstrap-e2e-minion-group-xmhx Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:45:54 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  }]\n",
    }
    1 / 28 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-3blnz bootstrap-e2e-minion-group-xmhx Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:45:54 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc4215cb730>: {
        s: "2 / 28 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                      NODE                            PHASE   GRACE CONDITIONS\nfluentd-gcp-v2.0-hnfjd   bootstrap-e2e-minion-group-h642 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:28:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-08 07:00:31 -0800 PST ContainersNotReady containers with unready status: [fluentd-gcp]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:28:47 -0800 PST  }]\nkube-dns-806549836-3blnz bootstrap-e2e-minion-group-xmhx Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:45:54 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  }]\n",
    }
    2 / 28 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                            PHASE   GRACE CONDITIONS
    fluentd-gcp-v2.0-hnfjd   bootstrap-e2e-minion-group-h642 Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:28:46 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-08 07:00:31 -0800 PST ContainersNotReady containers with unready status: [fluentd-gcp]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:28:47 -0800 PST  }]
    kube-dns-806549836-3blnz bootstrap-e2e-minion-group-xmhx Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:45:54 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-08 06:38:58 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc4218dc070>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/765/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc42199e070>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Mar  8 13:35:40.077: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:421
Mar  8 15:03:24.226: error restarting apiserver: couldn't restart apiserver: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:401

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/766/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:319
Expected error:
    <*errors.errorString | 0xc4203bd1e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:287

Issues about this test specifically: #37259

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203bd1e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:216
Expected
    <bool>: false
to equal
    <bool>: true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:5080

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/769/
Multiple broken tests:

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc4219ee1f0>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/firewall.go:116
Expected error:
    <*errors.errorString | 0xc4219e8d10>: {
        s: "error waiting for expectedHosts: map[bootstrap-e2e-minion-group-qlv5:{} bootstrap-e2e-minion-group-zs4k:{}], hittedHosts: map[bootstrap-e2e-minion-group-zs4k:{} bootstrap-e2e-minion-group-qlv5:{}], count: 0, expected count: 15",
    }
    error waiting for expectedHosts: map[bootstrap-e2e-minion-group-qlv5:{} bootstrap-e2e-minion-group-zs4k:{}], hittedHosts: map[bootstrap-e2e-minion-group-zs4k:{} bootstrap-e2e-minion-group-qlv5:{}], count: 0, expected count: 15
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/firewall.go:115

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:391
error waiting for daemon pod template generation to be 1
Expected error:
    <*errors.errorString | 0xc4203c0fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:386

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:217
Expected error:
    <*errors.errorString | 0xc4203c0fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:125

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/772/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc421899f30>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                      NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-lmbhw bootstrap-e2e-minion-group-nvlw Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-10 02:07:15 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-lmbhw bootstrap-e2e-minion-group-nvlw Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-10 02:07:15 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc421899ee0>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                      NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-lmbhw bootstrap-e2e-minion-group-nvlw Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-10 02:07:15 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-lmbhw bootstrap-e2e-minion-group-nvlw Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-10 02:07:15 -0800 PST ContainersNotReady containers with unready status: [kubedns dnsmasq]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-10 00:39:19 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #34223

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:391
error waiting for daemon pod template generation to be 1
Expected error:
    <*errors.errorString | 0xc42043de70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:386

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/778/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc420c7e0a0>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                      NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-lth0z bootstrap-e2e-minion-group-vp7t Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-11 08:41:33 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-11 12:42:52 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-11 08:41:33 -0800 PST  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-lth0z bootstrap-e2e-minion-group-vp7t Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-11 08:41:33 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-11 12:42:52 -0800 PST ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-11 08:41:33 -0800 PST  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #36914

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:391
error waiting for daemon pod template generation to be 1
Expected error:
    <*errors.errorString | 0xc4203fce20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:386

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421efa0b0>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/782/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc420df2050>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:102
Expected error:
    <*errors.errorString | 0xc420d935b0>: {
        s: "1 / 29 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                      NODE                            PHASE   GRACE CONDITIONS\nkube-dns-806549836-5fcfs bootstrap-e2e-minion-group-kxvh Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 07:00:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 08:48:44 -0700 PDT ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 07:00:03 -0700 PDT  }]\n",
    }
    1 / 29 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                      NODE                            PHASE   GRACE CONDITIONS
    kube-dns-806549836-5fcfs bootstrap-e2e-minion-group-kxvh Running       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 07:00:03 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 08:48:44 -0700 PDT ContainersNotReady containers with unready status: [kubedns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 07:00:03 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:95

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:391
error waiting for daemon pod template generation to be 1
Expected error:
    <*errors.errorString | 0xc4203a1090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:386

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/784/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421c98040>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:391
error waiting for daemon pod template generation to be 1
Expected error:
    <*errors.errorString | 0xc420415e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:386

Failed: DiffResources {e2e.go}

Error: 16 leaked resources
[ instances ]
+NAME                  ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+bootstrap-e2e-master  us-central1-f  n1-standard-1               10.240.0.2   104.197.244.132  STOPPING
[ disks ]
+NAME                     ZONE           SIZE_GB  TYPE         STATUS
+bootstrap-e2e-master     us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd  us-central1-f  20       pd-ssd       READY
[ addresses ]
+NAME                     REGION       ADDRESS          STATUS
+bootstrap-e2e-master-ip  us-central1  104.197.244.132  IN_USE
[ routes ]
+bootstrap-e2e-038aa872-0786-11e7-9a70-42010af00002  bootstrap-e2e           10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-master  1000
[ routes ]
+default-route-362ecc865acefbc9                      bootstrap-e2e           0.0.0.0/0      default-internet-gateway                      1000
[ routes ]
+default-route-4d22d1ad0e02ea7e                      bootstrap-e2e           10.240.0.0/16                                                1000
[ firewall-rules ]
+bootstrap-e2e-default-internal-master  bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node    bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh              bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd              bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https             bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all               bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510

@calebamiles

Closing this issue due to pollution from unrelated test failures.

cc: @ethernetdan, @kubernetes/release-team, @kubernetes/test-infra-maintainers
