ci-kubernetes-e2e-gci-gke-test: broken test run #40787

Closed · k8s-github-robot opened this issue Feb 1, 2017 · 39 comments

Labels: kind/flake (categorizes issue or PR as related to a flaky test), sig/cluster-lifecycle (categorizes an issue or PR as relevant to SIG Cluster Lifecycle)

@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/294/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42238ccc0>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
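
Nearly every dump in this issue shares the Gomega assertion shape above: "Expected error: ... not to have occurred" is the failure message printed when a helper's non-nil error hits `Expect(err).NotTo(HaveOccurred())`. A minimal sketch of that pattern follows; `waitForPodsRunning` is an illustrative stand-in, not the actual e2e helper:

```go
// A minimal sketch, assuming it runs inside a Ginkgo suite (a fail handler
// must already be registered via RegisterFailHandler).
package e2esketch

import (
	"fmt"

	. "github.com/onsi/gomega"
)

// waitForPodsRunning stands in for an e2e helper that polls until `want`
// pods are running, returning an error like the one in the dump above.
func waitForPodsRunning(started, want int) error {
	if started < want {
		return fmt.Errorf("Only %d pods started out of %d", started, want)
	}
	return nil
}

func checkServicePods() {
	err := waitForPodsRunning(1, 3)
	// On a non-nil err, Gomega fails the spec and prints the error value
	// followed by "not to have occurred", matching the dump above.
	Expect(err).NotTo(HaveOccurred())
}
```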

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981
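
The bare "timed out waiting for the condition" text seen here (and in many entries below) is the message of `wait.ErrWaitTimeout` from `k8s.io/apimachinery/pkg/util/wait`, returned whenever a polled condition never becomes true before the deadline. A minimal sketch of how such a wait produces it, with an illustrative replica-count condition:

```go
// Sketch only: the condition function is illustrative, not the exact e2e code.
package e2esketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForReplicas(current func() int, want int) error {
	// Poll every 2s for up to 5m; if the condition never returns true, the
	// returned error's Error() string is exactly
	// "timed out waiting for the condition".
	return wait.Poll(2*time.Second, 5*time.Minute, func() (bool, error) {
		return current() >= want, nil
	})
}
```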

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc423300120>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4229c23f0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc42187e140>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4215b5000>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421b701a0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4225d8fa0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 31 19:49:42.703: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Previous issues for this suite: #37522 #38580 #39211

@k8s-github-robot added the kind/flake and priority/P2 labels on Feb 1, 2017
@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/300/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42038cec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Feb  2 16:22:22.344: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Feb  2 20:11:35.892: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Feb  2 15:40:23.520: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc42038cec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc42038cec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42038cec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42038cec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Expected error:
    <*errors.errorString | 0xc4232742d0>: {
        s: "want pod 'test-webserver-da05a836-e9c2-11e6-9709-0242ac110006' on 'gke-bootstrap-e2e-default-pool-202e718d-8h2x' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-da05a836-e9c2-11e6-9709-0242ac110006' on 'gke-bootstrap-e2e-default-pool-202e718d-8h2x' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:56

Issues about this test specifically: #29521

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc42118d930>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 10, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 10, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
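
The deployment failure above comes from an availability check: available replicas are compared against desired replicas minus the rolling-update `maxUnavailable` allowance. "min required: 18" is consistent with, say, 20 desired replicas and `maxUnavailable: 2`, though those exact inputs are an assumption. A sketch of the arithmetic:

```go
// Hedged sketch of the check behind "total pods available: 10, less than the
// min required: 18". The inputs (desired=20, maxUnavailable=2) are assumptions
// chosen to reproduce the message, not values taken from the test.
package main

import "fmt"

func checkMinAvailable(desired, maxUnavailable, available int) error {
	minRequired := desired - maxUnavailable
	if available < minRequired {
		return fmt.Errorf("total pods available: %d, less than the min required: %d",
			available, minRequired)
	}
	return nil
}

func main() {
	fmt.Println(checkMinAvailable(20, 2, 10))
	// total pods available: 10, less than the min required: 18
}
```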

@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1502/
Multiple broken tests:

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ routes ]
+default-route-ad3cc85a96b2c832  bootstrap-e2e  0.0.0.0/0      default-internet-gateway  1000
+default-route-bcb9c1266b93c3b9  bootstrap-e2e  10.240.0.0/16                            1000

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454
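
DiffResources compares a listing of the GCP project's resources taken before the run with one taken after; anything present only afterwards is reported as leaked, rendered as the `+` lines above. A minimal sketch of that diff idea (the real check in e2e.go gathers its listings from gcloud; everything here is illustrative):

```go
// Sketch of a before/after resource diff; not the actual e2e.go implementation.
package main

import "fmt"

func leaked(before, after []string) []string {
	seen := make(map[string]bool, len(before))
	for _, r := range before {
		seen[r] = true
	}
	var out []string
	for _, r := range after {
		if !seen[r] {
			out = append(out, "+"+r) // new since the run started: leaked
		}
	}
	return out
}

func main() {
	before := []string{"route-a"}
	after := []string{"route-a", "default-route-ad3cc85a96b2c832"}
	for _, l := range leaked(before, after) {
		fmt.Println(l) // +default-route-ad3cc85a96b2c832
	}
}
```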

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203d2f60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:143

Issues about this test specifically: #32639

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203d2f60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:143

Issues about this test specifically: #32646

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@calebamiles modified the milestone: v1.6 on Mar 3, 2017
@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1515/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 16:18:01.436: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420139400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 15:37:55.648: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219fea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 15:58:31.621: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422caca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 16:14:48.063: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42240aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-858e-pvc-f016cda6-0063-11e7-aa46-42010af00019  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Mar  3 15:06:39.103: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 16:01:44.758: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221dca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 16:08:25.981: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217bc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 16:04:58.654: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fbb400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  3 15:41:13.418: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42176e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1522/
Multiple broken tests:

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:152
Expected error:
    <*errors.errorString | 0xc421da6fb0>: {
        s: "gave up waiting for pod 'pvc-volume-tester-2f6ch' to be 'success or failure' after 15m0s",
    }
    gave up waiting for pod 'pvc-volume-tester-2f6ch' to be 'success or failure' after 15m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:232

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:99
Expected
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:84

Issues about this test specifically: #31936

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
waiting for tester pod to start
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:110

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:189
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:176

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:161
Failed waiting for pod wrapped-volume-race-ec36505c-019f-11e7-ae55-0242ac110006-44qg5 to enter running state
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:383

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:83
Expected error:
    <*errors.errorString | 0xc422c493f0>: {
        s: "expected pod \"downward-api-b7180007-01a9-11e7-ae55-0242ac110006\" success: gave up waiting for pod 'downward-api-b7180007-01a9-11e7-ae55-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "downward-api-b7180007-01a9-11e7-ae55-0242ac110006" success: gave up waiting for pod 'downward-api-b7180007-01a9-11e7-ae55-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:564
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30263

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1749

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:70
Expected error:
    <*errors.errorString | 0xc4214ad8c0>: {
        s: "expected pod \"var-expansion-1d737ae7-01a3-11e7-ae55-0242ac110006\" success: gave up waiting for pod 'var-expansion-1d737ae7-01a3-11e7-ae55-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "var-expansion-1d737ae7-01a3-11e7-ae55-0242ac110006" success: gave up waiting for pod 'var-expansion-1d737ae7-01a3-11e7-ae55-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29461

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:59
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:58

Issues about this test specifically: #31938

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:162
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:149

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:604
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:271
Expected
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:260

Issues about this test specifically: #31408

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
Expected
    <*errors.errorString | 0xc42039edf0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:147

Issues about this test specifically: #31873

@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1525/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Mar  6 05:47:34.089: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Mar  6 02:16:04.956: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:229

Issues about this test specifically: #27680 #38211

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ routes ]
+default-route-55f367ba7a411006  bootstrap-e2e  10.240.0.0/16                            1000
[ routes ]
+default-route-c1a1e8b09cd1a405  bootstrap-e2e  0.0.0.0/0      default-internet-gateway  1000

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510

@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1537/
Multiple broken tests:

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc4203aac20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42246a850>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422c4a100>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc4203aac20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc423458fb0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422480000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc4228de000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

@k8s-github-robot commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1542/
Multiple broken tests:

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421beff80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-secrets-vjk7d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-vjk7d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-vjk7d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37525
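
The dump above is a typed client-go `*errors.StatusError` (HTTP 500, reason `InternalError`), which is why the full ErrStatus struct is rendered. `k8s.io/apimachinery/pkg/api/errors` ships predicates for classifying such errors; a hypothetical retry wrapper, not the e2e framework's actual handling, might look like:

```go
// Sketch only: illustrates classifying a transient apiserver 500 with the
// apimachinery helpers; the retry policy here is an assumption.
package e2esketch

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

func withRetryOnInternalError(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil || !apierrors.IsInternalError(err) {
			return err // success, or a non-500 error worth surfacing as-is
		}
		time.Sleep(time.Second) // transient apiserver 500: back off and retry
	}
	return err
}
```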

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 11 11:12:44.838: Couldn't delete ns: "e2e-tests-deployment-sj0g1": unable to retrieve the complete list of server APIs: autoscaling/v1: an error on the server ("Internal Server Error: \"/apis/autoscaling/v1\"") has prevented the request from succeeding (&discovery.ErrGroupDiscoveryFailed{Groups:map[unversioned.GroupVersion]error{unversioned.GroupVersion{Group:"autoscaling", Version:"v1"}:(*errors.StatusError)(0xc42219c480)}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #34687 #38442

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33285

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:221
Mar 11 11:19:17.411: Unexpected error getting {batch v2alpha1 scheduledjobs}: an error on the server ("Internal Server Error: \"/apis/batch/v2alpha1/namespaces/e2e-tests-kubectl-jmznj/scheduledjobs\"") has prevented the request from succeeding (get scheduledjobs.batch)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:370

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223a3590>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223
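
The "Namespace ... is active" failures in this run all name the same namespace: the [Serial] scheduler tests appear to require that no other e2e test namespace is still active before they start, so one namespace stuck from an earlier horizontal-pod-autoscaling test fails every one of them. A rough sketch of that precondition (illustrative, not the actual scheduler_predicates.go code):

```go
// Sketch of a "no leftover e2e namespaces" precondition; all names are
// illustrative stand-ins.
package e2esketch

import (
	"fmt"
	"strings"
)

type ns struct {
	name  string
	phase string // "Active" or "Terminating"
}

func assertNoLeftoverE2ENamespaces(all []ns, current string) error {
	for _, n := range all {
		if strings.HasPrefix(n.name, "e2e-tests-") &&
			n.name != current && n.phase == "Active" {
			// Fails naming the offender, as in the dumps in this comment.
			return fmt.Errorf("Namespace %s is active", n.name)
		}
	}
	return nil
}
```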

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422688a10>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.205.61 --kubeconfig=/workspace/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kkc0m] []  <nil>  Error from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-kkc0m/pods/e2e-test-nginx-pod\\\"\") has prevented the request from succeeding (delete pods e2e-test-nginx-pod)\n [] <nil> 0xc422227740 exit status 1 <nil> <nil> true [0xc4201a0280 0xc4201a02a8 0xc4201a02d8] [0xc4201a0280 0xc4201a02a8 0xc4201a02d8] [0xc4201a02a0 0xc4201a02d0] [0x9731f0 0x9731f0] 0xc422746ae0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-kkc0m/pods/e2e-test-nginx-pod\\\"\") has prevented the request from succeeding (delete pods e2e-test-nginx-pod)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.205.61 --kubeconfig=/workspace/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kkc0m] []  <nil>  Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-kkc0m/pods/e2e-test-nginx-pod\"") has prevented the request from succeeding (delete pods e2e-test-nginx-pod)
     [] <nil> 0xc422227740 exit status 1 <nil> <nil> true [0xc4201a0280 0xc4201a02a8 0xc4201a02d8] [0xc4201a0280 0xc4201a02a8 0xc4201a02d8] [0xc4201a02a0 0xc4201a02d0] [0x9731f0 0x9731f0] 0xc422746ae0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-kkc0m/pods/e2e-test-nginx-pod\"") has prevented the request from succeeding (delete pods e2e-test-nginx-pod)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213c7310>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:219
Expected error:
    <*errors.StatusError | 0xc42219cd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nsdeletetest-8jsd0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nsdeletetest-8jsd0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nsdeletetest-8jsd0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:136

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34250

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:306

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422c191b0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d35c30>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 11 11:11:41.261: Couldn't delete ns: "e2e-tests-init-container-0198c": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-init-container-0198c\"") has prevented the request from succeeding (delete namespaces e2e-tests-init-container-0198c) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-init-container-0198c\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-init-container-0198c)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc42212d220), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422c16970>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42276c380>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 11 11:14:23.627: Couldn't delete ns: "e2e-tests-e2e-privilegedpod-8rmx9": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-e2e-privilegedpod-8rmx9/jobs\"") has prevented the request from succeeding (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-e2e-privilegedpod-8rmx9/jobs\\\"\") has prevented the request from succeeding (get jobs.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc42275d8b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34104

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 11 11:15:19.677: Couldn't delete ns: "e2e-tests-containers-hsl4z": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-containers-hsl4z/jobs\"") has prevented the request from succeeding (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-containers-hsl4z/jobs\\\"\") has prevented the request from succeeding (get jobs.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4226c3860), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #36706

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4233dc950>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33887

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34064

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Mar 11 10:27:27.056: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Issues about this test specifically: #26134

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421408840>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d887c0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422201510>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 11 11:18:40.286: Couldn't delete ns: "e2e-tests-services-j5p87": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-j5p87/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-j5p87/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4226e6af0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #38174

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ad1e90>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-z86k9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974
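
These SchedulerPredicates failures are all the same precondition tripping: before each serial scheduling test, the suite (scheduler_predicates.go:78) waits for leftover e2e test namespaces to finish deleting, and here a namespace from an earlier horizontal-pod-autoscaling run was still active. For anyone reproducing this by hand, here is a minimal sketch of that check, assuming current client-go signatures (the 2017-era in-tree client differed) rather than the framework's actual helper:

```go
// Sketch: list leftover e2e test namespaces that would block the
// SchedulerPredicates precondition. Illustration only -- not the e2e
// framework's code; uses present-day client-go signatures.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nss, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range nss.Items {
		// e2e namespaces are prefixed "e2e-tests-"; any still listed here
		// (Active or Terminating) blocks the serial scheduler tests.
		if strings.HasPrefix(ns.Name, "e2e-tests-") {
			fmt.Printf("%s\t%s\n", ns.Name, ns.Status.Phase)
		}
	}
}
```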

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:268
Expected error:
    <*errors.StatusError | 0xc42136a680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-12gnn/replicationcontrollers\\\"\") has prevented the request from succeeding (post replicationcontrollers)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "replicationcontrollers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-12gnn/replicationcontrollers\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-12gnn/replicationcontrollers\"") has prevented the request from succeeding (post replicationcontrollers)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:252

Issues about this test specifically: #34372
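
Several failures in this run share one symptom: the apiserver answered 500 on namespace-scoped list/create paths, which also cascades into the "Couldn't delete ns" teardown errors above. If the flake is still live, replaying the request shows whether the 500 persists. A minimal sketch, assuming current client-go signatures, with "default" standing in for the test's generated namespace:

```go
// Sketch: replay the request the tests saw 500s on, via the client's
// REST layer. The namespace below is a placeholder; substitute one from
// the failing run. Present-day client-go signatures, illustration only.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/default/replicationcontrollers").
		DoRaw(context.TODO())
	if err != nil {
		// A StatusError with Code 500 here reproduces the flake.
		panic(err)
	}
	fmt.Println(string(body))
}
```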

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1544/
Multiple broken tests:

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:38:47.038: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cb2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203acca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 05:51:47.953: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421814000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 07:33:06.742: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421910000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:41:58.186: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a17400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 07:23:53.684: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e37400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 07:45:01.991: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f73400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:48:20.958: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420971400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:24:33.248: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42287aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:31:03.694: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225b6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:14:57.265: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b4ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc421502100>: {
        s: "service verification failed for: 10.75.245.130\nexpected [service1-4lhc7 service1-jsbn1 service1-xs8db]\nreceived []",
    }
    service verification failed for: 10.75.245.130
    expected [service1-4lhc7 service1-jsbn1 service1-xs8db]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298
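
The "received []" here means requests against the service IP 10.75.245.130 never reached any of the three service1 pods. A quick check is whether the service's Endpoints object was ever populated with backend IPs. A minimal sketch, assuming current client-go signatures, with a placeholder for the test's generated namespace:

```go
// Sketch: dump the Endpoints object behind a service VIP to see whether
// any pod IPs are registered. The namespace is a placeholder for the
// test's generated one; present-day client-go signatures, illustration only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ep, err := cs.CoreV1().Endpoints("e2e-tests-services-placeholder").
		Get(context.TODO(), "service1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range ep.Subsets {
		for _, a := range s.Addresses {
			fmt.Println("ready backend:", a.IP)
		}
		for _, a := range s.NotReadyAddresses {
			fmt.Println("not-ready backend:", a.IP)
		}
	}
}
```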

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:56:03.714: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422cf9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420983b40>: {
        s: "5 / 14 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7e944621-67p8 gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:21:28 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:22:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:15:05 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-zmh23                               gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:39:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-7e944621-67p8            gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:21:28 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:21:29 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:15:05 -0700 PDT  }]\nkubernetes-dashboard-3543765157-xnjh8                              gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:39:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  }]\nl7-default-backend-2234341178-jrkn8                                gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:39:10 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  }]\n",
    }
    5 / 14 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7e944621-67p8 gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:21:28 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:22:10 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:15:05 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-zmh23                               gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:39:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-7e944621-67p8            gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:21:28 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 01:21:29 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:15:05 -0700 PDT  }]
    kubernetes-dashboard-3543765157-xnjh8                              gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:39:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  }]
    l7-default-backend-2234341178-jrkn8                                gke-bootstrap-e2e-default-pool-7e944621-67p8 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:39:10 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-12 04:38:59 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:35:28.447: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422875400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:21:19.804: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c97400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29710

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:09:42.939: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219a5400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 07:29:55.174: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422c91400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc422eda390>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-7e944621-67p8 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-7e944621-67p8 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
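
The restart test reboots every node and then polls each node's reported boot ID, since a changed boot ID is the signal that the machine actually restarted; here gke-bootstrap-e2e-default-pool-7e944621-67p8 never reported a new one. A minimal sketch of reading the value being polled, assuming current client-go signatures:

```go
// Sketch: print each node's boot ID (the field the Restart test waits
// to see change after rebooting nodes). Present-day client-go
// signatures, illustration only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// BootID is reported by the kubelet from
		// /proc/sys/kernel/random/boot_id; it changes on every reboot.
		fmt.Printf("%s\t%s\n", n.Name, n.Status.NodeInfo.BootID)
	}
}
```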

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:01:21.527: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42177ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc422bf9c00>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:27:48.385: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421750000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:04:32.779: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224c6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 05:44:54.073: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225fc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 12 06:45:04.614: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b26a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

@bowei
Member

bowei commented Mar 13, 2017

nodes are not ready
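
Expanding on that: most failures in this run are the teardown check "All nodes should be ready after test" flagging the same node object, so one node going NotReady (plausibly the one stuck in the Restart test above) fails every subsequent test. A minimal sketch of that readiness check, assuming current client-go signatures rather than the framework's helper:

```go
// Sketch of the check behind "All nodes should be ready after test":
// list every node and require a NodeReady condition with status True.
// Not the e2e framework's code; present-day client-go signatures.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", n.Name, ready)
	}
}
```

Running kubectl describe node on the flagged node would then show the failing condition and recent events.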

@davidopp added the sig/testing label Mar 13, 2017
@fejta added the sig/cluster-lifecycle label and removed the sig/testing and team/test-infra labels Mar 13, 2017
@fejta assigned mikedanese and unassigned rmmh Mar 13, 2017
@fejta
Contributor

fejta commented Mar 13, 2017

Assigning to cluster lifecycle per bowei's comment.

@roberthbailey
Contributor

The recent failures have the same root cause as #42934, which @nikhiljindal's change should fix shortly. Then we can see whether this is still flaking.

@ethernetdan
Contributor

@roberthbailey @nikhiljindal is this a blocker for 1.6?

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1591/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422017b30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142
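
Every scheduler-predicates and resize failure in this run is the same 5-minute wait (scheduler_predicates.go:93) on kube-system pods: a heapster-nanny container stuck in ContainersNotReady keeps its pod unready, and each serial test re-fails the precondition. A minimal sketch listing the pods that would trip that wait, assuming current client-go signatures:

```go
// Sketch: list kube-system pods that are not both Running and Ready --
// the condition the serial e2e tests wait on before starting. Not the
// framework's code; present-day client-go signatures, illustration only.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
				ready = true
			}
		}
		// Any pod printed here would fail the suite's precondition.
		if p.Status.Phase != v1.PodRunning || !ready {
			fmt.Printf("%s\t%s\tready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}
}
```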

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217d6790>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Mar 14 17:08:16.403: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224906b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Mar 14 16:06:17.158: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421366a30>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc42084aa50>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Mar 14 19:02:10.950: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d60fe0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Mar 14 20:35:21.713: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42145a1b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421706390>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Mar 14 15:57:09.525: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211b06f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-25r04 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422207110>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Mar 14 17:31:47.343: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f10780>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217c2f20>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215854d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b833b0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                 NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-25r04  gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]\nl7-default-backend-2234341178-f47r5 gke-bootstrap-e2e-default-pool-f022ad5c-94ww Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:11 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:11 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                 NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-25r04  gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:19 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:00 -0700 PDT  }]
    l7-default-backend-2234341178-f47r5 gke-bootstrap-e2e-default-pool-f022ad5c-94ww Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:11 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 12:59:11 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422584370>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4206960e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-599s3 gke-bootstrap-e2e-default-pool-f022ad5c-vmx9 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-14 14:56:54 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

@roberthbailey
Contributor

@ethernetdan -- these look like failures from tests running against the master branch. Shouldn't we only be blocking 1.6 on tests that run against the 1.6 branch?

@roberthbailey
Contributor

The most recent run of this test passed. Looking at the test history (setting aside the recent streak of 5m failures, which should be ignored), this suite seems to either pass cleanly or fail a huge number of tests at once; it isn't single flakes. My guess is that one failure cascades through the rest of the test run.

@ethernetdan
Contributor

@roberthbailey we have been fast-forwarding onto the 1.6 branch several times a day, so the two branches should be relatively similar.

@ethernetdan
Contributor

A couple of flakes, but otherwise the suite seems stable; I'll keep an eye on it.

@ethernetdan modified the milestones: v1.6.1, v1.6 on Mar 16, 2017
@roberthbailey
Contributor

I think this test suite is pinned to a static version and is not tracking either the master branch or a release branch, so it shouldn't factor into the decision of whether to cut a release.

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1602/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421aec500>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Expected error:
    <*errors.errorString | 0xc42137c0b0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42235c1a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42149c0a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c914a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Mar 18 04:15:50.298: Pods on node gke-bootstrap-e2e-default-pool-164a239d-7v1d are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203d4ff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33285

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Mar 18 04:32:00.654: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
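
Each of the HPA failures in this run is the same timeout: the test drives CPU load at the scale target and then waits up to 15m for the replica count to converge (the wait at autoscaling_utils.go:285). A minimal sketch of that style of wait, assuming a Deployment target and a hypothetical helper name (waitForReplicas), not the suite's own autoscaling utilities:

// Minimal sketch: wait up to 15m for a Deployment's ready replica count
// to reach the expected size, logging progress on each poll.
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas (hypothetical name) polls the Deployment status until
// ReadyReplicas matches want, or the 15m budget expires.
func waitForReplicas(cs kubernetes.Interface, ns, name string, want int32) error {
	return wait.PollImmediate(15*time.Second, 15*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("deployment %s: %d/%d replicas ready\n", name, d.Status.ReadyReplicas, want)
		return d.Status.ReadyReplicas == want, nil
	})
}

If the metrics pipeline (heapster in this era) is itself unhealthy, as the Pending heapster pods above suggest, the autoscaler never sees utilization data and these waits time out no matter how long the budget is.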

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Mar 18 05:36:55.498: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Issues about this test specifically: #26134 #43340

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203d4ff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Mar 18 07:24:45.991: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421f12100>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Mar 18 06:23:04.325: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216846c0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Mar 18 08:48:32.732: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Mar 18 04:11:53.419: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1582

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
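
The guestbook check simply polls the frontend service until it returns content, giving up after 600 seconds. A minimal sketch of such a poll loop (the URL here is a placeholder; the real test resolves the endpoint through the framework, and waitForContent is a hypothetical name):

// Minimal sketch: poll an HTTP endpoint until it serves a non-empty 200
// response, or the deadline passes.
package sketch

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForContent (hypothetical name) retries every 5s until the URL serves
// content or the timeout (600s in the guestbook test) expires.
func waitForContent(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && len(body) > 0 {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("no content from %s within %v", url, timeout)
}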

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Mar 18 10:43:50.760: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421898650>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421380760>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e43c20>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221d9740>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212ae3d0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc4203d4ff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221f2780>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Mar 18 04:00:22.495: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421640100>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216dd330>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                  NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]\nkube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                  NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-0vk54   gke-bootstrap-e2e-default-pool-164a239d-w1lq Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:28 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:10 -0700 PDT  }]
    kube-dns-autoscaler-2715466192-r3r80 gke-bootstrap-e2e-default-pool-164a239d-rchg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:36:02 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-18 03:35:53 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

@k8s-github-robot

Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1604/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:11:14.126: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420212ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500
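
The "All nodes should be ready after test" failures in this run come from the framework's post-test assertion that every node reports a Ready condition of True (framework.go:438); the test body itself may have passed. A minimal client-go sketch of that kind of check, using the hypothetical name notReadyNodes rather than the framework's actual helper:

// Minimal sketch: list all nodes and report any whose NodeReady condition
// is not True, mirroring the post-test assertion these failures trip.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notReadyNodes (hypothetical name) returns the names of nodes that are
// not reporting Ready=True.
func notReadyNodes(cs kubernetes.Interface) ([]string, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			bad = append(bad, n.Name)
		}
	}
	if len(bad) > 0 {
		fmt.Printf("not ready nodes: %v\n", bad)
	}
	return bad, nil
}

One flapping node therefore fails every test that finished while it was down, which fits the pattern of many unrelated tests failing with the same message in this run.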

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 20:48:06.298: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f5b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-6652-pvc-2f4d81a9-0c50-11e7-b2c5-42010af00003  us-central1-f  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153
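
DiffResources flags cloud resources that exist after the run but did not exist before it; here the leak is a GCE persistent disk provisioned for a PVC (the gke-...-pvc-... disk) that was never deleted during teardown. One way to spot likely culprits from inside the cluster, before resorting to the cloud-side listing the job diffs, is to look for Released PVs with a Delete reclaim policy whose deletion evidently failed. A minimal client-go sketch with the hypothetical name suspectPVs:

// Minimal sketch: flag PersistentVolumes that were released but, despite a
// Delete reclaim policy, still exist -- their backing disks may leak.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// suspectPVs (hypothetical name) prints PVs whose backing cloud disk is a
// candidate for the kind of leak DiffResources reports.
func suspectPVs(cs kubernetes.Interface) error {
	pvs, err := cs.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pv := range pvs.Items {
		if pv.Status.Phase == corev1.VolumeReleased &&
			pv.Spec.PersistentVolumeReclaimPolicy == corev1.PersistentVolumeReclaimDelete {
			fmt.Printf("PV %s released but not deleted; backing disk may leak\n", pv.Name)
		}
	}
	return nil
}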

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 20:51:19.667: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e144f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:22:34.171: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420212ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1087
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.163.85 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-164q1] []  <nil> Created e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15\nScaling up e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc420c1cf30 exit status 1 <nil> <nil> true [0xc42018add8 0xc42018adf0 0xc42018ae18] [0xc42018add8 0xc42018adf0 0xc42018ae18] [0xc42018ade8 0xc42018ae08] [0x973730 0x973730] 0xc4211d89c0 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15\nScaling up e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.163.85 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-164q1] []  <nil> Created e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15
    Scaling up e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc420c1cf30 exit status 1 <nil> <nil> true [0xc42018add8 0xc42018adf0 0xc42018ae18] [0xc42018add8 0xc42018adf0 0xc42018ae18] [0xc42018ade8 0xc42018ae08] [0x973730 0x973730] 0xc4211d89c0 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15
    Scaling up e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-b714fd4a46a9139c3c30afb90febac15 up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:169

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc42038cd90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34317

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:34:26.995: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a7cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 22:41:00.185: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215beef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 22:01:36.894: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a20ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc4216f8000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 20:54:51.261: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42160d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 22:21:21.206: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ec58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 23:31:39.219: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4207098f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:30:24.639: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b398f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc42038cd90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32830
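
`timed out waiting for the condition` is the Error() string of wait.ErrWaitTimeout from the Kubernetes wait utilities, returned whenever a polled condition never turns true within its deadline; it identifies the polling helper, not the condition that failed. A minimal sketch, assuming the apimachinery import path (1.5-era trees vendored the same package under pkg/util/wait; the intervals here are illustrative):

```go
package e2esketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCondition polls check every two seconds for up to five minutes.
// If check never returns true, wait.Poll returns wait.ErrWaitTimeout,
// whose message is exactly "timed out waiting for the condition".
func waitForCondition(check func() (bool, error)) error {
	return wait.Poll(2*time.Second, 5*time.Minute, wait.ConditionFunc(check))
}
```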

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 22:25:42.468: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421af84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:08:00.798: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d878f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 22:04:50.420: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42063d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:50:52.877: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216e24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 23:19:53.450: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e3aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:58:10.873: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421626ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:01:36.120: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fdd8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 23:16:38.921: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209fcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 22:56:31.485: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f9c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:27:13.252: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 21:54:11.670: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42130eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 18 22:29:26.945: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206d24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Mar 18 20:16:58.719: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1605/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42038c810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Mar 19 08:03:47.334: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc42038c810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Mar 19 09:19:04.058: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Mar 19 10:28:23.724: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
    <*errors.errorString | 0xc42038c810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:283

Issues about this test specifically: #37144

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc42011dba0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 15, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 15, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
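
`total pods available: 15, less than the min required: 18` comes from the deployment waiter: during a rolling update the test requires the deployment's available replica count to stay at or above the desired count minus the rolling-update maxUnavailable budget. A hypothetical restatement of that arithmetic (names and the exact accounting are assumptions, not the e2e framework's code):

```go
// minRequired and rolloutHealthy restate the availability check: with, say,
// 20 desired replicas and maxUnavailable=2, at least 18 pods must be
// available at all times, so 15 available pods fails the wait.
func minRequired(desired, maxUnavailable int32) int32 {
	return desired - maxUnavailable
}

func rolloutHealthy(available, desired, maxUnavailable int32) bool {
	return available >= minRequired(desired, maxUnavailable)
}
```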

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc42038c810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1608/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422672c00>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918
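
`Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active` repeats across every SchedulerPredicates failure in this build: these [Serial] specs refuse to start while a namespace from an earlier spec still exists (scheduler_predicates.go:78), so one HPA namespace stuck in deletion fails the whole family at once. A hypothetical distillation of that precondition:

```go
package e2esketch

import "strings"

// leakedNamespaces returns any e2e test namespaces still present; the
// [Serial] scheduler specs fail fast with "Namespace ... is active" if
// this list is non-empty. Helper name and prefix check are hypothetical.
func leakedNamespaces(names []string) []string {
	var leaked []string
	for _, name := range names {
		if strings.HasPrefix(name, "e2e-tests-") {
			leaked = append(leaked, name)
		}
	}
	return leaked
}
```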

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226721e0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ac2700>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4236e0340>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203aadf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42304a0b0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4229ca210>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d06820>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422593990>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423ae6bc0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*errors.StatusError | 0xc4223d4d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (get replicationcontrollers rc)",
            Reason: "ServerTimeout",
            Details: {
                Name: "rc",
                Group: "",
                Kind: "replicationcontrollers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (get replicationcontrollers rc)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:250

Issues about this test specifically: #28657 #30519 #33878
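
The StatusError dump above is the apiserver telling the client to back off: Reason ServerTimeout means the GET itself timed out server-side, not that the HPA logic was wrong. apimachinery ships predicate helpers for classifying such errors; a sketch of a retry-worthy check (the two predicates are real apimachinery helpers, but treating them as the complete retry condition is an assumption for illustration):

```go
package e2esketch

import apierrors "k8s.io/apimachinery/pkg/api/errors"

// shouldRetry reports whether an apiserver error looks transient, such as
// the ServerTimeout StatusError dumped above.
func shouldRetry(err error) bool {
	return apierrors.IsServerTimeout(err) || apierrors.IsTooManyRequests(err)
}
```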

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4201ff9d0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4233dda40>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4232f3a10>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4232f3ae0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4233f1840>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-dhgr2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1612/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:71
Waiting for pods in namespace "e2e-tests-disruption-5v6qt" to be ready
Expected error:
    <*errors.errorString | 0xc4203d3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:247

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Waiting for pods in namespace "e2e-tests-disruption-06r5b" to be ready
Expected error:
    <*errors.errorString | 0xc4203d3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:247

Issues about this test specifically: #32644

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203d3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:143

Issues about this test specifically: #32639

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Waiting for pods in namespace "e2e-tests-disruption-b8psz" to be ready
Expected error:
    <*errors.errorString | 0xc4203d3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:247

Issues about this test specifically: #32753 #34676

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1621/
Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4229fb760>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-14813d36-px10 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-14813d36-px10 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
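
The restart test decides whether a node actually rebooted by watching its boot ID: each kernel boot generates a fresh random ID, surfaced through the node's status, so an unchanged value after the restart window means the VM never went down or never came back. A hypothetical restatement of the check:

```go
// bootIDChanged reports whether a node rebooted between two observations of
// its status boot ID. An empty "before" means the first read failed, so no
// conclusion can be drawn. Helper name is hypothetical.
func bootIDChanged(before, after string) bool {
	return before != "" && before != after
}
```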

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 15:34:30.928: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42123b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420378ca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 17:01:42.715: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224ad8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:20:09.002: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42163e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 17:18:22.310: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42142b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420378ca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:03:41.703: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216b8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421357e30>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:20, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63625985503, nsec:0, loc:(*time.Location)(0x3f5f360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63625985503, nsec:0, loc:(*time.Location)(0x3f5f360)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:20, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63625985503, nsec:0, loc:(*time.Location)(0x3f5f360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63625985503, nsec:0, loc:(*time.Location)(0x3f5f360)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:45:20.684: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d22ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:52:01.883: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42142a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:41:04.054: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b638f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 17:08:07.170: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206daef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 17:04:55.952: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4228c0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:48:48.811: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4207724f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 15:57:00.649: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212e04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:58:31.325: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42142b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 17:14:33.959: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42248c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420378ca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:31:01.878: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422a3aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 24 16:55:15.020: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422106ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1622/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Mar 24 21:33:48.411: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Issues about this test specifically: #26134 #43340

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:54
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #37502

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421951f90>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876
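
The Pending kube-dns pod in the dump above is the common cause behind this build's DNS and pod-networking timeouts: the health gate counts a kube-system pod only when its phase is Running and its Ready condition is True, and with kube-dns failing both, every spec that depends on cluster DNS times out. A sketch of that gate using core/v1 types (helper name hypothetical):

```go
package e2esketch

import v1 "k8s.io/api/core/v1"

// podRunningAndReady mirrors the "RUNNING and READY" gate: a system pod
// counts as healthy only when its phase is Running and its Ready condition
// is True. The Pending kube-dns pod above fails both checks.
func podRunningAndReady(pod *v1.Pod) bool {
	if pod.Status.Phase != v1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}
```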

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34250

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #36271

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Mar 24 21:38:25.834: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc423171e10>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #36178

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42230bdd0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34317

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc420745b80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                       NODE                                         PHASE   GRACE CONDITIONS\nkube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                       NODE                                         PHASE   GRACE CONDITIONS
    kube-dns-4101612645-q7hgc gke-bootstrap-e2e-default-pool-3af69380-g9kb Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:33 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-24 22:38:32 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33887

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34104

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420382bd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1630/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203acd40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc4203acd40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34317

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc4203acd40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203acd40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32436 #37267

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1631/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:48:53.886: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421832278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297
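
Most failures in this run are not faults in the individual tests: the framework's post-test check kept tripping on a not-ready node. A sketch of the readiness predicate involved, assuming current k8s.io/api/core/v1 condition semantics (the dump above shows the 2017 code still used the internal api.Node type; the helper name is illustrative):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // isNodeReady: a node passes the post-test check only when its
    // NodeReady condition is present and True; any other status puts it
    // in the "Not ready nodes" list printed above.
    func isNodeReady(node *v1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == v1.NodeReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        return false
    }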

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:17:44.177: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421813678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36554

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 18:54:11.512: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42187e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc420370d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4222765f0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
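
The error text here points at the detection mechanism: the restart test treats a node as rebooted only once the kubelet republishes a new boot ID in node status, so a node that never comes back up (or never actually reboots) times out exactly this way. The essence of that comparison, as a sketch (the Status.NodeInfo.BootID field path is real; the polling wrapper is simplified away):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // bootIDChanged: the kubelet publishes a fresh BootID after a reboot,
    // so a value that stays unchanged for the whole timeout produces the
    // "boot ID to change: timed out" error above.
    func bootIDChanged(before string, node *v1.Node) bool {
        return node.Status.NodeInfo.BootID != before
    }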

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:14:32.039: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d5d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:03:11.331: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421237678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:56:05.701: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42164d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:10:42.915: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421670c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:42:36.760: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42226cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:22:54.507: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216dec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:37:57.936: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fb3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 17:30:17.926: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220e64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31873

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:31:39.453: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a90c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29828

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:59:54.109: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a14278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:16:27.626: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42140ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38083

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 17:26:48.734: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42087b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:19:41.191: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42140c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:06:52.297: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42227a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:56:48.389: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422724278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:11:13.668: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422361678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4222dfde0>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:07:00.761: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a17678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 18:50:37.057: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220824f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:33:21.966: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f76c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:27:53.987: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421869678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:31:27.096: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a66278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:20:59.554: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421868278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc420370d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33887

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:02:02.421: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421434c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420370d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4201a1570>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:07:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0            gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:17 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:07:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0            gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:17 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
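
The SchedulerPredicates suites share a pre-test gate that refuses to run until every pod in kube-system is Running and Ready, which is why three different specs in this run print the identical fluentd/kube-proxy table. A sketch of the per-pod predicate behind that gate, assuming v1 pod-condition semantics (the helper name is illustrative):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // podRunningAndReady mirrors the gate: a pod counts only once its
    // phase is Running and its PodReady condition is True; anything else
    // lands in the "NOT in RUNNING and READY state" table above.
    func podRunningAndReady(pod *v1.Pod) bool {
        if pod.Status.Phase != v1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == v1.PodReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        return false
    }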

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422094090>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:07:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0            gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:17 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:07:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0            gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:17 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4201f7070>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:07:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0            gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:17 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:07:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-ccd48d78-p7l0            gke-bootstrap-e2e-default-pool-ccd48d78-p7l0 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:17 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-27 16:05:15 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:28:23.935: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c00c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:30:08.871: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220b5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:58:48.610: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221d0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:13:50.250: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42234e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:44:20.797: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42145ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.errorString | 0xc421c42000>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:397

Issues about this test specifically: #37373

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 18:30:59.136: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42145eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 18:58:22.298: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bb04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 18:16:35.903: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217138f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:24:12.771: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220d9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:52:08.893: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422850278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 23:52:42.936: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226c0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:17:16.526: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c02c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32936

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:43:04.842: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224d5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 19:58:09.169: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422360c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 19:01:34.789: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220458f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38308

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:04:34.639: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fecc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:42:27.456: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219acc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:26:50.473: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421236278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:24:42.703: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421becc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:24:51.556: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cb7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 22:48:55.283: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a17678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:45:42.233: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219e0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:21:34.353: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422235678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:01:22.389: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ceb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 21:20:49.302: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42131ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420370d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:35:57.521: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216de278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:39:10.650: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fcc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc420370d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc420370d30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 27 20:52:56.193: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212d6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1633/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #36178

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34104

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32375

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203d2f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32830

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1634/
Multiple broken tests:

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:32:41.235: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42217a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4221427d0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-65f1734f-pcsw boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-65f1734f-pcsw boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:53:38.045: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217144f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 20:53:50.968: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42029f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:46:32.629: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223b8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cfa3e0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-65f1734f-pcsw gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:45:23 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-65f1734f-pcsw            gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-65f1734f-pcsw gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:45:23 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-65f1734f-pcsw            gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:52:08.473: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217d2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 23:58:33.723: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42232d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 29 00:01:47.029: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b5aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 20:30:56.080: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421860ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 23:34:18.704: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ad84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:36:09.551: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fcc4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:28:59.004: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218a78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:04:29.762: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223ad8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:50:21.426: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224484f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:148
Expected error:
    <*errors.errorString | 0xc4203aa340>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:122

Issues about this test specifically: #31428
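
[Editor's note] The bare "timed out waiting for the condition" string recurring through these dumps is not test-specific: it is the message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned whenever a polled condition never becomes true within its budget. A minimal sketch of how any of these waits produces it:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll every 2s with a 10s budget; the condition below never holds,
        // so Poll returns wait.ErrWaitTimeout.
        err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
            return false, nil // e.g. "are all daemon pods running yet?"
        })
        fmt.Println(err) // prints: timed out waiting for the condition
    }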

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 23:41:26.058: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222cd8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 20:41:25.030: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a184f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:48:39.187: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42029f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:59:11.292: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218844f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:40:08.911: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218e4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 20:34:39.192: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c58ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216fc4f0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-65f1734f-pcsw gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:45:23 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-65f1734f-pcsw            gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-65f1734f-pcsw gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:45:23 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-65f1734f-pcsw            gke-bootstrap-e2e-default-pool-65f1734f-pcsw Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-28 19:44:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:33:39.460: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42241a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:07:49.623: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218e44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 20:57:24.188: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42241a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:22:22.927: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218e44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203aa340>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 29 00:17:07.291: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cad8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:01:18.898: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421860ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:19:09.726: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42255a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 29 00:20:24.719: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222ecef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:55:25.717: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222038f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 22:15:37.081: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d418f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:36:52.724: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421080ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 29 00:13:53.933: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c88ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 23:55:05.571: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221798f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 20:37:52.370: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222d64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 23:24:05.332: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42229a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 23:31:05.141: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 21:43:20.645: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f7c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

@k8s-github-robot


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1637/
Multiple broken tests:

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc4220a4540>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177
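
[Editor's note] Every conformance failure in this run has the same shape: a short-lived test pod never reaches a terminal phase within the 5m0s budget, which points at the node rather than at the individual volume, secret, or downward-API feature under test. A minimal sketch of the kind of poll behind util.go:2177, modeled loosely on the framework's WaitForPodSuccessInNamespace (names here are illustrative):

    package e2esketch

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodSuccess polls a pod until it is Succeeded, fails fast if it
    // is Failed, and otherwise gives up after five minutes -- the behavior
    // behind "gave up waiting for pod ... after 5m0s".
    func waitForPodSuccess(c kubernetes.Interface, ns, name string) error {
        return wait.Poll(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case v1.PodSucceeded:
                return true, nil
            case v1.PodFailed:
                return false, fmt.Errorf("pod %q failed", name)
            default:
                return false, nil // Pending or Running; keep polling
            }
        })
    }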

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc420d306b0>: {
        s: "expected pod \"pod-3e5ce051-14e0-11e7-b1b6-0242ac11000b\" success: gave up waiting for pod 'pod-3e5ce051-14e0-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-3e5ce051-14e0-11e7-b1b6-0242ac11000b" success: gave up waiting for pod 'pod-3e5ce051-14e0-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
Expected error:
    <*errors.errorString | 0xc421c64310>: {
        s: "expected pod \"pod-secrets-1be8901b-14e1-11e7-b1b6-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-1be8901b-14e1-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-1be8901b-14e1-11e7-b1b6-0242ac11000b" success: gave up waiting for pod 'pod-secrets-1be8901b-14e1-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37529

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc4220bc1a0>: {
        s: "expected pod \"pod-configmaps-db477c6e-14e9-11e7-b1b6-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-db477c6e-14e9-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-db477c6e-14e9-11e7-b1b6-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-db477c6e-14e9-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #27245

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc421ca57e0>: {
        s: "expected pod \"pod-cddf44ad-14e5-11e7-b1b6-0242ac11000b\" success: gave up waiting for pod 'pod-cddf44ad-14e5-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-cddf44ad-14e5-11e7-b1b6-0242ac11000b" success: gave up waiting for pod 'pod-cddf44ad-14e5-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37500

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:180
Expected error:
    <*errors.errorString | 0xc42151b1c0>: {
        s: "expected pod \"downwardapi-volume-d7f97d8d-14e1-11e7-b1b6-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-d7f97d8d-14e1-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-d7f97d8d-14e1-11e7-b1b6-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-d7f97d8d-14e1-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc4220a8340>: {
        s: "expected pod \"pod-secrets-4cc30136-14e7-11e7-b1b6-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-4cc30136-14e7-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-4cc30136-14e7-11e7-b1b6-0242ac11000b" success: gave up waiting for pod 'pod-secrets-4cc30136-14e7-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29221

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc4220c9160>: {
        s: "expected pod \"downwardapi-volume-e208ea66-14e8-11e7-b1b6-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-e208ea66-14e8-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-e208ea66-14e8-11e7-b1b6-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-e208ea66-14e8-11e7-b1b6-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36300

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc42038cd60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1640/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:59:09.556: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220818f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc42038ad60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:18:03.473: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213f18f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:14:51.806: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f518f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc42038ad60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 23:41:37.202: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420212ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:34:47.377: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223e78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 21:09:52.157: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420758ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Mar 30 18:34:12.647: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221b8a00>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:08:23.051: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420336ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:51:19.711: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211e44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc42038ad60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222fc1c0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221f9000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876
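
[Editor's note] The three SchedulerPredicates failures above share a setup error, not a scheduling error: scheduler_predicates.go:78 waits for namespaces left behind by earlier (often disruptive) specs to finish deleting, and gives up. A sketch of checking what is stuck, assuming a recent client-go (function name is illustrative):

    package e2esketch

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printTerminatingNamespaces lists namespaces stuck in the Terminating
    // phase; with a NotReady node, pod cleanup stalls and namespace
    // deletion stalls with it.
    func printTerminatingNamespaces(c kubernetes.Interface) error {
        nss, err := c.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, ns := range nss.Items {
            if ns.Status.Phase == v1.NamespaceTerminating {
                fmt.Printf("namespace %s is still terminating\n", ns.Name)
            }
        }
        return nil
    }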

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 23:21:30.657: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420758ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 20:02:22.697: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212b38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:54:33.303: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d7c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 30 19:05:16.904: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42199c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1643/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc420346790>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 21:58:39.258: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422a8a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f84a80>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 21:50:36.994: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d80c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 23:22:19.858: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422dc4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 23:19:03.461: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4231e6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 22:05:07.419: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d99678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 22:08:20.477: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422632c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 22:01:52.012: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc423266c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Mar 31 21:19:00.868: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 31 21:54:09.401: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d77678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1649/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-8d4f6560-npkx\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-8d4f6560-npkx" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:35:26.825: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220aa000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:08:46.122: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422530a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:34:48.397: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422588a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:55:56.638: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b35400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:59:51.144: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215b7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36794

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:15:29.482: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42212ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:55:32.870: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217eca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:02:20.039: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e1d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:32:15.803: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e00000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:21:35.437: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42258ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:55:58.653: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d40a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:35:54.030: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222fe000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4224d72c0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-8d4f6560-npkx boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-8d4f6560-npkx boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
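
[Editor's note] The restart test decides a node actually rebooted by watching its boot ID, which the kubelet reports in node status (on Linux, sourced from /proc/sys/kernel/random/boot_id); here gke-bootstrap-e2e-default-pool-8d4f6560-npkx never reported a new one within the timeout. A sketch of reading that field, assuming the same client-go setup as in the earlier sketches:

    package e2esketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printBootID prints the boot ID the kubelet currently reports for a
    // node; the restart test passes only once this value changes after
    // the reboot.
    func printBootID(c kubernetes.Interface, nodeName string) error {
        node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Println(node.Status.NodeInfo.BootID)
        return nil
    }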

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:45:49.306: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b34000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:49:02.541: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220daa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:05:31.352: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220b3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36109

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:14:58.531: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219b8a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 22:08:27.284: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421679400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 22:05:15.448: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42143a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38083

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:52:19.371: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42098ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224e7f10>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-8d4f6560-npkx gke-bootstrap-e2e-default-pool-8d4f6560-npkx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:29 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:07:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:29 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-pswlz                                 gke-bootstrap-e2e-default-pool-8d4f6560-npkx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:24:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:24:36 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:24:19 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-8d4f6560-npkx            gke-bootstrap-e2e-default-pool-8d4f6560-npkx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:31 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:29 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-8d4f6560-npkx gke-bootstrap-e2e-default-pool-8d4f6560-npkx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:29 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:07:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:29 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-pswlz                                 gke-bootstrap-e2e-default-pool-8d4f6560-npkx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:24:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:24:36 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:24:19 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-8d4f6560-npkx            gke-bootstrap-e2e-default-pool-8d4f6560-npkx Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:31 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 18:06:29 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:30:01.160: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213cca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:11:59.249: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215b7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Expected error:
    <*errors.errorString | 0xc4203acd20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36271

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:39:08.376: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421074a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:38:40.245: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d40a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 22:11:40.421: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c8d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc422126910>: {
        s: "service verification failed for: 10.75.243.39\nexpected [service1-dplhw service1-glvxv service1-kd433]\nreceived []",
    }
    service verification failed for: 10.75.243.39
    expected [service1-dplhw service1-glvxv service1-kd433]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288
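
Here the backend pods existed but the service VIP answered with nothing: "received []" means no request through 10.75.243.39 reached any endpoint after kube-proxy restarted. The test's verification is essentially a hostname-collection loop against a serve_hostname service. A hedged, self-contained sketch of that kind of check (hypothetical helper and signature, not the framework's code):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// verifyServeHostname curls the service IP repeatedly, collects the hostnames
// the serve_hostname pods return, and compares the set against the expected
// pod names. Simplified sketch of the verification behind the failure above.
func verifyServeHostname(ip string, port, attempts int, expected []string) error {
	seen := map[string]bool{}
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(fmt.Sprintf("http://%s:%d/", ip, port))
		if err != nil {
			continue // proxy rules may still be converging after the restart
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}
	var got []string
	for name := range seen {
		got = append(got, name)
	}
	for _, name := range expected {
		if !seen[name] {
			return fmt.Errorf("service verification failed for: %s\nexpected %v\nreceived %v", ip, expected, got)
		}
	}
	return nil
}

func main() {
	// "received []" in the log means no endpoint answered at all.
	err := verifyServeHostname("10.75.243.39", 80, 10,
		[]string{"service1-dplhw", "service1-glvxv", "service1-kd433"})
	fmt.Println(err)
}
```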

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:42:37.949: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211bea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:18:22.168: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219c8a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:58:46.568: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219b9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:24:52.095: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4228c6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:00:35.671: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e2c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 22:14:58.620: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c80a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 21:52:24.314: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42156a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:49:02.692: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42258c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 20:28:47.542: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e24000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:59:11.567: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bb3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:52:47.288: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214aa000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1652/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc421bc1e20>: {
        s: "expected pod \"pod-95d80f8d-18c4-11e7-be1d-0242ac110006\" success: gave up waiting for pod 'pod-95d80f8d-18c4-11e7-be1d-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-95d80f8d-18c4-11e7-be1d-0242ac110006" success: gave up waiting for pod 'pod-95d80f8d-18c4-11e7-be1d-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #34658
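
These EmptyDir conformance specs run a one-shot pod and wait up to 5m0s for it to reach phase Succeeded (or Failed); "gave up waiting" means the pod was still Pending or Running at the deadline, typically because the node went unready or the image never pulled. The wait has roughly this shape (a sketch under the same client-go assumptions as above, not the framework's code):

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFailure polls a pod's phase for up to five minutes,
// the same shape as the "success or failure" helper that gave up above.
func waitForPodSuccessOrFailure(cs kubernetes.Interface, ns, name string) error {
	return wait.Poll(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // done: the test pod exited cleanly
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending or Running; keep polling
	})
}
```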

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42043b4c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc421970340>: {
        s: "expected pod \"pod-314a9162-18be-11e7-be1d-0242ac110006\" success: gave up waiting for pod 'pod-314a9162-18be-11e7-be1d-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-314a9162-18be-11e7-be1d-0242ac110006" success: gave up waiting for pod 'pod-314a9162-18be-11e7-be1d-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29224 #32008 #37564

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1791/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203acb70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203acb70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1797/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4227ac000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42038ace0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421904030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc422528850>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288
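
"Only 2 pods started out of 3" comes from the test's setup rather than the proxy check itself: it creates a three-replica serve_hostname RC and counts Running pods before it ever touches the service. The count is roughly this (hypothetical helper, simplified):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countRunning reports how many pods matching the selector are in phase
// Running; the test compares this against the RC's replica count and fails
// with "Only N pods started out of 3" when they disagree at the deadline.
func countRunning(cs kubernetes.Interface, ns, selector string) (int, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return 0, err
	}
	n := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			n++
		}
	}
	return n, nil
}
```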

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc42038ace0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc422b0c000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc42038ace0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc423236a40>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421928050>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc422360e90>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc42038ace0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc42038ace0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc42038ace0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

@k8s-github-robot


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1802/
Multiple broken tests:

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:10:02.765: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ceea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:24:55.945: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221be000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:18:35.308: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c9ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 00:46:54.602: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202bd400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:24:21.275: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421691400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:13:15.912: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a4e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:43:06.545: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421548a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:53:19.390: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215be000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:12:04.197: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ab0000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:27:53.822: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e2ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:15:17.535: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42123aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:35:49.197: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c80a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:37:35.952: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f70a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:28:24.278: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c9ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:00:04.034: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c74000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:05:23.758: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421245400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203c4ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:34:24.269: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421874000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:41:37.823: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422301400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:16:29.482: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213f6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 00:53:24.179: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d42000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:22:19.808: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42124f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31873

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:20:49.180: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c8000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 00:40:30.155: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:31:37.262: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ba2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:46:50.986: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215e2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:08:35.816: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201c1400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 23:26:07.985: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fae000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Apr 14 21:47:03.128: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42204e000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071
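
The serial SchedulerPredicates specs wait for namespaces left behind by earlier specs to finish deleting before they run; this failure is that wait expiring, usually because a namespace is wedged in phase Terminating. A quick way to see which ones (sketch only, same client-go assumptions as above):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// terminatingNamespaces lists namespaces stuck in phase Terminating; the
// pre-test wait that timed out above is waiting for this list to drain.
func terminatingNamespaces(cs kubernetes.Interface) ([]string, error) {
	nss, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var stuck []string
	for _, ns := range nss.Items {
		if ns.Status.Phase == corev1.NamespaceTerminating {
			stuck = append(stuck, ns.Name)
		}
	}
	return stuck, nil
}
```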

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 00:43:41.472: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421154a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:50:06.350: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421afe000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:39:17.299: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215a0000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 14 22:46:48.948: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e77400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 00:37:11.996: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a8d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 15 01:31:07.293: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c9ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1804/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203a6200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Apr 15 13:57:00.994: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr 15 15:42:21.120: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-test/1815/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Apr 19 05:39:07.560: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219acdf0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93
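
Most failures in this run trace back to the single unready pod visible in the dump: kubernetes-dashboard-3543765157-g7hdp stuck Pending with ContainersNotReady, which trips the framework's kube-system health gate in every subsequent serial spec. That gate is roughly equivalent to the following (hedged sketch, same client-go assumptions as above):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// unreadyKubeSystemPods mirrors the "N / M pods in namespace kube-system are
// NOT in RUNNING and READY state" pre-check: a pod counts as healthy only if
// its phase is Running and its Ready condition is True.
func unreadyKubeSystemPods(cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if p.Status.Phase != corev1.PodRunning || !ready {
			bad = append(bad, p.Name)
		}
	}
	return bad, nil
}
```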

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42259dc50>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e19800>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4236043e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217a7580>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:88
Expected error:
    <*errors.errorString | 0xc422584070>: {
        s: "Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:52

Issues about this test specifically: #26191

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc422d4c9c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221ca6a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nkubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    kubernetes-dashboard-3543765157-g7hdp gke-bootstrap-e2e-default-pool-19c19224-ng05 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-19 05:06:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42039eec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67
