
ci-kubernetes-e2e-gci-gke-release-1.5: broken test run #37895

Closed

k8s-github-robot opened this issue Dec 2, 2016 · 112 comments

Labels:
- area/test-infra
- kind/flake: Categorizes issue or PR as related to a flaky test.
- priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/929/

Run so broken it didn't make JUnit output!

k8s-github-robot added the kind/flake, priority/backlog (Higher priority than priority/awaiting-more-evidence), and area/test-infra labels on Dec 2, 2016
k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Dec 3, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/1179/

Multiple broken tests:

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058

Failed: DiffResources {e2e.go}

Error: 20 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-4cc678c4  n1-standard-2               2016-12-06T03:43:59.189-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-4cc678c4-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-4cc678c4-20dl  us-central1-a  n1-standard-2               10.240.0.4   130.211.212.134  RUNNING
+gke-bootstrap-e2e-default-pool-4cc678c4-3ilj  us-central1-a  n1-standard-2               10.240.0.2   104.154.180.80   RUNNING
+gke-bootstrap-e2e-default-pool-4cc678c4-sp8f  us-central1-a  n1-standard-2               10.240.0.3   104.155.171.125  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-4cc678c4-20dl  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-4cc678c4-3ilj  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-4cc678c4-sp8f  us-central1-a  100      pd-standard  READY
+default-route-a090a51d0ca08a3e                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+default-route-b0fd179850da4d66                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+gke-bootstrap-e2e-13211a98-accd28db-bba9-11e6-8943-42010af00032  bootstrap-e2e  10.96.0.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-4cc678c4-3ilj  1000
+gke-bootstrap-e2e-13211a98-af64eaed-bba9-11e6-8943-42010af00032  bootstrap-e2e  10.96.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-4cc678c4-20dl  1000
+gke-bootstrap-e2e-13211a98-b012a392-bba9-11e6-8943-42010af00032  bootstrap-e2e  10.96.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-4cc678c4-sp8f  1000
+gke-bootstrap-e2e-13211a98-all  bootstrap-e2e  10.96.0.0/14       tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-13211a98-ssh  bootstrap-e2e  130.211.200.26/32  tcp:22                                  gke-bootstrap-e2e-13211a98-node
+gke-bootstrap-e2e-13211a98-vms  bootstrap-e2e  10.240.0.0/16      tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-13211a98-node

Issues about this test specifically: #33373 #33416 #34060
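For context on how a DiffResources failure is produced: the step snapshots the project's resource listings before the run and again after teardown, and any `+`-prefixed lines in the diff are resources the job leaked. A minimal sketch of that idea, with stand-in data in place of the real `gcloud` listings (the commented-out `gcloud` invocations and resource names are illustrative, not this job's actual configuration):

```shell
# Sketch of a DiffResources-style leak check: listings are taken before
# and after the run; "+" lines in the unified diff are leaked resources.
before=$(mktemp) && after=$(mktemp)

# Before the test run (illustrative):
# gcloud compute instances list --format='value(name)' | sort > "$before"
printf 'node-a\nnode-b\n' > "$before"            # stand-in data

# After teardown; node-c was never deleted (illustrative):
# gcloud compute instances list --format='value(name)' | sort > "$after"
printf 'node-a\nnode-b\nnode-c\n' > "$after"     # stand-in data

# Keep only added lines, skipping the "+++" diff header, and strip the "+":
diff -u "$before" "$after" | grep '^+[^+]' | sed 's/^+//'
```

With the stand-in data above, only `node-c` is reported as leaked, mirroring the `+`-prefixed instance, disk, route, and firewall rows in the report.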

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/1242/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 20 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-0e0093a0  n1-standard-2               2016-12-07T08:17:14.743-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-0e0093a0-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-0e0093a0-4eyf  us-central1-a  n1-standard-2               10.240.0.4   104.197.45.91    RUNNING
+gke-bootstrap-e2e-default-pool-0e0093a0-8hqy  us-central1-a  n1-standard-2               10.240.0.3   107.178.213.201  RUNNING
+gke-bootstrap-e2e-default-pool-0e0093a0-me1b  us-central1-a  n1-standard-2               10.240.0.2   35.184.34.182    RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-0e0093a0-4eyf  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0e0093a0-8hqy  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0e0093a0-me1b  us-central1-a  100      pd-standard  READY
+default-route-347bfcacfd5694cf                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+default-route-eef2eaeb8c771858                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+gke-bootstrap-e2e-c2116991-09236c23-bc99-11e6-a179-42010af00037  bootstrap-e2e  10.96.0.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e0093a0-8hqy  1000
+gke-bootstrap-e2e-c2116991-0a3092a6-bc99-11e6-a179-42010af00037  bootstrap-e2e  10.96.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e0093a0-4eyf  1000
+gke-bootstrap-e2e-c2116991-0abff657-bc99-11e6-a179-42010af00037  bootstrap-e2e  10.96.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e0093a0-me1b  1000
+gke-bootstrap-e2e-c2116991-all  bootstrap-e2e  10.96.0.0/14      tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-c2116991-ssh  bootstrap-e2e  35.184.43.127/32  tcp:22                                  gke-bootstrap-e2e-c2116991-node
+gke-bootstrap-e2e-c2116991-vms  bootstrap-e2e  10.240.0.0/16     tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-c2116991-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/1249/

Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 20 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-0e4b39e4  n1-standard-2               2016-12-07T11:59:44.205-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-0e4b39e4-grp  us-central1-a  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+gke-bootstrap-e2e-default-pool-0e4b39e4-30el  us-central1-a  n1-standard-2               10.240.0.3   104.154.187.88  RUNNING
+gke-bootstrap-e2e-default-pool-0e4b39e4-b7l4  us-central1-a  n1-standard-2               10.240.0.4   130.211.228.43  RUNNING
+gke-bootstrap-e2e-default-pool-0e4b39e4-qurm  us-central1-a  n1-standard-2               10.240.0.2   35.184.71.164   RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-0e4b39e4-30el  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0e4b39e4-b7l4  us-central1-a  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0e4b39e4-qurm  us-central1-a  100      pd-standard  READY
+default-route-cd9dd40ead2a5f56                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+default-route-e0df3df882416eda                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+gke-bootstrap-e2e-424dd421-071d9f85-bcb8-11e6-8001-42010af00035  bootstrap-e2e  10.96.0.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e4b39e4-b7l4  1000
+gke-bootstrap-e2e-424dd421-08d0d65d-bcb8-11e6-8001-42010af00035  bootstrap-e2e  10.96.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e4b39e4-30el  1000
+gke-bootstrap-e2e-424dd421-0ae99c40-bcb8-11e6-8001-42010af00035  bootstrap-e2e  10.96.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-0e4b39e4-qurm  1000
+gke-bootstrap-e2e-424dd421-all  bootstrap-e2e  10.96.0.0/14     icmp,esp,ah,sctp,tcp,udp
+gke-bootstrap-e2e-424dd421-ssh  bootstrap-e2e  35.184.66.49/32  tcp:22                                  gke-bootstrap-e2e-424dd421-node
+gke-bootstrap-e2e-424dd421-vms  bootstrap-e2e  10.240.0.0/16    tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-424dd421-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/3708/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420415ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 20 10:14:24.100: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.144:8080/dial?request=hostName&protocol=http&host=10.96.1.108&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375
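For context on the `map[]` vs `map[netserver-1:{}]` output above: the test curls one pod's netserver `/dial` endpoint, which dials the target pod and reports which hostnames answered; the failure means the expected hostname never appeared in the response. A minimal sketch of that comparison, with a stand-in JSON response in place of the real curl (the real test issues the curl from inside a pod):

```shell
# Sketch of the intra-pod connectivity check: the /dial response should
# contain the target pod's hostname; an empty response set is a failure.
expected="netserver-1"

# Real check (illustrative URL, run from inside the client pod):
# response=$(curl -qs 'http://10.96.2.144:8080/dial?request=hostName&protocol=http&host=10.96.1.108&port=8080&tries=1')
response='{"responses":[]}'                      # what a failing run retrieves

if echo "$response" | grep -q "\"$expected\""; then
  echo "endpoint $expected reachable"
else
  echo "missing expected endpoint: $expected"
fi
```

With the stand-in failing response, the check reports the missing `netserver-1` endpoint, matching the `retrieved map[] / expected map[netserver-1:{}]` output in the log.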

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc420348c90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:169
Jan 20 10:07:44.321: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-cnv7v to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1603

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc4203a29e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36970

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/3718/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203fa950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420ac8830>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42023e040>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: total pods available: 2, less than the min required: 3",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: total pods available: 2, less than the min required: 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/3773/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 21 14:53:03.052: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.111:8080/dial?request=hostName&protocol=udp&host=10.96.1.135&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:261
kubelet never observed the termination notice
Expected error:
    <*errors.errorString | 0xc42043d920>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:231

Issues about this test specifically: #26224 #34354

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc421215340>: {
        s: "1 containers failed which is more than allowed 0",
    }
    1 containers failed which is more than allowed 0
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203d1690>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Jan 21 14:53:57.440: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Jan 21 15:00:08.699: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 21 14:56:19.580: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.180.174 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-network-44b08 e2e-net-client -- /bin/sh -c curl -X POST http://localhost:11301/run/nat-closewait-client -d '{\"RemoteAddr\":\"10.240.0.2:11302\",\"TimeoutSeconds\":10,\"PostFinTimeoutSeconds\":0,\"LeakConnection\":true}' 2>/dev/null] []  <nil>  error: Internal error occurred: error executing command in container: container not found (\"e2e-net-client\")\n [] <nil> 0xc420ea0ab0 exit status 1 <nil> <nil> true [0xc4204f62c0 0xc4204f6538 0xc4204f65d8] [0xc4204f62c0 0xc4204f6538 0xc4204f65d8] [0xc4204f6520 0xc4204f6578] [0x9728b0 0x9728b0] 0xc420cf6540 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Internal error occurred: error executing command in container: container not found (\"e2e-net-client\")\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.180.174 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-network-44b08 e2e-net-client -- /bin/sh -c curl -X POST http://localhost:11301/run/nat-closewait-client -d '{"RemoteAddr":"10.240.0.2:11302","TimeoutSeconds":10,"PostFinTimeoutSeconds":0,"LeakConnection":true}' 2>/dev/null] []  <nil>  error: Internal error occurred: error executing command in container: container not found ("e2e-net-client")
     [] <nil> 0xc420ea0ab0 exit status 1 <nil> <nil> true [0xc4204f62c0 0xc4204f6538 0xc4204f65d8] [0xc4204f62c0 0xc4204f6538 0xc4204f65d8] [0xc4204f6520 0xc4204f6578] [0x9728b0 0x9728b0] 0xc420cf6540 <nil>}:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: container not found ("e2e-net-client")
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3591

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 21 14:54:38.866: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/3918/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:67
Expected error:
    <*errors.StatusError | 0xc42024fe00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1ef8c167-cx5x:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1ef8c167-cx5x:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1ef8c167-cx5x:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:325

Issues about this test specifically: #37435

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Expected error:
    <*errors.errorString | 0xc420a62180>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.212.205 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-wz2h1 pet-1 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: ssh: rejected: connect failed (Connection timed out)\n [] <nil> 0xc420c6c450 exit status 1 <nil> <nil> true [0xc42046e278 0xc42046e290 0xc42046e2b0] [0xc42046e278 0xc42046e290 0xc42046e2b0] [0xc42046e288 0xc42046e2a8] [0x9728b0 0x9728b0] 0xc420e91020 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: ssh: rejected: connect failed (Connection timed out)\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.212.205 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-wz2h1 pet-1 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: ssh: rejected: connect failed (Connection timed out)
     [] <nil> 0xc420c6c450 exit status 1 <nil> <nil> true [0xc42046e278 0xc42046e290 0xc42046e2b0] [0xc42046e278 0xc42046e290 0xc42046e2b0] [0xc42046e288 0xc42046e2a8] [0x9728b0 0x9728b0] 0xc420e91020 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: ssh: rejected: connect failed (Connection timed out)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:88
Expected error:
    <*errors.errorString | 0xc42036d290>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:76

Issues about this test specifically: #26191

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420e78a10>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/3927/
Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420fd4620>: {
        s: "service verification failed for: 10.99.245.170\nexpected [service2-lhtjl service2-q07lj service2-w5cp1]\nreceived [service2-lhtjl service2-w5cp1]",
    }
    service verification failed for: 10.99.245.170
    expected [service2-lhtjl service2-q07lj service2-w5cp1]
    received [service2-lhtjl service2-w5cp1]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331

Issues about this test specifically: #26128 #26685 #33408 #36298
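For context on the "service verification failed" message above: the test hits the service's cluster IP repeatedly, collects the pod names that answer, and requires every expected backend to answer at least once. A minimal sketch of that set comparison, with stand-in data reproducing this failure (the commented-out collection loop and VIP are illustrative):

```shell
# Sketch of the service endpoint verification: every expected backend pod
# must appear in the set of responders; service2-q07lj never answered.
expected="service2-lhtjl service2-q07lj service2-w5cp1"

# Real collection (illustrative):
# received=$(for i in $(seq 1 30); do curl -qs http://10.99.245.170/hostname; echo; done | sort -u)
received="service2-lhtjl service2-w5cp1"         # stand-in data

for pod in $expected; do
  case " $received " in
    *" $pod "*) ;;                               # pod answered at least once
    *) echo "service verification failed for missing endpoint: $pod" ;;
  esac
done
```

With the stand-in data, only `service2-q07lj` is flagged, matching the expected/received lists in the error.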

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.44.145 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-mqhsp execpod-sourceip-gke-bootstrap-e2e-default-pool-12c25a56-rg4bs7 -- /bin/sh -c wget -T 30 -qO- 10.99.252.33:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc4212bc660 exit status 1 <nil> <nil> true [0xc42142a398 0xc42142a3b0 0xc42142a3c8] [0xc42142a398 0xc42142a3b0 0xc42142a3c8] [0xc42142a3a8 0xc42142a3c0] [0x9728b0 0x9728b0] 0xc4212c65a0 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.44.145 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-mqhsp execpod-sourceip-gke-bootstrap-e2e-default-pool-12c25a56-rg4bs7 -- /bin/sh -c wget -T 30 -qO- 10.99.252.33:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc4212bc660 exit status 1 <nil> <nil> true [0xc42142a398 0xc42142a3b0 0xc42142a3c8] [0xc42142a398 0xc42142a3b0 0xc42142a3c8] [0xc42142a3a8 0xc42142a3c0] [0x9728b0 0x9728b0] 0xc4212c65a0 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 24 11:32:42.839: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc42128ec00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.1.111:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.1.111:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.1.111:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/3929/
Multiple broken tests:

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Jan 24 12:51:16.171: Could not reach HTTP service through 104.155.155.140:30824 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2530

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420dc1810>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203c22a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.StatusError | 0xc42119e680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.0.157:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.0.157:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.0.157:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc42098e0a0>: {
        s: "service verification failed for: 10.99.255.214\nexpected [service1-87n86 service1-gjsv7 service1-n8lsn]\nreceived [service1-87n86 wget: download timed out]",
    }
    service verification failed for: 10.99.255.214
    expected [service1-87n86 service1-gjsv7 service1-n8lsn]
    received [service1-87n86 wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 24 12:54:41.442: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 24 12:56:51.496: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.146:8080/dial?request=hostName&protocol=http&host=10.96.1.100&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected
    <time.Duration>: 107274174312
to be <
    <time.Duration>: 30000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.242.10 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-d0tfv execpod-sourceip-gke-bootstrap-e2e-default-pool-66b7ad5a-k4h5fr -- /bin/sh -c wget -T 30 -qO- 10.99.245.19:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc4213da960 exit status 1 <nil> <nil> true [0xc4200363a8 0xc420036480 0xc420036660] [0xc4200363a8 0xc420036480 0xc420036660] [0xc420036408 0xc420036588] [0x9728b0 0x9728b0] 0xc4212f4a80 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.242.10 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-d0tfv execpod-sourceip-gke-bootstrap-e2e-default-pool-66b7ad5a-k4h5fr -- /bin/sh -c wget -T 30 -qO- 10.99.245.19:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc4213da960 exit status 1 <nil> <nil> true [0xc4200363a8 0xc420036480 0xc420036660] [0xc4200363a8 0xc420036480 0xc420036660] [0xc420036408 0xc420036588] [0x9728b0 0x9728b0] 0xc4212f4a80 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 24 12:58:23.848: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.209:8080/dial?request=hostName&protocol=udp&host=10.96.1.114&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/3991/
Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420ce2c60>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-25 17:52:14 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-25 17:52:44 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-25 17:52:14 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.96.1.157 StartTime:2017-01-25 17:52:14 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420c25b90} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://a124097870a9944b7a0508d671db3367f201dc1321dd5b11382a6eda5d8c874a}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-25 17:52:14 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-25 17:52:44 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-25 17:52:14 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.96.1.157 StartTime:2017-01-25 17:52:14 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420c25b90} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://a124097870a9944b7a0508d671db3367f201dc1321dd5b11382a6eda5d8c874a}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 25 17:56:46.464: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.178:8080/dial?request=hostName&protocol=http&host=10.96.2.127&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 25 17:57:33.323: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 25 18:17:25.588: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42034ec80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4006/
Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420ae1a00>: {
        s: "service verification failed for: 10.99.240.220\nexpected [service2-js52b service2-pqkl1 service2-wk0zb]\nreceived [service2-js52b service2-wk0zb]",
    }
    service verification failed for: 10.99.240.220
    expected [service2-js52b service2-pqkl1 service2-wk0zb]
    received [service2-js52b service2-wk0zb]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc4209d3880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.1.245:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.1.245:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.1.245:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0 (0; 2m7.239952883s): path /api/v1/namespaces/e2e-tests-proxy-8shzj/pods/https:proxy-service-wr9n7-wklxl:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.2.158:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.2.158:443/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.259957316s): path /api/v1/namespaces/e2e-tests-proxy-8shzj/pods/proxy-service-wr9n7-wklxl/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.158:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.2.158:80/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 2m7.230637764s): path /api/v1/namespaces/e2e-tests-proxy-8shzj/pods/https:proxy-service-wr9n7-wklxl:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.2.158:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.2.158:443/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 2m7.227885095s): path /api/v1/namespaces/e2e-tests-proxy-8shzj/pods/https:proxy-service-wr9n7-wklxl:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.2.158:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.2.158:443/' }],RetryAfterSeconds:0,} Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420477c10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc4203e51c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc420e16230>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Jan 26 01:32:14.494: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc4209add20>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 01:27:24 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 01:27:55 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 01:27:24 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.2.149 StartTime:2017-01-26 01:27:24 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420c378f0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://745c0dcc320e7109a8e7b63e39cccc7977b1f3731a31fb3a63ced73cafccbc33}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 01:27:24 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 01:27:55 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-26 01:27:24 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.2.149 StartTime:2017-01-26 01:27:24 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420c378f0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://745c0dcc320e7109a8e7b63e39cccc7977b1f3731a31fb3a63ced73cafccbc33}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected
    <int>: 1
to equal
    <int>: 42
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:463

Issues about this test specifically: #31151 #35586

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Expected error:
    <*errors.errorString | 0xc42042cff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:346

Issues about this test specifically: #38083

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 26 01:42:48.951: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4018/
Multiple broken tests:

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 07:32:45.147: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421676000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc420413dc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 07:33:17.163: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421294a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37056

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420d9c290>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected success, but got an error:
    <*errors.errorString | 0xc420fe42a0>: {
        s: "failed running \"mkdir /tmp/node-problem-detector-1208ff5f-e3dc-11e6-bd0e-0242ac11000a; > /tmp/node-problem-detector-1208ff5f-e3dc-11e6-bd0e-0242ac11000a/test.log\": error getting SSH client to jenkins@104.154.188.233:22: 'dial tcp 104.154.188.233:22: getsockopt: connection timed out' (exit code 0)",
    }
    failed running "mkdir /tmp/node-problem-detector-1208ff5f-e3dc-11e6-bd0e-0242ac11000a; > /tmp/node-problem-detector-1208ff5f-e3dc-11e6-bd0e-0242ac11000a/test.log": error getting SSH client to jenkins@104.154.188.233:22: 'dial tcp 104.154.188.233:22: getsockopt: connection timed out' (exit code 0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:156

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 26 07:38:52.270: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.154:8080/dial?request=hostName&protocol=udp&host=10.96.0.135&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:57
Expected error:
    <*errors.errorString | 0xc420982030>: {
        s: "Timed out when waiting for proxy to gather metrics from gke-bootstrap-e2e-default-pool-9ef4760e-5hsl",
    }
    Timed out when waiting for proxy to gather metrics from gke-bootstrap-e2e-default-pool-9ef4760e-5hsl
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:55

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 26 07:32:20.989: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421575400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4019/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 26 08:24:47.255: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.103:8080/dial?request=hostName&protocol=http&host=10.96.1.78&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.227.202 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-qh5zj execpod-sourceip-gke-bootstrap-e2e-default-pool-86ba72f8-dmp2c3 -- /bin/sh -c wget -T 30 -qO- 10.99.254.251:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc421488450 exit status 1 <nil> <nil> true [0xc420cbe008 0xc420cbe020 0xc420cbe038] [0xc420cbe008 0xc420cbe020 0xc420cbe038] [0xc420cbe018 0xc420cbe030] [0x9728b0 0x9728b0] 0xc42125c360 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.227.202 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-qh5zj execpod-sourceip-gke-bootstrap-e2e-default-pool-86ba72f8-dmp2c3 -- /bin/sh -c wget -T 30 -qO- 10.99.254.251:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc421488450 exit status 1 <nil> <nil> true [0xc420cbe008 0xc420cbe020 0xc420cbe038] [0xc420cbe008 0xc420cbe020 0xc420cbe038] [0xc420cbe018 0xc420cbe030] [0x9728b0 0x9728b0] 0xc42125c360 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Jan 26 08:24:19.644: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:149

Issues about this test specifically: #32087

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*strconv.NumError | 0xc4212eb6e0>: {
        Func: "ParseInt",
        Num: "",
        Err: {s: "invalid syntax"},
    }
    strconv.ParseInt: parsing "": invalid syntax
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:192

Issues about this test specifically: #36288 #36913
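The `strconv.ParseInt: parsing "": invalid syntax` failure above happens when the CLOSE_WAIT test extracts a timeout field from command output and the match comes back empty. A minimal Go sketch, with a hypothetical helper name (not the e2e framework's actual function), of guarding the parse so an empty match surfaces as a readable error instead:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTimeoutField parses a numeric field scraped from command output.
// An empty match is reported explicitly rather than being passed straight
// to strconv.ParseInt, which fails with "invalid syntax" on "".
func parseTimeoutField(raw string) (int64, error) {
	trimmed := strings.TrimSpace(raw)
	if trimmed == "" {
		return 0, fmt.Errorf("no timeout value found in output %q", raw)
	}
	return strconv.ParseInt(trimmed, 10, 64)
}

func main() {
	if _, err := parseTimeoutField(""); err != nil {
		fmt.Println("guarded:", err)
	}
	v, _ := parseTimeoutField(" 3600 ")
	fmt.Println("parsed:", v)
}
```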

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420aac2f0>: {
        s: "service verification failed for: 10.99.242.155\nexpected [service2-dtv98 service2-qktbb service2-zrs4w]\nreceived [service2-dtv98 service2-zrs4w]",
    }
    service verification failed for: 10.99.242.155
    expected [service2-dtv98 service2-qktbb service2-zrs4w]
    received [service2-dtv98 service2-zrs4w]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:340

Issues about this test specifically: #26128 #26685 #33408 #36298
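The "service verification failed" message above lists the expected backend pods against the ones that actually answered through the cluster IP; the flake is one pod (`service2-qktbb`) never responding. A sketch of the set difference behind that message (helper name is illustrative, not the e2e framework's):

```go
package main

import (
	"fmt"
	"sort"
)

// missingEndpoints returns the expected pod names that never answered,
// i.e. the difference behind "expected [...] received [...]" in the log.
func missingEndpoints(expected, received []string) []string {
	seen := map[string]bool{}
	for _, name := range received {
		seen[name] = true
	}
	var missing []string
	for _, name := range expected {
		if !seen[name] {
			missing = append(missing, name)
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	expected := []string{"service2-dtv98", "service2-qktbb", "service2-zrs4w"}
	received := []string{"service2-dtv98", "service2-zrs4w"}
	fmt.Println("missing:", missingEndpoints(expected, received)) // missing: [service2-qktbb]
}
```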

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc420bbc530>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc4209e4450>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Jan 26 08:35:21.892: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc42042f940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #37056

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Jan 26 08:24:55.742: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #32023

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Jan 26 08:27:20.327: Could not reach HTTP service through 35.184.22.85:31713 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2530

Issues about this test specifically: #28064 #28569 #34036

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4023/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Jan 26 10:33:52.437: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc421412080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'ssh: rejected: connect failed (Connection timed out)'\\nTrying to reach: 'http://10.96.2.158:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.158:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.158:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #29933 #34111 #38765

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421144550>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 26 10:34:38.289: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*strconv.NumError | 0xc421434930>: {
        Func: "ParseInt",
        Num: "",
        Err: {s: "invalid syntax"},
    }
    strconv.ParseInt: parsing "": invalid syntax
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:192

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 26 10:31:42.269: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.2.151 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Jan 26 10:48:29.984: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4024/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 26 11:35:03.959: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.164:8080/dial?request=hostName&protocol=http&host=10.96.1.133&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc4218ba540>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 26 11:48:04.812: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Jan 26 11:39:07.979: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203aac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420388c00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4206f2500>: {
        s: "service verification failed for: 10.99.250.47\nexpected [service1-0c8rs service1-97bfp service1-v39m6]\nreceived [service1-0c8rs service1-v39m6]",
    }
    service verification failed for: 10.99.250.47
    expected [service1-0c8rs service1-97bfp service1-v39m6]
    received [service1-0c8rs service1-v39m6]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:253
Jan 26 11:28:27.827: Timed out waiting for service multi-endpoint-test in namespace e2e-tests-services-r9z9x to expose endpoints map[pod1:[100]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1690

Issues about this test specifically: #29831

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 26 11:32:52.857: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.1.136 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Expected error:
    <*errors.errorString | 0xc420d09210>: {
        s: "failed to execute touch /data/1485458831779241262, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.165.100 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-w7s7f pet-0 -- /bin/sh -c touch /data/1485458831779241262] []  <nil>  error: Timeout occured\n [] <nil> 0xc42100e690 exit status 1 <nil> <nil> true [0xc420e22120 0xc420e22138 0xc420e22150] [0xc420e22120 0xc420e22138 0xc420e22150] [0xc420e22130 0xc420e22148] [0x9728b0 0x9728b0] 0xc42103ad20 <nil>}:\nCommand stdout:\n\nstderr:\nerror: Timeout occured\n\nerror:\nexit status 1\n",
    }
    failed to execute touch /data/1485458831779241262, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.165.100 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-w7s7f pet-0 -- /bin/sh -c touch /data/1485458831779241262] []  <nil>  error: Timeout occured
     [] <nil> 0xc42100e690 exit status 1 <nil> <nil> true [0xc420e22120 0xc420e22138 0xc420e22150] [0xc420e22120 0xc420e22138 0xc420e22150] [0xc420e22130 0xc420e22148] [0x9728b0 0x9728b0] 0xc42103ad20 <nil>}:
    Command stdout:
    
    stderr:
    error: Timeout occured
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #37361 #37919

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4043/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc4203fbf60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 26 21:01:13.151: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.157:8080/dial?request=hostName&protocol=udp&host=10.96.0.108&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc4212e5180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.1.211:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.1.211:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.1.211:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 26 21:03:59.966: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4204128e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4049/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc4208eb140>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Jan 27 00:00:07.465: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc42041b580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 27 00:07:35.593: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.180:8080/dial?request=hostName&protocol=udp&host=10.96.2.135&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.3.99 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-pr97c execpod-sourceip-gke-bootstrap-e2e-default-pool-dd3810f9-gv8glb -- /bin/sh -c wget -T 30 -qO- 10.99.241.125:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc42146ce70 exit status 1 <nil> <nil> true [0xc4203d63b8 0xc4203d63f0 0xc4203d6450] [0xc4203d63b8 0xc4203d63f0 0xc4203d6450] [0xc4203d63d0 0xc4203d6438] [0x9728b0 0x9728b0] 0xc421440d80 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.3.99 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-pr97c execpod-sourceip-gke-bootstrap-e2e-default-pool-dd3810f9-gv8glb -- /bin/sh -c wget -T 30 -qO- 10.99.241.125:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc42146ce70 exit status 1 <nil> <nil> true [0xc4203d63b8 0xc4203d63f0 0xc4203d6450] [0xc4203d63b8 0xc4203d63f0 0xc4203d6450] [0xc4203d63d0 0xc4203d6438] [0x9728b0 0x9728b0] 0xc421440d80 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Jan 26 23:47:58.851: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #32023

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Jan 27 00:03:23.015: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.96.2.133:8080/hostName
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 26 23:48:09.334: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.2.128 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4059/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected
    <time.Duration>: 32329128285
to be <
    <time.Duration>: 30000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Jan 27 05:01:31.518: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Jan 27 05:03:47.838: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:149

Issues about this test specifically: #26870 #36429

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4215ee070>: {
        s: "service verification failed for: 10.99.240.120\nexpected [service1-24z3f service1-5sm39 service1-css31]\nreceived [service1-5sm39 service1-css31 wget: download timed out]",
    }
    service verification failed for: 10.99.240.120
    expected [service1-24z3f service1-5sm39 service1-css31]
    received [service1-5sm39 service1-css31 wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc4206a2280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.2.132:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.2.132:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.2.132:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203fa9b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.136.86 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-05x1r execpod-sourceip-gke-bootstrap-e2e-default-pool-6b71393f-tq54d2 -- /bin/sh -c wget -T 30 -qO- 10.99.243.234:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc42110ccf0 exit status 1 <nil> <nil> true [0xc420454050 0xc420454070 0xc4204540a0] [0xc420454050 0xc420454070 0xc4204540a0] [0xc420454060 0xc420454088] [0x9728b0 0x9728b0] 0xc420f84900 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.136.86 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-05x1r execpod-sourceip-gke-bootstrap-e2e-default-pool-6b71393f-tq54d2 -- /bin/sh -c wget -T 30 -qO- 10.99.243.234:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc42110ccf0 exit status 1 <nil> <nil> true [0xc420454050 0xc420454070 0xc4204540a0] [0xc420454050 0xc420454070 0xc4204540a0] [0xc420454060 0xc420454088] [0x9728b0 0x9728b0] 0xc420f84900 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 27 05:07:32.274: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Jan 27 05:00:39.158: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #30981

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 27 05:15:31.449: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 27 05:04:24.344: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.141:8080/dial?request=hostName&protocol=udp&host=10.96.0.92&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 27 05:10:23.154: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420414eb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc4209c4250>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Jan 27 05:03:01.306: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.96.0.98:8080/hostName
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 27 05:04:24.950: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.121:8080/dial?request=hostName&protocol=http&host=10.96.0.72&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Jan 27 05:05:59.430: Could not reach HTTP service through 104.154.173.87:30577 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2530

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 27 05:01:49.127: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.0.87 8081
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4091/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 27 20:42:04.465: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.112:8080/dial?request=hostName&protocol=udp&host=10.96.0.101&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*strconv.NumError | 0xc4213ea0c0>: {
        Func: "ParseInt",
        Num: "",
        Err: {s: "invalid syntax"},
    }
    strconv.ParseInt: parsing "": invalid syntax
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:192

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc42151c5d0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203bf790>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Jan 27 20:45:22.790: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:149

Issues about this test specifically: #32087

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4211d6380>: {
        s: "service verification failed for: 10.99.254.217\nexpected [service1-0g6gc service1-dgqdq service1-j0s5m]\nreceived [service1-0g6gc service1-dgqdq]",
    }
    service verification failed for: 10.99.254.217
    expected [service1-0g6gc service1-dgqdq service1-j0s5m]
    received [service1-0g6gc service1-dgqdq]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 27 20:51:27.466: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4106/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc42118b980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'ssh: rejected: connect failed (Connection timed out)'\\nTrying to reach: 'http://10.96.1.82:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.82:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.82:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203d0990>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
pod should have a restart count of 0 but got 1
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:73

Issues about this test specifically: #29521

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 28 03:57:56.974: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.120:8080/dial?request=hostName&protocol=udp&host=10.96.1.139&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 28 04:06:30.001: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.1.173 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc420576200>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc42064ac20>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc420c24360>: {
        s: "failed to get logs from pod-secrets-7e630a6c-e54f-11e6-9624-0242ac110002 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-7e630a6c-e54f-11e6-9624-0242ac110002)",
    }
    failed to get logs from pod-secrets-7e630a6c-e54f-11e6-9624-0242ac110002 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-7e630a6c-e54f-11e6-9624-0242ac110002)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29221

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Jan 28 03:52:27.606: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc42032ec40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 28 04:04:05.834: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:226
Expected error:
    <*errors.errorString | 0xc4203fd210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:204

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 28 03:52:11.842: Couldn't delete ns: "e2e-tests-job-tgnq4": namespace e2e-tests-job-tgnq4 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-job-tgnq4 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420eadf40>: {
        s: "1 containers failed which is more than allowed 0",
    }
    1 containers failed which is more than allowed 0
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc420fd2250>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420c88980>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:395

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0: path /api/v1/namespaces/e2e-tests-proxy-2p4ml/pods/proxy-service-zt9bw-dglnn/proxy/ took 3m32.552800617s > 30s
0: path /api/v1/namespaces/e2e-tests-proxy-2p4ml/pods/https:proxy-service-zt9bw-dglnn:443/proxy/ took 3m32.557342374s > 30s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc4208cc890>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4115/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 28 08:45:01.967: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.2.88 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234
Jan 28 08:43:00.610: pod e2e-tests-container-probe-hv5s8/liveness-http - expected number of restarts: 0, found restarts: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:403

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Expected error:
    <*errors.errorString | 0xc4203d16d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:269

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4125/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Expected error:
    <*errors.errorString | 0xc420648eb0>: {
        s: "failed to execute touch /data/1485641816110769526, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.227.202 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-x48gh pet-1 -- /bin/sh -c touch /data/1485641816110769526] []  <nil>  Error from server: error dialing backend: ssh: unexpected packet in response to channel open: <nil>\n [] <nil> 0xc420ff44b0 exit status 1 <nil> <nil> true [0xc42076c000 0xc42076c018 0xc42076c030] [0xc42076c000 0xc42076c018 0xc42076c030] [0xc42076c010 0xc42076c028] [0x9728b0 0x9728b0] 0xc420a4a4e0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: ssh: unexpected packet in response to channel open: <nil>\n\nerror:\nexit status 1\n",
    }
    failed to execute touch /data/1485641816110769526, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.227.202 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-x48gh pet-1 -- /bin/sh -c touch /data/1485641816110769526] []  <nil>  Error from server: error dialing backend: ssh: unexpected packet in response to channel open: <nil>
     [] <nil> 0xc420ff44b0 exit status 1 <nil> <nil> true [0xc42076c000 0xc42076c018 0xc42076c030] [0xc42076c000 0xc42076c018 0xc42076c030] [0xc42076c010 0xc42076c028] [0x9728b0 0x9728b0] 0xc420a4a4e0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: ssh: unexpected packet in response to channel open: <nil>
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Jan 28 14:29:18.860: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420dd2b00>: {
        s: "service verification failed for: 10.99.241.234\nexpected [service1-3469d service1-h35m5 service1-lrdp4]\nreceived [service1-3469d service1-lrdp4]",
    }
    service verification failed for: 10.99.241.234
    expected [service1-3469d service1-h35m5 service1-lrdp4]
    received [service1-3469d service1-lrdp4]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Jan 28 14:24:40.368: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 28 14:25:37.132: Entry to guestbook wasn't correctly added in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1590

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc4211ee3e0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc421099b90>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-28 14:33:21 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-28 14:33:52 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-28 14:33:21 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.2.203 StartTime:2017-01-28 14:33:21 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420de22a0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://366c4848dbad322aca89dd1bb0f98b4fa7401fa4be9bc404c49fee3787509a3c}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-28 14:33:21 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-28 14:33:52 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-28 14:33:21 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.2.203 StartTime:2017-01-28 14:33:21 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420de22a0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://366c4848dbad322aca89dd1bb0f98b4fa7401fa4be9bc404c49fee3787509a3c}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4126/
Multiple broken tests:

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154
Timed out after 300.000s.
Expected
    <string>: content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    
to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:153

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:253
Jan 28 15:12:59.940: Timed out waiting for service multi-endpoint-test in namespace e2e-tests-services-dbw4f to expose endpoints map[pod1:[100]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1690

Issues about this test specifically: #29831

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Jan 28 15:22:57.154: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 28 15:16:56.990: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.40:8080/dial?request=hostName&protocol=http&host=10.96.2.30&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375
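The "retrieved map[] expected map[netserver-0:{}]" output above comes from the e2e framework curling the `/dial` endpoint and recording which pods answered. A minimal sketch of that aggregation, assuming the netexec `/dial` JSON shape (`{"responses": [...]}`) — this is an illustration, not the actual test code in `networking_utils.go`:

```python
import json

def collect_endpoints(dial_responses):
    """Union the pod hostnames seen across repeated /dial attempts.

    Each response is assumed to be JSON like {"responses": ["netserver-0"]};
    the test fails when an expected hostname never appears within the retry budget.
    """
    seen = set()
    for raw in dial_responses:
        seen.update(json.loads(raw).get("responses", []))
    return seen

# An empty "retrieved map[]" corresponds to no attempt ever reaching the pod:
print(collect_endpoints(['{"responses": []}', '{"responses": []}']))  # set()
```

In the failure above, zero responses were collected, so the retrieved set stayed empty while `netserver-0` was expected.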

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4128/
Multiple broken tests:

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:88
Expected error:
    <*errors.errorString | 0xc4203ce330>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:76

Issues about this test specifically: #26191

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Jan 28 16:29:16.953: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 28 16:27:30.876: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.0.102 8081
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:67
Expected
    <time.Duration>: 104354634194
to be <
    <time.Duration>: 30000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #37435
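The `<time.Duration>` values in this failure are printed by Go as raw nanosecond counts. A quick sketch to decode them (the 30000000000 limit corresponds to the 30s bound asserted at proxy.go:327):

```python
# Go's time.Duration is an int64 nanosecond count; convert to seconds to read the log.
NS_PER_SECOND = 1_000_000_000

def duration_seconds(ns: int) -> float:
    return ns / NS_PER_SECOND

elapsed = duration_seconds(104354634194)  # observed latency, ~104.35 s
limit = duration_seconds(30000000000)     # allowed bound, 30 s
print(f"elapsed {elapsed:.2f}s vs limit {limit:.0f}s")
```

So the proxied cadvisor request took roughly 104 seconds against a 30-second budget, consistent with the SSH-tunnel timeouts seen elsewhere in this run.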

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420429690>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 28 16:43:01.516: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0 (0; 2m7.278313384s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/pods/proxy-service-fhv88-tr3kt/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.0.61:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.0.61:80/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.278849741s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/pods/https:proxy-service-fhv88-tr3kt:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:443/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.297737811s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.30265468s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/pods/https:proxy-service-fhv88-tr3kt:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.304829582s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.30487753s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 2m7.218056311s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 2m7.344333434s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/pods/proxy-service-fhv88-tr3kt/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.0.61:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.0.61:80/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 2m16.089003871s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 2m16.087376897s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/pods/https:proxy-service-fhv88-tr3kt:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 2m16.088915798s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 2m7.319214977s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 2m23.42663888s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 2m23.427523514s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/services/https:proxy-service-fhv88:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 2m23.427847509s): path /api/v1/namespaces/e2e-tests-proxy-4l4xd/pods/https:proxy-service-fhv88-tr3kt:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.0.61:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.0.61:462/' }],RetryAfterSeconds:0,} Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc4203a8ee0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 28 16:27:28.852: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:226
Expected error:
    <*errors.errorString | 0xc421151240>: {
        s: "3 containers failed which is more than allowed 1",
    }
    3 containers failed which is more than allowed 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:198

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc42069f4a0>: {
        s: "service verification failed for: 10.99.255.91\nexpected [service1-73fx9 service1-hszq8 service1-n9chs]\nreceived [service1-hszq8 service1-n9chs]",
    }
    service verification failed for: 10.99.255.91
    expected [service1-73fx9 service1-hszq8 service1-n9chs]
    received [service1-hszq8 service1-n9chs]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.147.100 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-2pdwz execpod-sourceip-gke-bootstrap-e2e-default-pool-ec373344-ng77tp -- /bin/sh -c wget -T 30 -qO- 10.99.245.34:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc420a82de0 exit status 1 <nil> <nil> true [0xc420466600 0xc420466650 0xc420466690] [0xc420466600 0xc420466650 0xc420466690] [0xc420466640 0xc420466680] [0x9728b0 0x9728b0] 0xc4218eae40 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.147.100 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-2pdwz execpod-sourceip-gke-bootstrap-e2e-default-pool-ec373344-ng77tp -- /bin/sh -c wget -T 30 -qO- 10.99.245.34:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc420a82de0 exit status 1 <nil> <nil> true [0xc420466600 0xc420466650 0xc420466690] [0xc420466600 0xc420466650 0xc420466690] [0xc420466640 0xc420466680] [0x9728b0 0x9728b0] 0xc4218eae40 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 28 16:39:11.818: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420aa1410>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42039ce80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4132/
Multiple broken tests:

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 28 18:42:46.450: Couldn't delete ns: "e2e-tests-nettest-x9m06": namespace e2e-tests-nettest-x9m06 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-nettest-x9m06 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 28 18:42:44.267: Couldn't delete ns: "e2e-tests-kubectl-v5mfc": namespace e2e-tests-kubectl-v5mfc was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-kubectl-v5mfc was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Expected error:
    <*errors.errorString | 0xc4203acc50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:269

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4136/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc420406e10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36970

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Jan 28 20:58:05.473: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203d2f70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4137/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Jan 28 21:32:21.246: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4204589f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203faf60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 28 21:25:45.347: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.1.97:8080/dial?request=hostName&protocol=udp&host=10.96.2.94&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830
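The "Failed to find expected endpoints" output above comes from curling the test container's `/dial` endpoint and diffing the pod hostnames that answered against the expected set; `retrieved map[]` means no replies at all. A sketch of that comparison (the `responses` JSON field name is an assumption about the test image's reply shape):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dialResponse loosely mirrors the JSON the test container's /dial endpoint
// returns: one entry per try that got an answer (field name assumed here).
type dialResponse struct {
	Responses []string `json:"responses"`
}

// missingEndpoints returns the expected pod hostnames that never answered,
// i.e. what the e2e check reports as "Failed to find expected endpoints".
func missingEndpoints(raw string, expected []string) ([]string, error) {
	var dr dialResponse
	if err := json.Unmarshal([]byte(raw), &dr); err != nil {
		return nil, err
	}
	seen := map[string]bool{}
	for _, r := range dr.Responses {
		seen[r] = true
	}
	var missing []string
	for _, e := range expected {
		if !seen[e] {
			missing = append(missing, e)
		}
	}
	return missing, nil
}

func main() {
	// An empty reply like the one in the log ("retrieved map[]").
	missing, _ := missingEndpoints(`{"responses":[]}`, []string{"netserver-0"})
	fmt.Println(missing) // [netserver-0]
}
```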

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420346050>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 28 21:28:28.220: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 28 21:38:49.419: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Jan 28 21:32:53.563: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420f83870>: {
        s: "service verification failed for: 10.99.253.71\nexpected [service1-jk8r8 service1-k39wc service1-q2q5q]\nreceived [service1-jk8r8 service1-q2q5q]",
    }
    service verification failed for: 10.99.253.71
    expected [service1-jk8r8 service1-k39wc service1-q2q5q]
    received [service1-jk8r8 service1-q2q5q]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298
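The "service verification failed" message above means repeated hits on the service's cluster IP only ever reached a subset of the replication controller's pods (here `service1-k39wc` never answered). A simplified sketch of that verification loop, not the real framework code:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// verifyServeHostname emulates the e2e service check: hit the service VIP
// `attempts` times via the supplied fetch func, collect the distinct pod
// hostnames that answered, and require that the received set matches the
// expected endpoint set exactly.
func verifyServeHostname(fetch func() string, attempts int, expected []string) error {
	got := map[string]bool{}
	for i := 0; i < attempts; i++ {
		if h := fetch(); h != "" {
			got[h] = true
		}
	}
	received := make([]string, 0, len(got))
	for h := range got {
		received = append(received, h)
	}
	sort.Strings(received)
	want := append([]string(nil), expected...)
	sort.Strings(want)
	if strings.Join(received, " ") != strings.Join(want, " ") {
		return fmt.Errorf("service verification failed:\nexpected %v\nreceived %v", want, received)
	}
	return nil
}

func main() {
	// One endpoint never answers, as in the log above.
	i := 0
	replies := []string{"service1-jk8r8", "service1-q2q5q"}
	fetch := func() string { r := replies[i%len(replies)]; i++; return r }
	err := verifyServeHostname(fetch, 6,
		[]string{"service1-jk8r8", "service1-k39wc", "service1-q2q5q"})
	fmt.Println(err)
}
```

A one-pod shortfall like this usually points at a stale kube-proxy rule or an endpoint that was never added, rather than a fully broken service.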

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.48.55 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-gcqz0 execpod-sourceip-gke-bootstrap-e2e-default-pool-edb3c9a2-2tk1wm -- /bin/sh -c wget -T 30 -qO- 10.99.241.93:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc421304870 exit status 1 <nil> <nil> true [0xc4202ee2e8 0xc4202ee300 0xc4202ee318] [0xc4202ee2e8 0xc4202ee300 0xc4202ee318] [0xc4202ee2f8 0xc4202ee310] [0x9728b0 0x9728b0] 0xc42124e8a0 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.48.55 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-gcqz0 execpod-sourceip-gke-bootstrap-e2e-default-pool-edb3c9a2-2tk1wm -- /bin/sh -c wget -T 30 -qO- 10.99.241.93:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc421304870 exit status 1 <nil> <nil> true [0xc4202ee2e8 0xc4202ee300 0xc4202ee318] [0xc4202ee2e8 0xc4202ee300 0xc4202ee318] [0xc4202ee2f8 0xc4202ee310] [0x9728b0 0x9728b0] 0xc42124e8a0 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Jan 28 21:29:10.220: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:149

Issues about this test specifically: #32087

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 28 21:40:02.470: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.169:8080/dial?request=hostName&protocol=http&host=10.96.2.182&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0 (0; 2m7.220925198s): path /api/v1/proxy/namespaces/e2e-tests-proxy-vbwbt/pods/proxy-service-65t35-85b17:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.132:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.2.132:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.220882917s): path /api/v1/proxy/namespaces/e2e-tests-proxy-vbwbt/pods/http:proxy-service-65t35-85b17:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.132:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.2.132:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.22088199s): path /api/v1/namespaces/e2e-tests-proxy-vbwbt/pods/http:proxy-service-65t35-85b17:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.132:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.2.132:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.361862989s): path /api/v1/namespaces/e2e-tests-proxy-vbwbt/pods/https:proxy-service-65t35-85b17:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.2.132:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.2.132:443/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.361949206s): path /api/v1/namespaces/e2e-tests-proxy-vbwbt/pods/proxy-service-65t35-85b17/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.132:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.2.132:80/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.361340301s): path /api/v1/namespaces/e2e-tests-proxy-vbwbt/pods/proxy-service-65t35-85b17:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.2.132:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.2.132:1080/' }],RetryAfterSeconds:0,} Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Jan 28 21:20:01.074: Pod did not start running: pod ran to completion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4150/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 29 03:37:02.480: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42036c8d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420bb8250>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-29 03:18:15 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-29 03:18:47 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-29 03:18:15 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.96.2.34 StartTime:2017-01-29 03:18:15 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4209fdc00} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://db815de4dbd11945494fbc525659582a074546376246374457f4c5b25be1b944}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-29 03:18:15 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-29 03:18:47 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-29 03:18:15 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.96.2.34 StartTime:2017-01-29 03:18:15 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4209fdc00} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://db815de4dbd11945494fbc525659582a074546376246374457f4c5b25be1b944}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 29 03:29:28.851: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc420412eb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Jan 29 03:18:33.152: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203c53b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 29 03:32:09.126: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42043db80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc420f1d080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.96.2.117:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.96.2.117:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.96.2.117:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27196 #28998 #32403 #33341

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4223/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42059dbb0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:68
Expected error:
    <*errors.errorString | 0xc420b2c8c0>: {
        s: "failed to get logs from pod-configmaps-162d217d-e729-11e6-a92b-0242ac110007 for configmap-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-configmaps-162d217d-e729-11e6-a92b-0242ac110007)",
    }
    failed to get logs from pod-configmaps-162d217d-e729-11e6-a92b-0242ac110007 for configmap-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-configmaps-162d217d-e729-11e6-a92b-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc420ca7d60>: {
        s: "service verification failed for: 10.99.240.11\nexpected [service1-7p0kz service1-dl4fb service1-l7mlp]\nreceived []",
    }
    service verification failed for: 10.99.240.11
    expected [service1-7p0kz service1-dl4fb service1-l7mlp]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Expected error:
    <*errors.errorString | 0xc420f55520>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-g8qzp pet-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?\n [] <nil> 0xc42128b4d0 exit status 1 <nil> <nil> true [0xc4204341e0 0xc420434250 0xc420434268] [0xc4204341e0 0xc420434250 0xc420434268] [0xc420434218 0xc420434260] [0x9728b0 0x9728b0] 0xc42126f200 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-g8qzp pet-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
     [] <nil> 0xc42128b4d0 exit status 1 <nil> <nil> true [0xc4204341e0 0xc420434250 0xc420434268] [0xc4204341e0 0xc420434250 0xc420434268] [0xc420434218 0xc420434260] [0x9728b0 0x9728b0] 0xc42126f200 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:197
Expected error:
    <*errors.errorString | 0xc420654300>: {
        s: "failed to get logs from pod-configmaps-4bc59b6c-e728-11e6-897b-0242ac110007 for env-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-configmaps-4bc59b6c-e728-11e6-897b-0242ac110007)",
    }
    failed to get logs from pod-configmaps-4bc59b6c-e728-11e6-897b-0242ac110007 for env-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-configmaps-4bc59b6c-e728-11e6-897b-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27079

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:66
Expected error:
    <*errors.StatusError | 0xc420180500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-139e3e9e73e172ec2833\\\"?'\\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:325

Issues about this test specifically: #35422

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:269
Expected error:
    <*errors.errorString | 0xc420abb130>: {
        s: "failed to get logs from pod-configmaps-7cc3181e-e728-11e6-924e-0242ac110007 for configmap-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-configmaps-7cc3181e-e728-11e6-924e-0242ac110007)",
    }
    failed to get logs from pod-configmaps-7cc3181e-e728-11e6-924e-0242ac110007 for configmap-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-configmaps-7cc3181e-e728-11e6-924e-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37515

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:62
Expected error:
    <*errors.errorString | 0xc420cef870>: {
        s: "failed to get logs from downward-api-a9484582-e728-11e6-b73e-0242ac110007 for dapi-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods downward-api-a9484582-e728-11e6-b73e-0242ac110007)",
    }
    failed to get logs from downward-api-a9484582-e728-11e6-b73e-0242ac110007 for dapi-container: an error on the server ("unknown") has prevented the request from succeeding (get pods downward-api-a9484582-e728-11e6-b73e-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc420a0bc10>: {
        s: "failed to get logs from pod-44d89e1e-e728-11e6-af9e-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-44d89e1e-e728-11e6-af9e-0242ac110007)",
    }
    failed to get logs from pod-44d89e1e-e728-11e6-af9e-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-44d89e1e-e728-11e6-af9e-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34226

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:353
Timed out after 60.000s.
Expected success, but got an error:
    <*errors.errorString | 0xc420bc6910>: {
        s: "node condition \"TestCondition\" not found",
    }
    node condition "TestCondition" not found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:347

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:547
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-df2kg run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420722cc0 Waiting for pod e2e-tests-kubectl-df2kg/run-test-02nkz to be running, status is Pending, pod ready: false\n If you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?\nError from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods run-test-02nkz)\n [] <nil> 0xc420925980 exit status 1 <nil> <nil> true [0xc4202d82c8 0xc4202d8428 0xc4202d8448] [0xc4202d82c8 0xc4202d8428 0xc4202d8448] [0xc4202d8358 0xc4202d8418 0xc4202d8440] [0x9727b0 0x9728b0 0x9728b0] 0xc420fe07e0 <nil>}:\nCommand stdout:\nWaiting for pod e2e-tests-kubectl-df2kg/run-test-02nkz to be running, status is Pending, pod ready: false\n\nstderr:\nIf you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?\nError from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods run-test-02nkz)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-df2kg run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420722cc0 Waiting for pod e2e-tests-kubectl-df2kg/run-test-02nkz to be running, status is Pending, pod ready: false
     If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
    Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods run-test-02nkz)
     [] <nil> 0xc420925980 exit status 1 <nil> <nil> true [0xc4202d82c8 0xc4202d8428 0xc4202d8448] [0xc4202d82c8 0xc4202d8428 0xc4202d8448] [0xc4202d8358 0xc4202d8418 0xc4202d8440] [0x9727b0 0x9728b0 0x9728b0] 0xc420fe07e0 <nil>}:
    Command stdout:
    Waiting for pod e2e-tests-kubectl-df2kg/run-test-02nkz to be running, status is Pending, pod ready: false
    
    stderr:
    If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
    Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods run-test-02nkz)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc42067d050>: {
        s: "failed to get logs from pod-secrets-ed02145a-e728-11e6-bc49-0242ac110007 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-ed02145a-e728-11e6-bc49-0242ac110007)",
    }
    failed to get logs from pod-secrets-ed02145a-e728-11e6-bc49-0242ac110007 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-ed02145a-e728-11e6-bc49-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Jan 30 12:13:22.499: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:149

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:507
Jan 30 12:17:21.440: Failed to open websocket to wss://35.184.36.127:443/api/v1/namespaces/e2e-tests-pods-7mstw/pods/pod-exec-websocket-17d361d9-e729-11e6-a094-0242ac110007/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: websocket.Dial wss://35.184.36.127:443/api/v1/namespaces/e2e-tests-pods-7mstw/pods/pod-exec-websocket-17d361d9-e729-11e6-a094-0242ac110007/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:477

Issues about this test specifically: #38308

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc420cd74d0>: {
        s: "failed to get logs from pod-34f87bee-e728-11e6-b21a-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-34f87bee-e728-11e6-b21a-0242ac110007)",
    }
    failed to get logs from pod-34f87bee-e728-11e6-b21a-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-34f87bee-e728-11e6-b21a-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc420135900>: {
        s: "failed to get logs from pod-81aa3425-e728-11e6-b998-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-81aa3425-e728-11e6-b998-0242ac110007)",
    }
    failed to get logs from pod-81aa3425-e728-11e6-b998-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-81aa3425-e728-11e6-b998-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37439

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc42066fdd0>: {
        s: "failed to get logs from downwardapi-volume-6bc92d96-e728-11e6-897b-0242ac110007 for client-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods downwardapi-volume-6bc92d96-e728-11e6-897b-0242ac110007)",
    }
    failed to get logs from downwardapi-volume-6bc92d96-e728-11e6-897b-0242ac110007 for client-container: an error on the server ("unknown") has prevented the request from succeeding (get pods downwardapi-volume-6bc92d96-e728-11e6-897b-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc42030bea0>: {
        s: "failed to get logs from pod-configmaps-34102004-e728-11e6-897b-0242ac110007 for configmap-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-configmaps-34102004-e728-11e6-897b-0242ac110007)",
    }
    failed to get logs from pod-configmaps-34102004-e728-11e6-897b-0242ac110007 for configmap-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-configmaps-34102004-e728-11e6-897b-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32949

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc42047bdb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:914
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config logs redis-master-nf1q3 redis-master --namespace=e2e-tests-kubectl-8xsg5] []  <nil>  Error from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods redis-master-nf1q3)\n [] <nil> 0xc420e6f2f0 exit status 1 <nil> <nil> true [0xc42012ca20 0xc42012ca38 0xc42012ca58] [0xc42012ca20 0xc42012ca38 0xc42012ca58] [0xc42012ca30 0xc42012ca48] [0x9728b0 0x9728b0] 0xc420da7a40 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods redis-master-nf1q3)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config logs redis-master-nf1q3 redis-master --namespace=e2e-tests-kubectl-8xsg5] []  <nil>  Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods redis-master-nf1q3)
     [] <nil> 0xc420e6f2f0 exit status 1 <nil> <nil> true [0xc42012ca20 0xc42012ca38 0xc42012ca58] [0xc42012ca20 0xc42012ca38 0xc42012ca58] [0xc42012ca30 0xc42012ca48] [0x9728b0 0x9728b0] 0xc420da7a40 <nil>}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods redis-master-nf1q3)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:57
Expected error:
    <*errors.StatusError | 0xc420390100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-139e3e9e73e172ec2833\\\"?'\\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250/metrics'\") has prevented the request from succeeding (get nodes gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250)",
            Reason: "InternalError",
            Details: {
                Name: "gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250",
                Group: "",
                Kind: "nodes",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250/metrics'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250/metrics'") has prevented the request from succeeding (get nodes gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:55

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc420c8aaa0>: {
        s: "failed to get logs from pod-42714027-e729-11e6-bc49-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-42714027-e729-11e6-bc49-0242ac110007)",
    }
    failed to get logs from pod-42714027-e729-11e6-bc49-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-42714027-e729-11e6-bc49-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #33987

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc4214b5280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-139e3e9e73e172ec2833\\\"?'\\nTrying to reach: 'http://10.96.1.118:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'http://10.96.1.118:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'http://10.96.1.118:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:174
Expected error:
    <*errors.errorString | 0xc420a64a50>: {
        s: "failed to get logs from downward-api-a2b06c13-e728-11e6-aff7-0242ac110007 for dapi-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods downward-api-a2b06c13-e728-11e6-aff7-0242ac110007)",
    }
    failed to get logs from downward-api-a2b06c13-e728-11e6-aff7-0242ac110007 for dapi-container: an error on the server ("unknown") has prevented the request from succeeding (get pods downward-api-a2b06c13-e728-11e6-aff7-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 30 12:14:07.781: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.1.55:8080/dial?request=hostName&protocol=udp&host=10.96.1.39&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Jan 30 12:27:39.502: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
    <*errors.errorString | 0xc420eaee30>: {
        s: "failed to get logs from pod-host-path-test for test-container-2: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-host-path-test)",
    }
    failed to get logs from pod-host-path-test for test-container-2: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-host-path-test)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Jan 30 12:16:52.406: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-7bnh:10250/stats/?timeout=5m0s'") has prevented the request from succeeding an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-tnnp:10250/stats/?timeout=5m0s'") has prevented the request from succeeding an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-95bbe0ec-x0m1:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc420ca6190>: {
        s: "failed to get logs from pod-345242c7-e728-11e6-b625-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-345242c7-e728-11e6-b625-0242ac110007)",
    }
    failed to get logs from pod-345242c7-e728-11e6-b625-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-345242c7-e728-11e6-b625-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37071

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Jan 30 12:16:11.356: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #30981

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc420f43590>: {
        s: "failed to get logs from downwardapi-volume-3439af2c-e728-11e6-b73e-0242ac110007 for client-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods downwardapi-volume-3439af2c-e728-11e6-b73e-0242ac110007)",
    }
    failed to get logs from downwardapi-volume-3439af2c-e728-11e6-b73e-0242ac110007 for client-container: an error on the server ("unknown") has prevented the request from succeeding (get pods downwardapi-volume-3439af2c-e728-11e6-b73e-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc420f37880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-139e3e9e73e172ec2833\\\"?'\\nTrying to reach: 'http://10.96.2.17:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'http://10.96.2.17:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?'\nTrying to reach: 'http://10.96.2.17:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Issues about this test specifically: #29933 #34111 #38765

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
failed to execute command in pod nfs-client, container nfs-client: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
Expected error:
    <*errors.StatusError | 0xc420b4b900>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:105

Issues about this test specifically: #36970

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
Expected error:
    <*errors.errorString | 0xc420c0d270>: {
        s: "failed to get logs from pod-secrets-34c33bb6-e728-11e6-a06a-0242ac110007 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-34c33bb6-e728-11e6-a06a-0242ac110007)",
    }
    failed to get logs from pod-secrets-34c33bb6-e728-11e6-a06a-0242ac110007 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-34c33bb6-e728-11e6-a06a-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37529

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Expected error:
    <*errors.errorString | 0xc420a0b2f0>: {
        s: "failed to get logs from pod-secrets-6e5aa68e-e728-11e6-bb66-0242ac110007 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-6e5aa68e-e728-11e6-bb66-0242ac110007)",
    }
    failed to get logs from pod-secrets-6e5aa68e-e728-11e6-bb66-0242ac110007 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-6e5aa68e-e728-11e6-bb66-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37525

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc42023c9b0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 30 12:13:55.269: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.1.31 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc42072dbc0>: {
        s: "failed to get logs from downwardapi-volume-a1507cba-e728-11e6-8215-0242ac110007 for client-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods downwardapi-volume-a1507cba-e728-11e6-8215-0242ac110007)",
    }
    failed to get logs from downwardapi-volume-a1507cba-e728-11e6-8215-0242ac110007 for client-container: an error on the server ("unknown") has prevented the request from succeeding (get pods downwardapi-volume-a1507cba-e728-11e6-8215-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc420b0d360>: {
        s: "failed to get logs from pod-5d89fbe6-e728-11e6-b998-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-5d89fbe6-e728-11e6-b998-0242ac110007)",
    }
    failed to get logs from pod-5d89fbe6-e728-11e6-b998-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-5d89fbe6-e728-11e6-b998-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34658

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:226
Expected error:
    <*errors.errorString | 0xc4203ac9d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:204

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Timed out after 120.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"unknown\") has prevented the request from succeeding (get pods annotationupdatec93c708c-e728-11e6-a094-0242ac110007)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4212cebe0), Code:500}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:142

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc420b90230>: {
        s: "failed to get logs from pod-service-account-bec96885-e728-11e6-b73e-0242ac110007-8t287 for token-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-service-account-bec96885-e728-11e6-b73e-0242ac110007-8t287)",
    }
    failed to get logs from pod-service-account-bec96885-e728-11e6-b73e-0242ac110007-8t287 for token-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-service-account-bec96885-e728-11e6-b73e-0242ac110007-8t287)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37526

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Jan 30 12:24:33.582: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config logs redis-master-flnzx redis-master --namespace=e2e-tests-kubectl-bl8wg] []  <nil>  Error from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods redis-master-flnzx)\n [] <nil> 0xc4206d84b0 exit status 1 <nil> <nil> true [0xc420384248 0xc420384260 0xc420384278] [0xc420384248 0xc420384260 0xc420384278] [0xc420384258 0xc420384270] [0x9728b0 0x9728b0] 0xc420dfb740 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods redis-master-flnzx)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config logs redis-master-flnzx redis-master --namespace=e2e-tests-kubectl-bl8wg] []  <nil>  Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods redis-master-flnzx)
     [] <nil> 0xc4206d84b0 exit status 1 <nil> <nil> true [0xc420384248 0xc420384260 0xc420384278] [0xc420384248 0xc420384260 0xc420384278] [0xc420384258 0xc420384270] [0x9728b0 0x9728b0] 0xc420dfb740 <nil>}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods redis-master-flnzx)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:564
Jan 30 12:12:47.142: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:64
Expected error:
    <*errors.errorString | 0xc420cd5ef0>: {
        s: "failed to get logs from pod-configmaps-3423e520-e728-11e6-a493-0242ac110007 for configmap-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-configmaps-3423e520-e728-11e6-a493-0242ac110007)",
    }
    failed to get logs from pod-configmaps-3423e520-e728-11e6-a493-0242ac110007 for configmap-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-configmaps-3423e520-e728-11e6-a493-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35790

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:564
Jan 30 12:17:14.828: Failed to open websocket to wss://35.184.36.127:443/api/v1/namespaces/e2e-tests-pods-6c7br/pods/pod-logs-websocket-14e85737-e729-11e6-aff7-0242ac110007/log?container=main: websocket.Dial wss://35.184.36.127:443/api/v1/namespaces/e2e-tests-pods-6c7br/pods/pod-logs-websocket-14e85737-e729-11e6-aff7-0242ac110007/log?container=main: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:544

Issues about this test specifically: #30263

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
failed to execute command in pod hostexec, container hostexec: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
Expected error:
    <*errors.StatusError | 0xc4201d8680>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:105

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-lbdss exec nginx -- /bin/sh -c exit 0] []  <nil>  Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?\n [] <nil> 0xc4211b1470 exit status 1 <nil> <nil> true [0xc420c62028 0xc420c62040 0xc420c62058] [0xc420c62028 0xc420c62040 0xc420c62058] [0xc420c62038 0xc420c62050] [0x9728b0 0x9728b0] 0xc4211724e0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-lbdss exec nginx -- /bin/sh -c exit 0] []  <nil>  Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
     [] <nil> 0xc4211b1470 exit status 1 <nil> <nil> true [0xc420c62028 0xc420c62040 0xc420c62058] [0xc420c62028 0xc420c62040 0xc420c62058] [0xc420c62038 0xc420c62050] [0x9728b0 0x9728b0] 0xc4211724e0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:54
failed to execute command in pod test-pod, container busybox-1: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
Expected error:
    <*errors.StatusError | 0xc4210d5680>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-139e3e9e73e172ec2833\"?",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:105

Issues about this test specifically: #37502

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc4204138c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc420788f10>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:261
kubelet never observed the termination notice
Expected error:
    <*errors.errorString | 0xc4203aad80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:231

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Expected error:
    <*errors.errorString | 0xc4208acc50>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc4212ece40>: {
        s: "failed to get logs from pod-secrets-a154e93f-e728-11e6-897b-0242ac110007 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-a154e93f-e728-11e6-897b-0242ac110007)",
    }
    failed to get logs from pod-secrets-a154e93f-e728-11e6-897b-0242ac110007 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-a154e93f-e728-11e6-897b-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc4202eb940>: {
        s: "failed to get logs from pod-5e1e4847-e728-11e6-897b-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-5e1e4847-e728-11e6-897b-0242ac110007)",
    }
    failed to get logs from pod-5e1e4847-e728-11e6-897b-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-5e1e4847-e728-11e6-897b-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #30851

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc420d55130>: {
        s: "failed to get logs from client-containers-6a37a7a0-e728-11e6-9644-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods client-containers-6a37a7a0-e728-11e6-9644-0242ac110007)",
    }
    failed to get logs from client-containers-6a37a7a0-e728-11e6-9644-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods client-containers-6a37a7a0-e728-11e6-9644-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34520

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Jan 30 12:11:06.321: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc420a4d430>: {
        s: "failed to get logs from pod-secrets-bb6f70e9-e728-11e6-b21a-0242ac110007 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-bb6f70e9-e728-11e6-b21a-0242ac110007)",
    }
    failed to get logs from pod-secrets-bb6f70e9-e728-11e6-b21a-0242ac110007 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-bb6f70e9-e728-11e6-b21a-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29221

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc420f77e00>: {
        s: "failed to get logs from pod-8548e7e4-e728-11e6-bc2e-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-8548e7e4-e728-11e6-bc2e-0242ac110007)",
    }
    failed to get logs from pod-8548e7e4-e728-11e6-bc2e-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-8548e7e4-e728-11e6-bc2e-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31400

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:88
Expected error:
    <*errors.errorString | 0xc420412630>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:76

Issues about this test specifically: #26191

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc420a98df0>: {
        s: "failed to get logs from pod-configmaps-f6714a82-e728-11e6-b21a-0242ac110007 for configmap-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-configmaps-f6714a82-e728-11e6-b21a-0242ac110007)",
    }
    failed to get logs from pod-configmaps-f6714a82-e728-11e6-b21a-0242ac110007 for configmap-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-configmaps-f6714a82-e728-11e6-b21a-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34827

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Jan 30 12:17:04.102: Timed out in 30s: failed executing cmd curl -q -s --connect-timeout 1 http://localhost:10249/healthz in e2e-tests-nettest-dqqtx/host-test-container-pod: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-nettest-dqqtx host-test-container-pod -- /bin/sh -c curl -q -s --connect-timeout 1 http://localhost:10249/healthz] []  <nil>  Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?
 [] <nil> 0xc4207233e0 exit status 1 <nil> <nil> true [0xc42134e660 0xc42134e678 0xc42134e690] [0xc42134e660 0xc42134e678 0xc42134e690] [0xc42134e670 0xc42134e688] [0x9728b0 0x9728b0] 0xc42072ef00 <nil>}:
Command stdout:

stderr:
Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-139e3e9e73e172ec2833"?

error:
exit status 1

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:297

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc420c11080>: {
        s: "failed to get logs from pod-348df1f6-e728-11e6-8215-0242ac110007 for 

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4349/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc4206e6060>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc420b4c010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
wait for pod "pod-0ec4e305-e91b-11e6-956b-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420412c80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #34658

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc420f9c010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc42089e010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203aab90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203fae30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc42038a310>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36970

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  1 23:40:20.243: Couldn't delete ns: "e2e-tests-kubectl-2b2kb": namespace e2e-tests-kubectl-2b2kb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-2b2kb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
wait for pod "client-containers-37e17e87-e91a-11e6-a53b-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203d3600>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #36706

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420452e50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-xrskb] []  0xc420706f20 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc4209d1830 exit status 1 <nil> <nil> true [0xc4200369a8 0xc4200369d0 0xc4200369e0] [0xc4200369a8 0xc4200369d0 0xc4200369e0] [0xc4200369b0 0xc4200369c8 0xc4200369d8] [0x972560 0x972660 0x972660] 0xc420f756e0 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-xrskb] []  0xc420706f20 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc4209d1830 exit status 1 <nil> <nil> true [0xc4200369a8 0xc4200369d0 0xc4200369e0] [0xc4200369a8 0xc4200369d0 0xc4200369e0] [0xc4200369b0 0xc4200369c8 0xc4200369d8] [0x972560 0x972660 0x972660] 0xc420f756e0 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203ec150>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc420686020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:54
Expected error:
    <*errors.errorString | 0xc42043ad30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #37502

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203fcd30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc420c051a0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:17, Replicas:11, UpdatedReplicas:6, AvailableReplicas:9, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621617731, nsec:0, loc:(*time.Location)(0x3cf0220)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621617731, nsec:0, loc:(*time.Location)(0x3cf0220)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621617774, nsec:0, loc:(*time.Location)(0x3cf0220)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621617774, nsec:0, loc:(*time.Location)(0x3cf0220)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-4093820370\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:17, Replicas:11, UpdatedReplicas:6, AvailableReplicas:9, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621617731, nsec:0, loc:(*time.Location)(0x3cf0220)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621617731, nsec:0, loc:(*time.Location)(0x3cf0220)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621617774, nsec:0, loc:(*time.Location)(0x3cf0220)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621617774, nsec:0, loc:(*time.Location)(0x3cf0220)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-4093820370\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4437/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:13:05.627: Couldn't delete ns: "e2e-tests-horizontal-pod-autoscaling-bzrtx": the server cannot complete the requested operation at this time, try again later (get resourcequotas) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get resourcequotas)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420990140), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Failed to delete server pod: the server cannot complete the requested operation at this time, try again later (delete pods gluster-server)
Expected error:
    <*errors.StatusError | 0xc42155c100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (delete pods gluster-server)",
            Reason: "ServerTimeout",
            Details: {
                Name: "gluster-server",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (delete pods gluster-server)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:187

Issues about this test specifically: #37056

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:12:43.420: Couldn't delete ns: "e2e-tests-services-whfpc": the server cannot complete the requested operation at this time, try again later (get replicationcontrollers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get replicationcontrollers.extensions)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc4206667d0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29831

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:12:43.156: Couldn't delete ns: "e2e-tests-disruption-gjl98": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc4206941e0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:12:45.941: Couldn't delete ns: "e2e-tests-kubectl-bpfd1": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc4208fb7c0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:13:12.621: Couldn't delete ns: "e2e-tests-v1job-bbk0g": the server cannot complete the requested operation at this time, try again later (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get jobs.extensions)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420eec230), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:12:39.117: Couldn't delete ns: "e2e-tests-resourcequota-50bfn": the server cannot complete the requested operation at this time, try again later (get resourcequotas) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get resourcequotas)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420e8a0a0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #34367

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:12:42.766: Couldn't delete ns: "e2e-tests-job-s1lk7": the server cannot complete the requested operation at this time, try again later (get persistentvolumeclaims) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get persistentvolumeclaims)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420c1e0a0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:12:42.290: Couldn't delete ns: "e2e-tests-dns-0t2wz": the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-dns-0t2wz) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-dns-0t2wz)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420a64320), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32584

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Feb  3 17:23:05.991: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 17:13:25.526: Couldn't delete ns: "e2e-tests-deployment-3sx5b": the server cannot complete the requested operation at this time, try again later (get configmaps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get configmaps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420d925f0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #36265 #36353 #36628

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-release-1.5/4507/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc42044ee60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420dedf40>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621894260, nsec:0, loc:(*time.Location)(0x3cf0220)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621894260, nsec:0, loc:(*time.Location)(0x3cf0220)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63621894260, nsec:0, loc:(*time.Location)(0x3cf0220)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63621894260, nsec:0, loc:(*time.Location)(0x3cf0220)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Expected error:
    <*errors.errorString | 0xc4203d1230>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3602

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
wait for pod "client-containers-cdbd1f55-eb9d-11e6-9672-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203a5580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #29994

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc42108e6d0>: {
        s: "expected pod \"downwardapi-volume-a6cfca47-eb9d-11e6-a7a6-0242ac110002\" success: gave up waiting for pod 'downwardapi-volume-a6cfca47-eb9d-11e6-a7a6-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-a6cfca47-eb9d-11e6-a7a6-0242ac110002" success: gave up waiting for pod 'downwardapi-volume-a6cfca47-eb9d-11e6-a7a6-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc420412ec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
wait for pod "pod-c9867c50-eb9d-11e6-adcf-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420352da0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420ef08d0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc42064aaa0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:318

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
wait for pod "downwardapi-volume-a7f3b495-eb9d-11e6-98a7-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203c2e40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #37423

@spxtr spxtr closed this as completed Feb 7, 2017
Labels
area/test-infra kind/flake Categorizes issue or PR as related to a flaky test. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.