ci-kubernetes-e2e-gci-gke-staging-parallel: broken test run #37519

Closed
k8s-github-robot opened this issue Nov 26, 2016 · 16 comments

Labels: area/test-infra, kind/flake, priority/important-soon

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/561/

Multiple broken tests:

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:209
Expected error:
    <*errors.errorString | 0xc8200e57c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #36288 #36913
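
The "timed out waiting for the condition" string that dominates this run is not specific to any one test: it is the generic timeout error from the Kubernetes wait utility package, returned whenever a polled condition never becomes true before its deadline. A minimal sketch of where it comes from, assuming a current k8s.io/apimachinery import path (in the release under test the package lived at k8s.io/kubernetes/pkg/util/wait):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll a condition that never becomes true. Once the timeout
        // elapses, wait.Poll returns wait.ErrWaitTimeout, whose message
        // is exactly "timed out waiting for the condition".
        err := wait.Poll(100*time.Millisecond, 500*time.Millisecond,
            func() (bool, error) {
                return false, nil // condition not yet met; keep polling
            })
        fmt.Println(err) // prints: timed out waiting for the condition
    }

So each dump below tells you only that something never reached the expected state in time; the interesting signal is which condition was being polled, per the file and line in each stack location.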

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:274
Expected
    <*errors.errorString | 0xc8201aa880>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:263

Issues about this test specifically: #31408

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
Expected error:
    <*errors.errorString | 0xc8201056a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:108

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:843
Nov 26 15:38:18.957: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc8200db7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Nov 26, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/573/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc8201116a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc8200ed7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Nov 26 19:46:22.854: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1509

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:538
Nov 26 19:38:00.982: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:535

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc82017cb60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8201b2a50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc8207fedf0>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-11-26 19:39:28 -0800 PST FinishedAt:2016-11-26 19:39:58 -0800 PST ContainerID:docker://9eac25319ec9300b60edd8cb57bdbd511de4153ac936ac1a84ca02c885b31f09}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-11-26 19:39:28 -0800 PST FinishedAt:2016-11-26 19:39:58 -0800 PST ContainerID:docker://9eac25319ec9300b60edd8cb57bdbd511de4153ac936ac1a84ca02c885b31f09}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:54

Issues about this test specifically: #26171 #28188
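
For context on the wget-test failure above: per the networking.go source path, the test runs a short-lived pod that fetches an external URL, and ExitCode:1 means the fetch inside the container failed, i.e. the node had no working outbound connectivity or DNS at that moment. A rough sketch of that pod shape; the image, command, and timeout here are illustrative placeholders, not copied from the test source:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Illustrative one-shot pod: it succeeds only if the container
        // can reach the public internet from the cluster network.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "wget-test"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "wget-test-container",
                    Image:   "busybox",
                    Command: []string{"wget", "-T", "30", "-O", "-", "http://google.com"},
                }},
            },
        }
        fmt.Printf("would create pod %q and wait for it to exit 0\n", pod.Name)
    }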

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1071
Nov 26 19:45:04.860: expected un-ready endpoint for Service webserver within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1069

Issues about this test specifically: #26172

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Nov 26 19:54:52.921: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:284

Issues about this test specifically: #27443 #27835 #28900 #32512

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/602/

Multiple broken tests:

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc8200e57c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc820c860a0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:550

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:59
Expected error:
    <*errors.errorString | 0xc820bd2070>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:308

Issues about this test specifically: #31075 #36286

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
Expected error:
    <*errors.errorString | 0xc82010d6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:109

Issues about this test specifically: #30981

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.errorString | 0xc820184b90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:109

Issues about this test specifically: #32023

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:344
Nov 27 04:59:16.224: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:297

Issues about this test specifically: #27673

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Nov 27 04:58:29.960: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2101

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:209
Expected error:
    <*errors.errorString | 0xc8200e17c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:287
Expected error:
    <*errors.errorString | 0xc820bd6260>: {
        s: "expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-65cf1924-b4a0-11e6-9881-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-65cf1924-b4a0-11e6-9881-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #29751 #30430

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:149
Expected error:
    <*errors.errorString | 0xc8201822e0>: {
        s: "expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-6dd90158-b4a0-11e6-b2d3-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-6dd90158-b4a0-11e6-b2d3-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #35256

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc820d30080>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1084

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/724/

Multiple broken tests:

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:62
Expected error:
    <*errors.errorString | 0xc8202ddb40>: {
        s: "failed to get logs from downward-api-8433e9e0-b5ef-11e6-a4f6-0242ac110006 for dapi-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-downward-api-qv2xr/downward-api-8433e9e0-b5ef-11e6-a4f6-0242ac110006/dapi-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from downward-api-8433e9e0-b5ef-11e6-a4f6-0242ac110006 for dapi-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-downward-api-qv2xr/downward-api-8433e9e0-b5ef-11e6-a4f6-0242ac110006/dapi-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:70
Expected error:
    <*errors.errorString | 0xc8209d0670>: {
        s: "failed to get logs from var-expansion-87daf4e7-b5ef-11e6-989d-0242ac110006 for dapi-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-var-expansion-08xx6/var-expansion-87daf4e7-b5ef-11e6-989d-0242ac110006/dapi-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from var-expansion-87daf4e7-b5ef-11e6-989d-0242ac110006 for dapi-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-var-expansion-08xx6/var-expansion-87daf4e7-b5ef-11e6-989d-0242ac110006/dapi-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #29461

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:175
Expected error:
    <*errors.errorString | 0xc820b8f6d0>: {
        s: "failed to get logs from downwardapi-volume-846a2737-b5ef-11e6-89f3-0242ac110006 for client-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-downward-api-adp0b/downwardapi-volume-846a2737-b5ef-11e6-89f3-0242ac110006/client-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from downwardapi-volume-846a2737-b5ef-11e6-89f3-0242ac110006 for client-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-downward-api-adp0b/downwardapi-volume-846a2737-b5ef-11e6-89f3-0242ac110006/client-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:526
Nov 28 20:52:37.411: Failed to open websocket to wss://104.198.248.2:443/api/v1/namespaces/e2e-tests-pods-isf6n/pods/pod-exec-websocket-a4a1b97d-b5ef-11e6-a4f6-0242ac110006/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: websocket.Dial wss://104.198.248.2:443/api/v1/namespaces/e2e-tests-pods-isf6n/pods/pod-exec-websocket-a4a1b97d-b5ef-11e6-a4f6-0242ac110006/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:496

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc8208477d0>: {
        s: "failed to get logs from pod-85372e0c-b5ef-11e6-a3fc-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-emptydir-gvnqi/pod-85372e0c-b5ef-11e6-a3fc-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from pod-85372e0c-b5ef-11e6-a3fc-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-emptydir-gvnqi/pod-85372e0c-b5ef-11e6-a3fc-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc8208641c0>: {
        s: "failed to get logs from pod-configmaps-894a963f-b5ef-11e6-9773-0242ac110006 for configmap-volume-test: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-configmap-r2vce/pod-configmaps-894a963f-b5ef-11e6-9773-0242ac110006/configmap-volume-test: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from pod-configmaps-894a963f-b5ef-11e6-9773-0242ac110006 for configmap-volume-test: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-configmap-r2vce/pod-configmaps-894a963f-b5ef-11e6-9773-0242ac110006/configmap-volume-test: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #34827

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:68
Expected error:
    <*errors.StatusError | 0xc820f1a280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-29a9ce65e945248ca4a5\\\"?'\\nTrying to reach: 'https://10.240.0.3:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?'\nTrying to reach: 'https://10.240.0.3:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?'\nTrying to reach: 'https://10.240.0.3:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:332

Issues about this test specifically: #35422

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Expected error:
    <*errors.errorString | 0xc821165630>: {
        s: "failed to get logs from client-containers-9b8f67fc-b5ef-11e6-9b3c-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-containers-d0gjf/client-containers-9b8f67fc-b5ef-11e6-9b3c-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from client-containers-9b8f67fc-b5ef-11e6-9b3c-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-containers-d0gjf/client-containers-9b8f67fc-b5ef-11e6-9b3c-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #36706

Failed: [k8s.io] Downward API volume should provide container's cpu limit {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:157
Expected error:
    <*errors.errorString | 0xc820d72420>: {
        s: "failed to get logs from downwardapi-volume-9bf77e2b-b5ef-11e6-a0d8-0242ac110006 for client-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-downward-api-ve5h7/downwardapi-volume-9bf77e2b-b5ef-11e6-a0d8-0242ac110006/client-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from downwardapi-volume-9bf77e2b-b5ef-11e6-a0d8-0242ac110006 for client-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-downward-api-ve5h7/downwardapi-volume-9bf77e2b-b5ef-11e6-a0d8-0242ac110006/client-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:215
Expected error:
    <*errors.errorString | 0xc820b37240>: {
        s: "failed to get logs from pod-secrets-84f11aab-b5ef-11e6-aa10-0242ac110006 for secret-volume-test: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-secrets-qerzz/pod-secrets-84f11aab-b5ef-11e6-aa10-0242ac110006/secret-volume-test: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from pod-secrets-84f11aab-b5ef-11e6-aa10-0242ac110006 for secret-volume-test: Get https://gke-bootstrap-e2e-default-pool-d679d784-btd7:10250/containerLogs/e2e-tests-secrets-qerzz/pod-secrets-84f11aab-b5ef-11e6-aa10-0242ac110006/secret-volume-test: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #31969

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc8209b1ba0>: {
        s: "failed to get logs from pod-8440e6d7-b5ef-11e6-9dad-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-emptydir-0j1pd/pod-8440e6d7-b5ef-11e6-9dad-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from pod-8440e6d7-b5ef-11e6-9dad-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-emptydir-0j1pd/pod-8440e6d7-b5ef-11e6-9dad-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #37439

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:52
post request failed
Expected error:
    <*errors.StatusError | 0xc820797f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:58

Issues about this test specifically: #27023 #34604

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc82055fbd0>: {
        s: "failed to get logs from pod-9230b71b-b5ef-11e6-9dad-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-emptydir-qkiff/pod-9230b71b-b5ef-11e6-9dad-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from pod-9230b71b-b5ef-11e6-9dad-0242ac110006 for test-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-ja7x:10250/containerLogs/e2e-tests-emptydir-qkiff/pod-9230b71b-b5ef-11e6-9dad-0242ac110006/test-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #34658

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:129
Expected error:
    <*errors.errorString | 0xc820bc6b10>: {
        s: "failed to get logs from downward-api-8f0d6378-b5ef-11e6-a67d-0242ac110006 for dapi-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-bf2a:10250/containerLogs/e2e-tests-downward-api-wlxfj/downward-api-8f0d6378-b5ef-11e6-a67d-0242ac110006/dapi-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?",
    }
    failed to get logs from downward-api-8f0d6378-b5ef-11e6-a67d-0242ac110006 for dapi-container: Get https://gke-bootstrap-e2e-default-pool-d679d784-bf2a:10250/containerLogs/e2e-tests-downward-api-wlxfj/downward-api-8f0d6378-b5ef-11e6-a67d-0242ac110006/dapi-container: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-29a9ce65e945248ca4a5"?
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Issues about this test specifically: #35590

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:473
Nov 28 20:52:06.877: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected error:
    <*errors.StatusError | 0xc820023e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-29a9ce65e945248ca4a5\\\"?'\\nTrying to reach: 'https://10.240.0.3:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?'\nTrying to reach: 'https://10.240.0.3:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-29a9ce65e945248ca4a5\"?'\nTrying to reach: 'https://10.240.0.3:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:332

Issues about this test specifically: #32936
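
Every failure in this run shares one root symptom: "No SSH tunnels currently open". On GKE clusters of this era, apiserver-to-kubelet traffic (fetching container logs, exec, the node proxy subresource) was carried over SSH tunnels from the master to each node, so when those tunnels drop, many otherwise unrelated tests fail at once. A minimal sketch of the request shape the failing Proxy tests exercise, assuming a modern client-go (these call signatures have shifted across releases) and reusing a node name from the dumps above:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // GET /api/v1/nodes/<node>/proxy/logs/ -- the same path the failing
        // "proxy logs on node" tests hit. When the master's SSH tunnels are
        // down, the apiserver answers this with the 503 seen above.
        body, err := client.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name("gke-bootstrap-e2e-default-pool-d679d784-btd7").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }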

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging-parallel/790/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:333
Expected error:
    <*errors.errorString | 0xc820fcca50>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:292

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:462
Expected error:
    <*errors.errorString | 0xc8200e57c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:4094

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc8200e57c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Nov 29 22:54:18.808: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2101

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc8200f57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #28337

Failed: [k8s.io] Downward API should provide pod IP as an env var {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:83
Expected error:
    <*errors.errorString | 0xc82058f9e0>: {
        s: "expected container dapi-container success: gave up waiting for pod 'downward-api-f1463cb6-b6c8-11e6-932e-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected container dapi-container success: gave up waiting for pod 'downward-api-f1463cb6-b6c8-11e6-932e-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820ea8670>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc8203a4f30>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "nginx" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1151

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

@k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Dec 1, 2016
@ixdy closed this as completed on Dec 1, 2016