ci-kubernetes-e2e-gci-gke-prod-parallel: broken test run #40704

Closed · k8s-github-robot opened this issue Jan 30, 2017 · 26 comments
Labels: area/provider/gcp · kind/flake · sig/testing

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/4540/
Multiple broken tests:

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 30 12:12:38.285: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644
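
For context on what this test exercises: the service under test deliberately publishes endpoints for pods that are not ready. In clusters of this era that behavior was opted into per service via an alpha annotation (later superseded by the spec.publishNotReadyAddresses field); a minimal sketch of such a Service object, assuming current client-go import paths rather than the 1.5-era pkg/api ones:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A Service that asks the endpoints controller to publish addresses
	// even for unready pods; the failing test expects an endpoint for
	// the slow-terminating pod to show up within the 5m0s window.
	svc := v1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "slow-terminating-unready-pod",
			Annotations: map[string]string{
				// Alpha-era opt-in (assumed here; the test source path
				// above is authoritative for the exact service definition).
				"service.alpha.kubernetes.io/tolerate-unready-endpoints": "true",
			},
		},
	}
	fmt.Println(svc.Annotations)
}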

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:88
Expected error:
    <*errors.errorString | 0xc4203d7510>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:76

Issues about this test specifically: #26191
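
The recurring "timed out waiting for the condition" string in these reports is not test-specific: it is the generic poll-timeout error (wait.ErrWaitTimeout) from the Kubernetes wait package, which the e2e framework surfaces whenever a polled condition never becomes true within its deadline. A minimal sketch of how that exact error is produced, assuming the current k8s.io/apimachinery import path:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 2s for up to 10s; the condition never succeeds, so
	// Poll returns wait.ErrWaitTimeout, whose message is exactly the
	// string quoted throughout this issue.
	err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
		return false, nil // stand-in for "is the dashboard serving yet?"
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}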

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420451bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:353
Timed out after 60.000s.
Expected success, but got an error:
    <*errors.errorString | 0xc420a88720>: {
        s: "node condition \"TestCondition\" not found",
    }
    node condition "TestCondition" not found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:347

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Previous issues for this suite: #37173 #37815 #38395 #39233

@k8s-github-robot added the kind/flake and priority/P2 labels on Jan 30, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/4581/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Jan 31 12:16:10.083: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
wait for pod "client-containers-5f4720fc-e7f1-11e6-aabf-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203ace80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #36706

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc420f5e880>: {
        s: "expected pod \"pod-583f0aea-e7f1-11e6-9525-0242ac110007\" success: pod \"pod-583f0aea-e7f1-11e6-9525-0242ac110007\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-31 12:10:43 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-31 12:10:43 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-31 12:10:43 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP: StartTime:2017-01-31 12:10:43 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420fd6a10} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/mounttest-user:0.3 ImageID:docker://sha256:0d7084cc3178b2f316de73f3a58d491fa9a7f1f4b48423bc652b767e8eed4dfc ContainerID:docker://46d03aa603a73c6680e2bcb64ad74b26d381cd06a1facc6d29e71daeab21c868}]}",
    }
    expected pod "pod-583f0aea-e7f1-11e6-9525-0242ac110007" success: pod "pod-583f0aea-e7f1-11e6-9525-0242ac110007" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-31 12:10:43 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-31 12:10:43 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-31 12:10:43 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP: StartTime:2017-01-31 12:10:43 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420fd6a10} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/mounttest-user:0.3 ImageID:docker://sha256:0d7084cc3178b2f316de73f3a58d491fa9a7f1f4b48423bc652b767e8eed4dfc ContainerID:docker://46d03aa603a73c6680e2bcb64ad74b26d381cd06a1facc6d29e71daeab21c868}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 31 12:28:41.464: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Jan 31 12:13:00.898: Pod did not start running: pod ran to completion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42038f210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420e568a0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc42038f210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 31 12:22:04.978: Couldn't delete ns: "e2e-tests-disruption-xkls4": namespace e2e-tests-disruption-xkls4 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-xkls4 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 31 12:19:10.495: Couldn't delete ns: "e2e-tests-disruption-w2xph": namespace e2e-tests-disruption-w2xph was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-w2xph was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32639

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Waiting for pods in namespace "e2e-tests-disruption-v2s1s" to be ready
Expected error:
    <*errors.errorString | 0xc4203ab210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:247

Issues about this test specifically: #32644

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Jan 31 12:20:45.183: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

@calebamiles modified the milestone: v1.6 on Mar 3, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/7744/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:180
wait for pod "downwardapi-volume-4d68d37d-04e5-11e7-9f5b-0242ac110006" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203c8620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc4203d1610>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <*errors.errorString | 0xc4203fb710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1749

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420f34030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
Expected
    <*errors.errorString | 0xc4203fc8d0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:147

Issues about this test specifically: #31873

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203d0ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:507
Expected error:
    <*errors.errorString | 0xc4203c0ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #38308

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  9 08:35:25.098: Couldn't delete ns: "e2e-tests-disruption-tzjz7": namespace e2e-tests-disruption-tzjz7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-tzjz7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203cec50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
wait for pod "pod-secrets-4db26666-04e5-11e7-929c-0242ac110006" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203c3440>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #29221

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420440ec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
wait for pod "pod-4d627c75-04e5-11e7-9080-0242ac110006" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203c4bb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #30851

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Expected error:
    <*errors.errorString | 0xc420fac450>: {
        s: "want pod 'test-webserver-4d0de803-04e5-11e7-a7f6-0242ac110006' on 'gke-bootstrap-e2e-default-pool-7d9af1bb-5x6b' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-4d0de803-04e5-11e7-a7f6-0242ac110006' on 'gke-bootstrap-e2e-default-pool-7d9af1bb-5x6b' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:56

Issues about this test specifically: #29521

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
    <*errors.errorString | 0xc4203d0100>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:283

Issues about this test specifically: #37144

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  9 08:34:16.255: Couldn't delete ns: "e2e-tests-v1job-fmnpp": namespace e2e-tests-v1job-fmnpp was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-v1job-fmnpp was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Mar  9 08:33:18.569: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc420485bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc4201aa060>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

@smarterclayton

Failure on secrets in volumes

5-minute SyncPod latency? Seems bad; it may be correlated with the other issue I just mentioned to @kubernetes/sig-node-bugs:

I0309 08:34:37.464] Latency metrics for node gke-bootstrap-e2e-default-pool-7d9af1bb-5x6b
I0309 08:34:37.464] Mar  9 08:33:06.558: INFO: {Operation:SyncPod Method:container_manager_latency_microseconds Quantile:0.99 Latency:5m2.685702s}
I0309 08:34:37.464] Mar  9 08:33:06.558: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m52.242888s}
I0309 08:34:37.464] Mar  9 08:33:06.558: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.9 Latency:11.633863s}
I0309 08:34:37.464] Mar  9 08:33:06.558: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:11.633863s}
I0309 08:34:37.464] Mar  9 08:33:06.558: INFO: 
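
Those metric lines have a fixed shape, so runs like this can be triaged mechanically. A small hypothetical helper (not part of the test framework) that reads a build log on stdin and flags any quantile latency above a threshold:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches lines such as:
//   {Operation:SyncPod Method:container_manager_latency_microseconds Quantile:0.99 Latency:5m2.685702s}
var metricRe = regexp.MustCompile(`\{Operation:(\S*) Method:(\S+) Quantile:(\S+) Latency:(\S+)\}`)

func main() {
	threshold := time.Minute
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := metricRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// The logged form ("5m2.685702s") is valid Go duration syntax.
		d, err := time.ParseDuration(m[4])
		if err != nil {
			continue
		}
		if d > threshold {
			fmt.Printf("slow: op=%q method=%s q=%s latency=%s\n", m[1], m[2], m[3], d)
		}
	}
}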

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/7809/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Expected error:
    <*errors.errorString | 0xc420f4c600>: {
        s: "Failed to get pod \"pet-1\": the server cannot complete the requested operation at this time, try again later (get pods pet-1)",
    }
    Failed to get pod "pet-1": the server cannot complete the requested operation at this time, try again later (get pods pet-1)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #38254

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:46.041: Couldn't delete ns: "e2e-tests-pods-dxl10": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc4208fd130), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #38308

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.StatusError | 0xc420df0b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (get deployments.extensions nginx)",
            Reason: "ServerTimeout",
            Details: {
                Name: "nginx",
                Group: "extensions",
                Kind: "deployments",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (get deployments.extensions nginx)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1320

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Expected error:
    <*errors.errorString | 0xc420956400>: {
        s: "Failed to get pod \"pet-1\": the server cannot complete the requested operation at this time, try again later (get pods pet-1)",
    }
    Failed to get pod "pet-1": the server cannot complete the requested operation at this time, try again later (get pods pet-1)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:106
Expected error:
    <*errors.errorString | 0xc4208ae840>: {
        s: "the server cannot complete the requested operation at this time, try again later (delete statefulsets.apps pet)",
    }
    the server cannot complete the requested operation at this time, try again later (delete statefulsets.apps pet)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #38439

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:29.322: Couldn't delete ns: "e2e-tests-secret-namespace-c4rwh": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420a5ed20), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #37525

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:106
Expected error:
    <*errors.StatusError | 0xc420968100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)",
            Reason: "ServerTimeout",
            Details: {
                Name: "",
                Group: "apps",
                Kind: "statefulsets",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:20.124: Couldn't delete ns: "e2e-tests-deployment-sr2bn": the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-deployment-sr2bn) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-deployment-sr2bn)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc4209080a0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29828

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Creating pod "pod-0" in namespace "e2e-tests-disruption-7h44z"
Expected error:
    <*errors.StatusError | 0xc421276400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (post pods)",
            Reason: "ServerTimeout",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:217

Issues about this test specifically: #32639

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:23.867: Couldn't delete ns: "e2e-tests-pod-network-test-mv13m": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420ddc140), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32830

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:29.089: Couldn't delete ns: "e2e-tests-v1job-t7910": the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-v1job-t7910) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-v1job-t7910)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc42105abe0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:39.644: Couldn't delete ns: "e2e-tests-job-8njdf": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420ad4050), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:24.756: Couldn't delete ns: "e2e-tests-init-container-f3qjz": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc4204bdd60), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #31873

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 10 23:55:19.411: Couldn't delete ns: "e2e-tests-kubectl-j3mj9": the server cannot complete the requested operation at this time, try again later (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get statefulsets.apps)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc420b6a0f0), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27976 #29503
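
Unlike the earlier runs, every failure above is an apiserver-side ServerTimeout (HTTP 504, "the server cannot complete the requested operation at this time"), which suggests the master was unavailable or overloaded rather than the individual tests being at fault. Client code can recognize this error class with the apimachinery helpers; a minimal sketch, assuming current import paths:

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Construct the same class of error the logs show (ServerTimeout on
	// "get statefulsets.apps") and detect it the way a client would.
	gr := schema.GroupResource{Group: "apps", Resource: "statefulsets"}
	err := apierrors.NewServerTimeout(gr, "get", 0)
	fmt.Println(apierrors.IsServerTimeout(err)) // true
	fmt.Println(err)
}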

@davidopp added the sig/testing label on Mar 13, 2017
@fejta added the area/provider/gcp and team/gke labels and removed the sig/testing and team/test-infra labels on Mar 13, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/7933/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc42044fca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420a06c40>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Mar 14 01:05:19.213: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42043ae00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

@ethernetdan

Suite seems stable; moving to 1.7.

@ethernetdan modified the milestones: v1.7, v1.6 on Mar 14, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/7991/
Multiple broken tests:

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc420c73750>: {
        s: "failed to get logs from pod-host-path-test for test-container-1: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-host-path-test)",
    }
    failed to get logs from pod-host-path-test for test-container-1: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-host-path-test)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:564
Mar 15 10:18:57.885: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0 (0; 29.504417ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 30.341704ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 74.147049ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:80/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 147.166452ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 148.758075ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 149.235208ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/https:proxy-service-6rtmq-94k4h:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 150.373136ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 151.39796ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 152.235337ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 153.839485ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 154.726603ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 155.004049ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.257601ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 155.986079ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 155.997689ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/https:proxy-service-6rtmq-94k4h:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:443/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.187741ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.437045ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.694528ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.86455ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.516872ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.561849ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/https:proxy-service-6rtmq-94k4h:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 156.610878ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 157.069234ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 158.615936ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 159.656106ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 165.099332ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 166.607157ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 166.937249ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 279.96511ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 284.109997ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 285.464456ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 286.369996ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 288.279159ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 329.572812ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 3.834197ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/https:proxy-service-6rtmq-94k4h:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 18.202289ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 19.405441ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 21.126658ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 21.612ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 22.224768ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 22.765558ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 24.293337ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 24.74297ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 27.405349ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:80/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 27.395627ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 27.507013ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/https:proxy-service-6rtmq-94k4h:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 28.086567ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 28.58284ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 29.000755ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 29.58432ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 30.037871ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 32.645563ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 33.212846ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 33.306793ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 34.542745ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 35.506518ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 36.048779ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 36.197769ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 37.839649ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 38.30359ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 39.201172ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 39.450107ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 40.135523ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 40.540457ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/https:proxy-service-6rtmq-94k4h:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:443/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 41.113373ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 41.522711ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 43.101264ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 43.664301ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 8.357023ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 8.968464ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 10.450589ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/https:proxy-service-6rtmq-94k4h:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 11.081796ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 11.822539ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 13.507067ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/proxy-service-6rtmq:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 14.246859ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:1080/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 14.574379ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 15.525286ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 30.978757ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 31.281411ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:460/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 31.79155ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 33.248541ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 33.819004ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/pods/http:proxy-service-6rtmq-94k4h:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 34.214324ms): path /api/v1/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 34.425478ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/http:proxy-service-6rtmq:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'http://10.156.1.25:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'http://10.156.1.25:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 36.182155ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-kr9ft/services/https:proxy-service-6rtmq:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-6c39c017689de7b5b950\"?'\nTrying to reach: 'https://10.156.1.25:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-6c39c017689de7b5b950"?'
Trying to reach: 'https://10.156.1.25:462/' }],RetryAfterSeconds:0,} Code:503}
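
For context on the wall of 503s above: each entry is one attempt by the proxy e2e test to reach its backend pod (10.156.1.25) through the apiserver proxy subresource. On a GKE master the apiserver reaches node IPs over an SSH tunnel, and the error text shows no tunnel was open, so every attempt fails with the same InternalError Status object; the leading numbers appear to be the test's attempt counter and the request latency. Below is a minimal sketch, not the e2e harness itself, of issuing one of these proxy requests by hand; MASTER_IP and BEARER_TOKEN are placeholders, while the namespace, pod name, and port are copied from the run above:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // Placeholders: point these at a real test master and credential.
        const apiserver = "https://MASTER_IP"
        const token = "BEARER_TOKEN"
        path := "/api/v1/namespaces/e2e-tests-proxy-kr9ft/pods/proxy-service-6rtmq-94k4h:1080/proxy/"

        // Test-only client: skip cert verification against an e2e master.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        req, err := http.NewRequest("GET", apiserver+path, nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+token)

        resp, err := client.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := ioutil.ReadAll(resp.Body)
        // With no SSH tunnel from the master to the node, the proxy answers
        // 503 with the InternalError Status object seen in the log above.
        fmt.Println(resp.StatusCode, string(body))
    }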

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/8322/
Multiple broken tests:

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc4203c3460>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420352580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101
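
Nearly every failure in these runs bottoms out in the same "timed out waiting for the condition" string. That message is not specific to any one test: it is the error the framework's polling helper returns when a condition (pod running, DNS record resolvable, endpoint ready, ...) never becomes true within the deadline. A minimal sketch of where the string comes from, assuming the post-repo-split import path k8s.io/apimachinery/pkg/util/wait (the 1.5-era tree carried the same helper under k8s.io/kubernetes/pkg/util/wait):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll the condition every 100ms for up to 500ms. This condition
        // never reports done, so Poll gives up and returns
        // wait.ErrWaitTimeout.
        err := wait.Poll(100*time.Millisecond, 500*time.Millisecond,
            func() (bool, error) {
                return false, nil // e.g. "pod not running yet, keep waiting"
            })
        fmt.Println(err) // prints: timed out waiting for the condition
    }

So the flakes reported here are deadline expirations rather than assertion failures; the interesting question is why the underlying condition (scheduling, networking, DNS) was slow.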

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc42043bb30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42037ec80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32375

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
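
The skip argument recorded in the e2e.go failure is a plain regular expression: ginkgo matches it against each spec's full description and skips the spec on a match, which is how Slow/Serial/Disruptive/Flaky/Feature-gated specs are excluded from this parallel job. A small sketch of the same pattern in Go; the first spec string is taken from this issue, the second is a hypothetical [Disruptive] spec for contrast:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // The --ginkgo.skip pattern from the failing invocation above.
        skip := regexp.MustCompile(`\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]`)

        specs := []string{
            "[k8s.io] DNS should provide DNS for the cluster [Conformance]",       // runs
            "[k8s.io] Etcd failure [Disruptive] should recover from network loss", // skipped
        }
        for _, s := range specs {
            fmt.Printf("skip %q: %v\n", s, skip.MatchString(s))
        }
    }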

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/8327/
Multiple broken tests:

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc420406d90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc4203d1f50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc420413fa0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
Expected error:
    <*errors.errorString | 0xc420d37bd0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:148

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 23 11:33:47.251: Couldn't delete ns: "e2e-tests-disruption-31jzt": namespace e2e-tests-disruption-31jzt was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-disruption-31jzt was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 23 11:32:04.516: Couldn't delete ns: "e2e-tests-kubectl-3kl11": namespace e2e-tests-kubectl-3kl11 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-3kl11 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Expected error:
    <*errors.errorString | 0xc420ff4070>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1579

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203e7200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc420430510>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/8526/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203fb490>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420738cb0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
wait for pod "client-containers-a313ec4d-12e8-11e7-967a-0242ac110008" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420370bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #29994

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/8651/
Multiple broken tests:

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc420404300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420eb6000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc420c38140>: {
        s: "expected pod \"pod-9908e4fa-14f4-11e7-9274-0242ac110007\" success: gave up waiting for pod 'pod-9908e4fa-14f4-11e7-9274-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-9908e4fa-14f4-11e7-9274-0242ac110007" success: gave up waiting for pod 'pod-9908e4fa-14f4-11e7-9274-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc4206b4800>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203a1490>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420f45950>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-29 19:57:21 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-29 20:01:44 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-29 19:57:21 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.156.2.10 StartTime:2017-03-29 19:57:21 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420f40cb0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://cd375aa91170860765b59177cdc6d056e472ae3c5d0b800213d26eb5eec5cb17}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-29 19:57:21 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-29 20:01:44 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-29 19:57:21 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.156.2.10 StartTime:2017-03-29 19:57:21 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc420f40cb0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://cd375aa91170860765b59177cdc6d056e472ae3c5d0b800213d26eb5eec5cb17}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/8781/
Multiple broken tests:

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 07:40:45.289: Couldn't delete ns: "e2e-tests-disruption-xz91t": namespace e2e-tests-disruption-xz91t was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-xz91t was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42043ad60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
wait for pod "pod-79bcacc4-16e8-11e7-8b78-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420388cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #37439

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203ad7d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Apr  1 07:40:19.799: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc420ae3460>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:25, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626654093, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626654067, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:\"NewReplicaSetAvailable\", Message:\"Replica set \\\"nginx-135970216\\\" has successfully progressed.\"}, extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626654106, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626654106, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:25, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626654093, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626654067, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:"NewReplicaSetAvailable", Message:"Replica set \"nginx-135970216\" has successfully progressed."}, extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626654106, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626654106, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628
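
Reading the status dump above: `Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1`, so the rollout was fully updated but one pod never became available, and the poll could never succeed. Roughly the predicate being waited on (a simplified stand-in, not the framework's exact check):

```go
package sketch

// deploymentComplete is a simplified stand-in for the condition the test
// polls: every replica must be both updated and available. With the status
// above (3 replicas, 3 updated, 2 available) this stays false until the
// timeout fires.
func deploymentComplete(replicas, updated, available int32) bool {
	return updated == replicas && available == replicas
}
```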

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:497
Expected error:
    <*errors.errorString | 0xc4203d1640>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #37056

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1087
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.174.194 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-2xvhz] []  <nil> Created e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939\nScaling up e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc4212b27b0 exit status 1 <nil> <nil> true [0xc4202a4b80 0xc4202a4b98 0xc4202a4bb0] [0xc4202a4b80 0xc4202a4b98 0xc4202a4bb0] [0xc4202a4b90 0xc4202a4ba8] [0x9747f0 0x9747f0] 0xc4211ea780 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939\nScaling up e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.174.194 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-2xvhz] []  <nil> Created e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939
    Scaling up e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc4212b27b0 exit status 1 <nil> <nil> true [0xc4202a4b80 0xc4202a4b98 0xc4202a4bb0] [0xc4202a4b80 0xc4202a4b98 0xc4202a4bb0] [0xc4202a4b90 0xc4202a4ba8] [0x9747f0 0x9747f0] 0xc4211ea780 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939
    Scaling up e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-14cb83ee4231aff55251d039a9c2c939 up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:169

Issues about this test specifically: #26138 #28429 #28737 #38064
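
The `exec.CodeExitError` wrapping above is just the e2e framework shelling out to a real kubectl binary and propagating its exit status. A minimal sketch with the standard library's `os/exec` (the arguments here are placeholders, not the failing rolling-update command):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The framework runs the kubectl binary and captures its combined
	// output; a non-zero exit comes back as *exec.ExitError, which is what
	// gets wrapped into the CodeExitError printed in the log above.
	cmd := exec.Command("kubectl", "version", "--client") // placeholder args
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode())
	}
}
```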

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
wait for pod "pod-configmaps-4fc45f55-16e8-11e7-aa1b-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420412100>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #27245

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
wait for pod "pod-567baaec-16e8-11e7-a6f8-0242ac11000a" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420320cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #31400

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
    <*errors.errorString | 0xc4203bd4a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:283

Issues about this test specifically: #37144

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc42043d240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420404600>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/8824/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203ab510>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:497
failed to execute command in pod gluster-client, container gluster-client: Internal error occurred: error executing command in container: container not found ("gluster-client")
Expected error:
    <*errors.errorString | 0xc420c5ab70>: {
        s: "Internal error occurred: error executing command in container: container not found (\"gluster-client\")",
    }
    Internal error occurred: error executing command in container: container not found ("gluster-client")
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:105

Issues about this test specifically: #37056

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.StatusError | 0xc4212d0d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.156.1.143:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rc-light-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rc-light-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.156.1.143:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.156.1.143:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27196 #28998 #32403 #33341
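
The 503 above is the apiserver's service proxy reporting that the resource-consumer pod dropped the connection (`Error: 'EOF'`). The autoscaling test drives load by POSTing to endpoints like `/ConsumeMem` with exactly the query parameters shown; a minimal sketch of that request shape, assuming only `net/http` (the address is the pod IP from this run and is obviously not reachable outside the cluster):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Same request shape as the failing call in the log above.
	q := url.Values{}
	q.Set("durationSec", "30")
	q.Set("megabytes", "0")
	q.Set("requestSizeMegabytes", "100")

	resp, err := http.Post("http://10.156.1.143:8080/ConsumeMem?"+q.Encode(), "text/plain", nil)
	if err != nil {
		// A dropped connection here is the 'EOF' that the service proxy
		// surfaced as the 503 in the failure above.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```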

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9060/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420dccbd0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42042efc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc420412c00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #29519 #32451

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9077/
Multiple broken tests:

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc4203c15c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420ed8bc0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc420459f20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36970

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203a9890>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9233/
Multiple broken tests:

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Apr  9 23:50:50.822: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Issues about this test specifically: #28064 #28569 #34036
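
For reference, the object under test here is just a `NodePort`-type Service, so kube-proxy must answer on the allocated port on every node; the failure above is the backing pod never becoming Ready, not Service creation itself. Roughly the shape of that Service, assuming current `k8s.io/api` types (name, selector, and ports are illustrative):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A NodePort Service: the apiserver allocates a port from the node-port
// range, and every node must forward it to the selected pods. In the run
// above the Service existed but its pod never reached Ready.
var nodePortService = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"}, // illustrative name
	Spec: corev1.ServiceSpec{
		Type:     corev1.ServiceTypeNodePort,
		Selector: map[string]string{"app": "nodeport-test"},
		Ports: []corev1.ServicePort{{
			Port:       80,
			TargetPort: intstr.FromInt(8080),
		}},
	},
}
```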

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420d8c880>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:395

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42038bcf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9249/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42044d980>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc42104aa90>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420a00500>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:395

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 10 07:36:04.359: Couldn't delete ns: "e2e-tests-kubectl-rc0sd": namespace e2e-tests-kubectl-rc0sd was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-kubectl-rc0sd was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29834 #35757

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9305/
Multiple broken tests:

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:64
wait for pod "pod-configmaps-8f5b1966-1edc-11e7-b0d2-0242ac110004" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203e4710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #35790

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 10:36:59.916: Couldn't delete ns: "e2e-tests-disruption-fp373": namespace e2e-tests-disruption-fp373 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-fp373 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc420e16990>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:25, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627528656, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627528630, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"NewReplicaSetAvailable\", Message:\"Replica set \\\"nginx-135970216\\\" has successfully progressed.\"}, extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627528666, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627528666, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:25, Replicas:3, UpdatedReplicas:3, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627528656, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627528630, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"NewReplicaSetAvailable", Message:"Replica set \"nginx-135970216\" has successfully progressed."}, extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627528666, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627528666, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203fc5e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc4203d2e60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc4203d1ea0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc4203d2840>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9310/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42039ecf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Apr 11 13:07:50.091: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Issues about this test specifically: #28064 #28569 #34036

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 13:10:48.274: Couldn't delete ns: "e2e-tests-kubectl-014kz": namespace e2e-tests-kubectl-014kz was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-kubectl-014kz was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc420b047d0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:346

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9441/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203ecd10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc4205d6950>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:395

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Expected error:
    <*errors.errorString | 0xc420fac370>: {
        s: "want pod 'test-webserver-4377afc7-20fb-11e7-927f-0242ac110007' on 'gke-bootstrap-e2e-default-pool-b6833a2f-ndcp' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-4377afc7-20fb-11e7-927f-0242ac110007' on 'gke-bootstrap-e2e-default-pool-b6833a2f-ndcp' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:56

Issues about this test specifically: #29521
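
The probe test expects the pod to stay not-Ready for `InitialDelaySeconds` and then become Ready without restarting; in this run it never even left `Pending`, so the readiness logic was never exercised. Roughly the container shape involved, assuming API types of the same vintage as these logs (where the embedded probe field is still named `Handler`; the image reference is a placeholder):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A readiness-probed container: not Ready before InitialDelaySeconds,
// Ready afterwards, and never restarted. The failed run above stalled
// earlier, with the pod stuck in Pending.
var container = corev1.Container{
	Name:  "test-webserver",
	Image: "gcr.io/google_containers/test-webserver:e2e", // placeholder tag
	ReadinessProbe: &corev1.Probe{
		InitialDelaySeconds: 30,
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
		},
	},
}
```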

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/9731/
Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc4207198f0>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-19 12:45:48 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-19 12:46:19 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-19 12:45:48 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.156.2.92 StartTime:2017-04-19 12:45:48 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc421353a40} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://d5c87ffd10607c3086f8fb4d7c5bff4010da4bc9a706226891a91fcad8f72345}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-19 12:45:48 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-19 12:46:19 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-19 12:45:48 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.156.2.92 StartTime:2017-04-19 12:45:48 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc421353a40} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://d5c87ffd10607c3086f8fb4d7c5bff4010da4bc9a706226891a91fcad8f72345}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42034ccb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203c30f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr 19 12:59:46.867: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1582

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Apr 19 12:52:43.449: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Apr 19 12:43:09.045: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc42043ef00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203aac20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr 19 12:58:43.740: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4204425b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/10126/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420a5b000>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 15, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 15, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
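
The --ginkgo.skip value is an unanchored regular expression matched against each spec's full description. A quick way to sanity-check what the pattern above excludes (spec strings taken from this issue):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	skip := regexp.MustCompile(`\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|NFSv3`)
	specs := []string{
		"[k8s.io] DNS should provide DNS for ExternalName services",
		"[k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume]",
	}
	for _, s := range specs {
		fmt.Printf("%q skipped=%v\n", s, skip.MatchString(s)) // false, then true
	}
}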

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203ac730>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42039e8f0>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
waiting for tester pod to start
Expected error:
    <*errors.errorString | 0xc420414cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:110

Issues about this test specifically: #30287 #35953
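
What this test exercises is the container lifecycle preStop hook: a handler the kubelet runs before sending the kill signal. A hedged sketch of such a pod spec using the client-go types of this era (v1.Handler was renamed LifecycleHandler in much later releases); the image, command, and recorder URL are illustrative only:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	pod := v1.Pod{
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "tester",
				Image: "busybox",
				// The kubelet executes this handler before killing the
				// container; the e2e test asserts the hook actually ran.
				Lifecycle: &v1.Lifecycle{
					PreStop: &v1.Handler{
						Exec: &v1.ExecAction{
							Command: []string{"sh", "-c", "wget -qO- http://prestop-recorder/prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop != nil) // true
}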

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc42038ce10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc4209a6660>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:395

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc420450d60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

@k8s-github-robot

Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-parallel/10307/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 03:09:40.571: Couldn't delete ns: "e2e-tests-emptydir-220n2": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-emptydir-220n2/statefulsets\"") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-emptydir-220n2/statefulsets\\\"\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420a12280), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #37500

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.StatusError | 0xc420760380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-deployment-0p9kd/pods/nginx-3837372172-29sld\\\"\") has prevented the request from succeeding (delete pods nginx-3837372172-29sld)",
            Reason: "InternalError",
            Details: {
                Name: "nginx-3837372172-29sld",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-0p9kd/pods/nginx-3837372172-29sld\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-0p9kd/pods/nginx-3837372172-29sld\"") has prevented the request from succeeding (delete pods nginx-3837372172-29sld)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1446

Issues about this test specifically: #36265 #36353 #36628
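
The 500s in this run surface client-side as *errors.StatusError values like the one dumped above; client-go's error helpers classify them by Reason and Code. A small sketch of how a caller would tell these apart (the helpers are real; the handling policy is an assumption):

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

func classify(err error) {
	switch {
	case apierrors.IsInternalError(err):
		fmt.Println("HTTP 500 from the apiserver - usually transient, retry the delete")
	case apierrors.IsNotFound(err):
		fmt.Println("object already gone - nothing to do")
	default:
		fmt.Printf("unexpected error: %v\n", err)
	}
}

func main() {
	// Build a representative internal error, like the delete failure above.
	classify(apierrors.NewInternalError(fmt.Errorf("etcd request failed")))
}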

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 03:09:40.567: Couldn't delete ns: "e2e-tests-ssh-pdn1x": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-ssh-pdn1x\"") has prevented the request from succeeding (delete namespaces e2e-tests-ssh-pdn1x) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-ssh-pdn1x\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-ssh-pdn1x)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc420bedef0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #26129 #32341

@k8s-github-robot
Author

This Issue hasn't been active in 52 days. It will be closed in 37 days (Jun 12, 2017).

cc @apelisse @k8s-merge-robot

You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days.

@spiffxp
Member

spiffxp commented May 31, 2017

/sig testing
/assign

I'm going to close this given how inactive it's been

@k8s-ci-robot added the sig/testing label on May 31, 2017
@spiffxp
Member

spiffxp commented May 31, 2017

/close
