
ci-kubernetes-e2e-gci-gke-prod-smoke: broken test run #37951

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 34 comments
Labels: area/test-infra, kind/flake, priority/backlog, sig/cli, sig/network

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/1230/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:104
Expected error:
    <*errors.errorString | 0xc820176ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc820174ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc8200db7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc820196a30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

Previous issues for this suite: #37816
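All four networking failures above report the same Gomega assertion on "timed out waiting for the condition", which is the generic timeout error the e2e framework's polling helpers return when a connectivity check never succeeds. As a rough illustration only (not the actual code in networking_utils.go), a condition poll that produces this error looks like the following; the `podsReachable` check is a hypothetical stand-in for the intra-pod/node-pod probe:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the "timed out waiting for the condition" error that
// the Kubernetes wait helpers return when a poll never succeeds.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil retries condition every interval until it returns true or the
// timeout elapses. Simplified stand-in for the framework's polling helper.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := condition()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// podsReachable is a hypothetical stand-in for the connectivity check;
	// here it never succeeds, so the poll times out.
	podsReachable := func() (bool, error) { return false, nil }
	if err := pollUntil(100*time.Millisecond, time.Second, podsReachable); err != nil {
		fmt.Println("Expected error:", err) // prints the timeout message seen in the failures above
	}
}
```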

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/1583/

Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc820953cf0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-44394f07-bb68-11e6-8f0a-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-44394f07-bb68-11e6-8f0a-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #31400

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc820ba2c60>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-441d65f1-bb68-11e6-993f-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-441d65f1-bb68-11e6-993f-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #33987

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
waiting for server pod to start
Expected error:
    <*errors.errorString | 0xc820176ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:65

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1019
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.44.53 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-5v1as] []  <nil> Created e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7\nScaling up e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc8208c2ce0 exit status 1 <nil> true [0xc820038958 0xc820038970 0xc820038988] [0xc820038958 0xc820038970 0xc820038988] [0xc820038968 0xc820038980] [0xafa7f0 0xafa7f0] 0xc820556cc0}:\nCommand stdout:\nCreated e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7\nScaling up e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.44.53 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-5v1as] []  <nil> Created e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7
    Scaling up e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc8208c2ce0 exit status 1 <nil> true [0xc820038958 0xc820038970 0xc820038988] [0xc820038958 0xc820038970 0xc820038988] [0xc820038968 0xc820038980] [0xafa7f0 0xafa7f0] 0xc820556cc0}:
    Command stdout:
    Created e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7
    Scaling up e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-d0f144d40d210d9ed45ee75cd7be15e7 up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:167

Issues about this test specifically: #26138 #28429 #28737 #38064
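The rolling-update failure above is a wrapped kubectl invocation: the e2e framework shells out to the kubectl binary and, when the command exits non-zero (here because no update progress was made within the timeout), reports the captured stdout/stderr together with the exit status. A minimal sketch of that pattern using plain os/exec rather than the framework's own helpers; the flags are copied from the log, but the --server/--kubeconfig values from the run are omitted, so treat this as illustrative only:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runKubectl runs kubectl with the given arguments and returns captured
// stdout/stderr plus any error (including a non-zero exit status).
func runKubectl(args ...string) (string, string, error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run() // returns *exec.ExitError on a non-zero exit code
	return stdout.String(), stderr.String(), err
}

func main() {
	// Arguments taken from the failing run; the real invocation also passed
	// --server, --kubeconfig, and --namespace=e2e-tests-kubectl-5v1as.
	out, errOut, err := runKubectl(
		"rolling-update", "e2e-test-nginx-rc",
		"--update-period=1s",
		"--image=gcr.io/google_containers/nginx-slim:0.7",
		"--image-pull-policy=IfNotPresent",
	)
	if err != nil {
		// Mirrors how the failure is surfaced: command output plus exit status.
		fmt.Printf("Command stdout:\n%s\nstderr:\n%s\nerror:\n%v\n", out, errOut, err)
	}
}
```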

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:147
starting pod liveness-exec in namespace e2e-tests-container-probe-7wvfb
Expected error:
    <*errors.errorString | 0xc8201c2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Issues about this test specifically: #37914

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:109

Issues about this test specifically: #30981

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc8200d97c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Dec  5 20:03:41.163: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2125

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671
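For the update-demo scale failure above, the test waits up to 300 seconds for all pods matching name=update-demo to reach a valid (Running and Ready) state. When triaging a run like this, one rough way to see what the pods were doing is to poll their phases with the same label selector. A small sketch, assuming kubectl is on PATH and already pointed at the affected cluster:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll the pod phases for the update-demo label a few times.
	// Assumes kubectl is configured against the cluster under test.
	for i := 0; i < 5; i++ {
		out, err := exec.Command(
			"kubectl", "get", "pods",
			"-l", "name=update-demo",
			"-o", "jsonpath={range .items[*]}{.metadata.name}={.status.phase} {end}",
		).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl error:", err)
		}
		fmt.Printf("[%d] %s\n", i, out)
		time.Sleep(10 * time.Second)
	}
}
```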

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc82000fc10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/1621/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc8200f57b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Dec  6 05:29:26.964: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2125

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1212
Expected error:
    <*errors.errorString | 0xc820b10180>: {
        s: "timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.22.72 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-dndxp run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820afd560 Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is 
Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod 
ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: 
false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting 
for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\n  [] <nil> 0xc820afdf40 <nil> <nil> true [0xc8200380e0 0xc820038240 0xc8200382d0] [0xc8200380e0 0xc820038240 0xc8200382d0] [0xc820038120 0xc820038208 0xc820038278] [0xafa690 0xafa7f0 0xafa7f0] 0xc8205e92c0}:\nCommand stdout:\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false\n\nstderr:\n\n",
    }
    timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.22.72 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-dndxp run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820afd560 Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
      [] <nil> 0xc820afdf40 <nil> <nil> true [0xc8200380e0 0xc820038240 0xc8200382d0] [0xc8200380e0 0xc820038240 0xc8200382d0] [0xc820038120 0xc820038208 0xc820038278] [0xafa690 0xafa7f0 0xafa7f0] 0xc8205e92c0}:
    Command stdout:
    Waiting for pod e2e-tests-kubectl-dndxp/e2e-test-rm-busybox-job-m3tyy to be running, status is Pending, pod ready: false
    (the same line repeats throughout the captured stdout until the output is truncated)
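
For context: the repeated line above is what the e2e harness prints each time it re-checks the pod and finds it still Pending. A minimal sketch of that kind of wait loop follows; the lookup signature, interval, and message formatting are assumptions for illustration, not the framework's actual helper.

package podwait

import (
    "fmt"
    "time"
)

// waitForPodRunning re-checks an injected pod lookup every few seconds and
// prints a "Waiting for pod ..." line on each attempt until the pod is
// Running and ready, or the deadline passes.
func waitForPodRunning(ns, name string, timeout time.Duration,
    lookup func(ns, name string) (phase string, ready bool, err error)) error {

    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        phase, ready, err := lookup(ns, name)
        if err != nil {
            return err
        }
        if phase == "Running" && ready {
            return nil
        }
        fmt.Printf("Waiting for pod %s/%s to be running, status is %s, pod ready: %v\n",
            ns, name, phase, ready)
        time.Sleep(5 * time.Second)
    }
    return fmt.Errorf("pod %s/%s did not become running within %v", ns, name, timeout)
}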

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2190/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc820176ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970
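
The "timed out waiting for the condition" text in these failures is the standard error message returned by the Kubernetes wait utilities when a polled condition never becomes true within its timeout. A minimal sketch of that pattern follows; the import path shown is the current apimachinery location and is an assumption for this sketch (tests of this era vendored the same helper under k8s.io/kubernetes/pkg/util/wait).

package waitdemo

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait" // assumed import path for this sketch
)

// pollOnce shows where the error string comes from: wait.Poll returns an
// error whose message is exactly "timed out waiting for the condition" when
// the condition function never reports done before the timeout.
func pollOnce() {
    err := wait.Poll(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
        return false, nil // e.g. the pod-to-pod connectivity check keeps failing
    })
    fmt.Println(err) // prints: timed out waiting for the condition
}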

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Dec 12 15:40:18.189: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2125

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:344
Expected error:
    <*errors.errorString | 0xc820cadd70>: {
        s: "expected container secret-env-test success: gave up waiting for pod 'pod-secrets-b87b1849-c0c3-11e6-bb92-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected container secret-env-test success: gave up waiting for pod 'pod-secrets-b87b1849-c0c3-11e6-bb92-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
Expected error:
    <*errors.errorString | 0xc820efca30>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:156

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc820196ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc8201aa760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50
Expected error:
    <*errors.errorString | 0xc8208cced0>: {
        s: "gave up waiting for pod 'wget-test' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'wget-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49

Issues about this test specifically: #26171 #28188

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2221/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.57.180 --kubeconfig=/workspace/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-ttzbi] []  <nil>  Unable to connect to the server: dial tcp 104.196.57.180:443: i/o timeout\n [] <nil> 0xc820a828c0 exit status 1 <nil> true [0xc8200c3d98 0xc8200c3db0 0xc8200c3dc8] [0xc8200c3d98 0xc8200c3db0 0xc8200c3dc8] [0xc8200c3da8 0xc8200c3dc0] [0xafa830 0xafa830] 0xc820e5c2a0}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.196.57.180:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.57.180 --kubeconfig=/workspace/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-ttzbi] []  <nil>  Unable to connect to the server: dial tcp 104.196.57.180:443: i/o timeout
     [] <nil> 0xc820a828c0 exit status 1 <nil> true [0xc8200c3d98 0xc8200c3db0 0xc8200c3dc8] [0xc8200c3d98 0xc8200c3db0 0xc8200c3dc8] [0xc8200c3da8 0xc8200c3dc0] [0xafa830 0xafa830] 0xc820e5c2a0}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 104.196.57.180:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671
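
The exec.CodeExitError dump above comes from the framework shelling out to the kubectl binary and capturing its exit status and streams; the "Unable to connect to the server: dial tcp ...:443: i/o timeout" line is kubectl's own stderr. A rough sketch of that pattern follows; the binary path and flag values are copied from the log, while the helper itself is illustrative rather than the framework's actual kubectl runner.

package kubectlexec

import (
    "bytes"
    "fmt"
    "os/exec"
)

// runKubectl runs the kubectl binary with the server and kubeconfig flags
// seen in the failure dump and returns captured stdout/stderr. An unreachable
// apiserver shows up as stderr text plus a non-zero exit status, which is
// what the CodeExitError above wraps.
func runKubectl(args ...string) (string, string, error) {
    base := []string{
        "--server=https://104.196.57.180",
        "--kubeconfig=/workspace/.kube/config",
    }
    cmd := exec.Command("/workspace/kubernetes/platforms/linux/amd64/kubectl", append(base, args...)...)

    var stdout, stderr bytes.Buffer
    cmd.Stdout = &stdout
    cmd.Stderr = &stderr
    if err := cmd.Run(); err != nil {
        return stdout.String(), stderr.String(), fmt.Errorf("kubectl %v: %w", args, err)
    }
    return stdout.String(), stderr.String(), nil
}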

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc82088be00>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-port-forwarding-tldwo/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-port-forwarding-tldwo/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #27680 #38211
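
Every *url.Error in this run reduces to the same symptom: the master at 104.196.57.180:443 stopped accepting TCP connections, so both the test client and kubectl time out on dial. A quick way to confirm that independently of the e2e suite is a raw dial with a short timeout; the address is taken from the log and the helper is only an illustration.

package reachability

import (
    "fmt"
    "net"
    "time"
)

// checkAPIServer opens a plain TCP connection to the apiserver endpoint the
// tests could not reach. A timeout here reproduces the same
// "dial tcp 104.196.57.180:443: i/o timeout" symptom without running any test.
func checkAPIServer(addr string) error {
    conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
    if err != nil {
        return fmt.Errorf("apiserver %s unreachable: %w", addr, err)
    }
    defer conn.Close()
    fmt.Printf("apiserver %s is reachable\n", addr)
    return nil
}

// Example: checkAPIServer("104.196.57.180:443")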

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820e79350>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-downward-api-ubv37/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-downward-api-ubv37/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #31836

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:875
Dec 13 01:49:44.292: Verified 0 of 1 pods , error : Get https://104.196.57.180/api/v1/namespaces/e2e-tests-kubectl-7uxyl/pods?labelSelector=app%3Dredis: dial tcp 104.196.57.180:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820a4ec90>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-proxy-r98em/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-proxy-r98em/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #35601

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
Expected error:
    <*url.Error | 0xc8208103f0>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/namespaces/e2e-tests-proxy-2ymib/endpoints",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/namespaces/e2e-tests-proxy-2ymib/endpoints: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:159

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820969170>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-pods-ybq79/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-pods-ybq79/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #33008

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc82056b3e0>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-services-bbl94/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-services-bbl94/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc82095b890>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-nettest-6q3o0/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-nettest-6q3o0/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820ce71a0>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-configmap-s2kzy/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-configmap-s2kzy/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #27079

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820af6210>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-emptydir-xvudf/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-emptydir-xvudf/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
Expected error:
    <*url.Error | 0xc820b6bce0>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/namespaces/e2e-tests-replicaset-tbo3y/pods?labelSelector=name%3Dmy-hostname-basic-94f45804-c118-11e6-a1f1-0242ac110003",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/namespaces/e2e-tests-replicaset-tbo3y/pods?labelSelector=name%3Dmy-hostname-basic-94f45804-c118-11e6-a1f1-0242ac110003: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1704

Issues about this test specifically: #30981

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:319
Expected error:
    <*url.Error | 0xc820a351a0>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/namespaces/e2e-tests-pods-iwzuo/pods/pod-update-97d42bc0-c118-11e6-98c0-0242ac110003",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/namespaces/e2e-tests-pods-iwzuo/pods/pod-update-97d42bc0-c118-11e6-98c0-0242ac110003: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:60

Issues about this test specifically: #35793

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820290c60>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-containers-r1sal/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-containers-r1sal/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #34520

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8208d51a0>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-pods-vwdgi/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/watch/namespaces/e2e-tests-pods-vwdgi/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #36564

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:266
failed to GET scheduled pod
Expected error:
    <*url.Error | 0xc820426000>: {
        Op: "Get",
        URL: "https://104.196.57.180/api/v1/namespaces/e2e-tests-pods-b1zzm/pods/pod-submit-remove-94f45dad-c118-11e6-93c1-0242ac110003",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc49\xb4",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.57.180/api/v1/namespaces/e2e-tests-pods-b1zzm/pods/pod-submit-remove-94f45dad-c118-11e6-93c1-0242ac110003: dial tcp 104.196.57.180:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:210

Issues about this test specifically: #26224 #34354

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2238/

Multiple broken tests:

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
Tail (99 percentile) latency should be less than 50s
50, 90, 99 percentiles: 6.019795996s 18.168062902s 1m20.290671304s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:112

Issues about this test specifically: #30632
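
This failure means the 99th-percentile service-endpoint latency (1m20s) blew past the 50s budget while the median stayed around 6s. For reference, the reported values are percentiles taken over the per-service latency samples; a small nearest-rank sketch is below (the exact rounding rule the test uses may differ).

package latency

import (
    "sort"
    "time"
)

// percentile returns the p-th percentile of the samples using a simple
// nearest-rank rule on a sorted copy; the test reports the 50th, 90th and
// 99th percentiles of endpoint-propagation latency in this fashion.
func percentile(samples []time.Duration, p float64) time.Duration {
    if len(samples) == 0 {
        return 0
    }
    sorted := append([]time.Duration(nil), samples...)
    sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
    idx := int(float64(len(sorted))*p/100+0.5) - 1
    if idx < 0 {
        idx = 0
    }
    if idx >= len(sorted) {
        idx = len(sorted) - 1
    }
    return sorted[idx]
}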

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.189.233 --kubeconfig=/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tupor] []  <nil>  Unable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout\n [] <nil> 0xc820e77a20 exit status 1 <nil> true [0xc820c501e8 0xc820c50210 0xc820c50228] [0xc820c501e8 0xc820c50210 0xc820c50228] [0xc820c50200 0xc820c50220] [0xafa830 0xafa830] 0xc8208a8d80}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.189.233 --kubeconfig=/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tupor] []  <nil>  Unable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout
     [] <nil> 0xc820e77a20 exit status 1 <nil> true [0xc820c501e8 0xc820c50210 0xc820c50228] [0xc820c501e8 0xc820c50210 0xc820c50228] [0xc820c50200 0xc820c50220] [0xafa830 0xafa830] 0xc8208a8d80}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820a9e480>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-proxy-r9sa0/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-proxy-r9sa0/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #35601

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <*errors.errorString | 0xc820c4d1a0>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1510

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820aa3bf0>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-container-probe-6i411/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-container-probe-6i411/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #38511

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc82016aba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8208fac00>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-emptydir-npqpp/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-emptydir-npqpp/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26780

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820bf9590>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-kubectl-v253d/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-kubectl-v253d/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc820ee2de0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-5bebee8c-c14c-11e6-9d99-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-5bebee8c-c14c-11e6-9d99-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #37071

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820840750>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-proxy-pmd6j/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-proxy-pmd6j/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #32089

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc820cb9010>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-8865b5f4-c14c-11e6-99fe-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-8865b5f4-c14c-11e6-99fe-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.189.233 --kubeconfig=/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ep5nm] []  <nil>  Unable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout\n [] <nil> 0xc82058e800 exit status 1 <nil> true [0xc820182008 0xc820182f48 0xc820182f80] [0xc820182008 0xc820182f48 0xc820182f80] [0xc820182f28 0xc820182f68] [0xafa830 0xafa830] 0xc82061e420}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.189.233 --kubeconfig=/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ep5nm] []  <nil>  Unable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout
     [] <nil> 0xc82058e800 exit status 1 <nil> true [0xc820182008 0xc820182f48 0xc820182f80] [0xc820182008 0xc820182f48 0xc820182f80] [0xc820182f28 0xc820182f68] [0xafa830 0xafa830] 0xc82061e420}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 104.196.189.233:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820b44000>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-secrets-4ju14/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-secrets-4ju14/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820e63bc0>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-nettest-0zh3u/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-nettest-0zh3u/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #32830

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820614720>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-p6ik3/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-p6ik3/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #27245

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820b8c000>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-services-jji0s/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-services-jji0s/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820ab4630>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-container-probe-uibbe/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-container-probe-uibbe/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #29521

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:319
Expected error:
    <*url.Error | 0xc8207fcdb0>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/namespaces/e2e-tests-pods-odmqg/pods/pod-update-859883ad-c14a-11e6-854b-0242ac110009",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/namespaces/e2e-tests-pods-odmqg/pods/pod-update-859883ad-c14a-11e6-854b-0242ac110009: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:60

Issues about this test specifically: #35793

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:452
Expected error:
    <*errors.errorString | 0xc8200ef7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #33985

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820e392f0>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-downward-api-ldd8i/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-downward-api-ldd8i/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8205f9710>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-kubectl-wp2dm/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-kubectl-wp2dm/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:266
failed to GET scheduled pod
Expected error:
    <*url.Error | 0xc8205e55c0>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/namespaces/e2e-tests-pods-9qiqi/pods/pod-submit-remove-85fd7d2d-c14a-11e6-9a9e-0242ac110009",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/namespaces/e2e-tests-pods-9qiqi/pods/pod-submit-remove-85fd7d2d-c14a-11e6-9a9e-0242ac110009: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:210

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8202529f0>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-6sao8/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-6sao8/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
Expected error:
    <*errors.errorString | 0xc8213636c0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:156

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:145
Dec 13 07:47:57.244: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-d14ro to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1353

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Dec 13 07:48:59.496: Failed to find expected endpoints:
Tries 0
Command curl -q -s --connect-timeout 1 http://10.48.1.17:8080/hostName
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:228

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820b7e930>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-downward-api-s9vwb/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-downward-api-s9vwb/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #31836

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820bb4120>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-zg2sr/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-zg2sr/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #29052

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:666
Dec 13 07:53:20.292: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc82055a240>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-igz40/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-configmap-igz40/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #34827

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820bb4930>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-container-probe-9aypu/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-container-probe-9aypu/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #30264

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc8205c4c80>: {
        s: "expected container test-container success: pod 'pod-09c631e6-c14b-11e6-99fe-0242ac110009' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-13 07:45:05 -0800 PST FinishedAt:2016-12-13 07:45:05 -0800 PST ContainerID:docker://1c90cd8a6712963a7654079e7c6f0421f4a67579b83c4956af9405a8f64c8619}",
    }
    expected container test-container success: pod 'pod-09c631e6-c14b-11e6-99fe-0242ac110009' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-13 07:45:05 -0800 PST FinishedAt:2016-12-13 07:45:05 -0800 PST ContainerID:docker://1c90cd8a6712963a7654079e7c6f0421f4a67579b83c4956af9405a8f64c8619}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #30851

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820b0a150>: {
        Op: "Get",
        URL: "https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-kubectl-3kb5c/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhĽ\xe9",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.189.233/api/v1/watch/namespaces/e2e-tests-kubectl-3kb5c/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.189.233:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #28420 #36122

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2249/

Multiple broken tests:

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8205ea5a0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-pods-z8r2y/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-pods-z8r2y/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #33985

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820b92ba0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-kfg5q/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-kfg5q/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #30851

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820a15050>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-proxy-sc8ux/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-proxy-sc8ux/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Dec 13 12:16:54.307: Couldn't create pod: Post https://104.196.145.144/api/v1/namespaces/e2e-tests-port-forwarding-5ln2z/pods: dial tcp 104.196.145.144:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:181

Issues about this test specifically: #26955

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:145
Dec 13 12:18:36.472: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-2m99q to expose endpoints map[pod1:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1353

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
Tail (99 percentile) latency should be less than 50s
50, 90, 99 percentiles: 2.89812123s 8.621660343s 1m51.159507096s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:112

Issues about this test specifically: #30632

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.errorString | 0xc820174ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #29710

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8208386c0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-jwc93/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-jwc93/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #33987

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820cb5260>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-configmap-77ah3/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-configmap-77ah3/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #35790

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:149
Dec 13 12:16:33.232: unable to create test secret : Post https://104.196.145.144/api/v1/namespaces/e2e-tests-secrets-trhc3/secrets: dial tcp 104.196.145.144:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:105

Issues about this test specifically: #35256

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820c5c8a0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-zgqpt/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-zgqpt/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #38511

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1212
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-owfpl run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820effbe0  Error from server: the server cannot complete the requested operation at this time, try again later\n [] <nil> 0xc8202204a0 exit status 1 <nil> true [0xc8204ba238 0xc8204ba3c8 0xc8204ba4a8] [0xc8204ba238 0xc8204ba3c8 0xc8204ba4a8] [0xc8204ba250 0xc8204ba310 0xc8204ba4a0] [0xafa6d0 0xafa830 0xafa830] 0xc820b9ce40}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-owfpl run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820effbe0  Error from server: the server cannot complete the requested operation at this time, try again later
     [] <nil> 0xc8202204a0 exit status 1 <nil> true [0xc8204ba238 0xc8204ba3c8 0xc8204ba4a8] [0xc8204ba238 0xc8204ba3c8 0xc8204ba4a8] [0xc8204ba250 0xc8204ba310 0xc8204ba4a0] [0xafa6d0 0xafa830 0xafa830] 0xc820b9ce40}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820be85d0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-em6ge/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-em6ge/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] HostPath should support subPath [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 13 12:25:42.395: Couldn't delete ns: "e2e-tests-hostpath-58yfk": Get https://104.196.145.144/api: dial tcp 104.196.145.144:443: i/o timeout (&url.Error{Op:"Get", URL:"https://104.196.145.144/api", Err:(*net.OpError)(0xc820bb2000)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #35628

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Dec 13 12:24:28.214: Failed to get pod : Get https://104.196.145.144/api/v1/namespaces/e2e-tests-dns-umiiv/pods/dns-test-0c1f6443-c172-11e6-9e4b-0242ac110003: dial tcp 104.196.145.144:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:241

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820d5a000>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-7stto/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-7stto/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820c3cb10>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-98ytc/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-98ytc/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26780

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820ab7170>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-z3325/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-z3325/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820786c00>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-downward-api-srz9c/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-downward-api-srz9c/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820600030>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-pods-rq7u2/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-pods-rq7u2/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #33008

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 13 12:27:13.724: Couldn't delete ns: "e2e-tests-nettest-2bhc2": Get https://104.196.145.144/api/v1/namespaces/e2e-tests-nettest-2bhc2/secrets: dial tcp 104.196.145.144:443: getsockopt: connection refused (&url.Error{Op:"Get", URL:"https://104.196.145.144/api/v1/namespaces/e2e-tests-nettest-2bhc2/secrets", Err:(*net.OpError)(0xc820b09220)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32830

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8205db470>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-services-7wvn4/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-services-7wvn4/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
Error creating Pod
Expected error:
    <*url.Error | 0xc82054a8a0>: {
        Op: "Post",
        URL: "https://104.196.145.144/api/v1/namespaces/e2e-tests-containers-74xop/pods",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Post https://104.196.145.144/api/v1/namespaces/e2e-tests-containers-74xop/pods: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #29467

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc82058db30>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-3ryrj/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-3ryrj/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820c3fb60>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-containers-h538c/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-containers-h538c/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #36706

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820b18180>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-r8mxu/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-r8mxu/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 13 12:27:11.251: All nodes should be ready after test, Get https://104.196.145.144/api/v1/nodes: dial tcp 104.196.145.144:443: getsockopt: connection refused
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:418

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cyq2j] []  <nil>  Unable to connect to the server: dial tcp 104.196.145.144:443: i/o timeout\n [] <nil> 0xc820c371c0 exit status 1 <nil> true [0xc8207e6098 0xc8207e60b0 0xc8207e60c8] [0xc8207e6098 0xc8207e60b0 0xc8207e60c8] [0xc8207e60a8 0xc8207e60c0] [0xafa830 0xafa830] 0xc820ce7740}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.196.145.144:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cyq2j] []  <nil>  Unable to connect to the server: dial tcp 104.196.145.144:443: i/o timeout
     [] <nil> 0xc820c371c0 exit status 1 <nil> true [0xc8207e6098 0xc8207e60b0 0xc8207e60c8] [0xc8207e6098 0xc8207e60b0 0xc8207e60c8] [0xc8207e60a8 0xc8207e60c0] [0xafa830 0xafa830] 0xc820ce7740}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 104.196.145.144:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50
Expected error:
    <*url.Error | 0xc82083b500>: {
        Op: "Post",
        URL: "https://104.196.145.144/api/v1/namespaces/e2e-tests-nettest-x6efh/pods",
        Err: {},
    }
    Post https://104.196.145.144/api/v1/namespaces/e2e-tests-nettest-x6efh/pods: net/http: TLS handshake timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*url.Error | 0xc820ab2ba0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/namespaces/e2e-tests-nettest-94tyd/pods/netserver-0",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/namespaces/e2e-tests-nettest-94tyd/pods/netserver-0: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:463

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820ce9110>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-fchzr/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-fchzr/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820554180>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-p3x4d/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-kubectl-p3x4d/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 13 12:25:42.406: Couldn't delete ns: "e2e-tests-emptydir-prp52": Get https://104.196.145.144/api: dial tcp 104.196.145.144:443: i/o timeout (&url.Error{Op:"Get", URL:"https://104.196.145.144/api", Err:(*net.OpError)(0xc82001abe0)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1263
Dec 13 12:16:52.806: Expected output from kubectl proxy: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1256

Issues about this test specifically: #35473

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820d8ab40>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-replication-controller-acgja/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-replication-controller-acgja/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820bc7aa0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-svcaccounts-x9ika/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-svcaccounts-x9ika/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #37526

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:344
Dec 13 12:16:50.438: Couldn't create pod: Post https://104.196.145.144/api/v1/namespaces/e2e-tests-port-forwarding-l11bq/pods: dial tcp 104.196.145.144:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:294

Issues about this test specifically: #27673

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:294
Dec 13 12:16:35.296: unable to create test secret : Post https://104.196.145.144/api/v1/namespaces/e2e-tests-secrets-r06ji/secrets: dial tcp 104.196.145.144:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:239

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:757
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-kqgdl] []  0xc820742620  Error from server: the server cannot complete the requested operation at this time, try again later\n [] <nil> 0xc820742e20 exit status 1 <nil> true [0xc8201ac250 0xc8201ac410 0xc8201ac4e8] [0xc8201ac250 0xc8201ac410 0xc8201ac4e8] [0xc8201ac290 0xc8201ac370 0xc8201ac438] [0xafa6d0 0xafa830 0xafa830] 0xc8207bf500}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-kqgdl] []  0xc820742620  Error from server: the server cannot complete the requested operation at this time, try again later
     [] <nil> 0xc820742e20 exit status 1 <nil> true [0xc8201ac250 0xc8201ac410 0xc8201ac4e8] [0xc8201ac250 0xc8201ac410 0xc8201ac4e8] [0xc8201ac290 0xc8201ac370 0xc8201ac438] [0xafa6d0 0xafa830 0xafa830] 0xc8207bf500}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820858450>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-containers-ft3ny/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-containers-ft3ny/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #34520

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820b29080>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-0wr5a/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-emptydir-0wr5a/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #37500

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8205d2090>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-lh8r6/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-lh8r6/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #29521

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-gen18 -o template --template={{if (exists . \"status\" \"containerStatuses\")}}{{range .status.containerStatuses}}{{if eq .name \"update-demo\"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-y6nws] []  <nil>  The connection to the server 104.196.145.144 was refused - did you specify the right host or port?\n [] <nil> 0xc820e6da40 exit status 1 <nil> true [0xc8204b2538 0xc8204b2558 0xc8204b2588] [0xc8204b2538 0xc8204b2558 0xc8204b2588] [0xc8204b2550 0xc8204b2578] [0xafa830 0xafa830] 0xc82038b800}:\nCommand stdout:\n\nstderr:\nThe connection to the server 104.196.145.144 was refused - did you specify the right host or port?\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.145.144 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-gen18 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-y6nws] []  <nil>  The connection to the server 104.196.145.144 was refused - did you specify the right host or port?
     [] <nil> 0xc820e6da40 exit status 1 <nil> true [0xc8204b2538 0xc8204b2558 0xc8204b2588] [0xc8204b2538 0xc8204b2558 0xc8204b2588] [0xc8204b2550 0xc8204b2578] [0xafa830 0xafa830] 0xc82038b800}:
    Command stdout:
    
    stderr:
    The connection to the server 104.196.145.144 was refused - did you specify the right host or port?
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Error creating Pod
Expected error:
    <*url.Error | 0xc820798a80>: {
        Op: "Post",
        URL: "https://104.196.145.144/api/v1/namespaces/e2e-tests-downward-api-744nn/pods",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Post https://104.196.145.144/api/v1/namespaces/e2e-tests-downward-api-744nn/pods: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #37423

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc82008d3e0>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-configmap-p3qzl/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-configmap-p3qzl/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #32949

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1185
Dec 13 12:16:53.352: Failed to get server version: Unable to get server version: Get https://104.196.145.144/version: dial tcp 104.196.145.144:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:413

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8207ad650>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-secrets-o7dv7/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-secrets-o7dv7/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #31969

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8201b8f30>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-nettest-iqjiw/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-nettest-iqjiw/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #32375

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:160
Timed out after 314.302s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*url.Error>: &url.Error{Op:"Get", URL:"https://104.196.145.144/api/v1/namespaces/e2e-tests-configmap-rryoc/pods/pod-configmaps-eb52e26b-c171-11e6-8453-0242ac110003/log?container=configmap-volume-test&previous=false", Err:(*net.OpError)(0xc820880050)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:159

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820a0f800>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-nettest-gpfe9/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-nettest-gpfe9/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #26838 #36165

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8208aca50>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-pods-grsll/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-pods-grsll/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #36649

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:69
Expected error:
    <*url.Error | 0xc820f6b620>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/nodes/gke-bootstrap-e2e-default-pool-cb53ee05-1r4d:4194/proxy/containers/",
        Err: {
            Op: "read",
            Net: "tcp",
            Source: {
                IP: "\xac\x11\x00\x03",
                Port: 37603,
                Zone: "",
            },
            Addr: {IP: "hđ\x90", Port: 443, Zone: ""},
            Err: {Syscall: "read", Err: 0x68},
        },
    }
    Get https://104.196.145.144/api/v1/nodes/gke-bootstrap-e2e-default-pool-cb53ee05-1r4d:4194/proxy/containers/: read tcp 172.17.0.3:37603->104.196.145.144:443: read: connection reset by peer
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:332

Issues about this test specifically: #32089

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc8207c1c20>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-msjs4/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-msjs4/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #30264

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820242f30>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-ne08t/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-container-probe-ne08t/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #37914

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc82098c540>: {
        Op: "Get",
        URL: "https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-replicaset-lhy5f/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhđ\x90",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.145.144/api/v1/watch/namespaces/e2e-tests-replicaset-lhy5f/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.145.144:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #30981

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2252/

Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820258280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-i2r65/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-i2r65/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-i2r65/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:913
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.221.109 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9uczn] []  <nil>  Scaling the resource failed with: Put https://104.196.221.109/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-9uczn/replicasets/e2e-test-nginx-deployment-4272272891: read tcp 172.17.0.2:54360->104.196.221.109:443: read: connection timed out; Current resource version 2372\n [] <nil> 0xc82065a820 exit status 1 <nil> true [0xc8200e64e0 0xc8200e64f8 0xc8200e6510] [0xc8200e64e0 0xc8200e64f8 0xc8200e6510] [0xc8200e64f0 0xc8200e6508] [0xafa830 0xafa830] 0xc820cf5740}:\nCommand stdout:\n\nstderr:\nScaling the resource failed with: Put https://104.196.221.109/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-9uczn/replicasets/e2e-test-nginx-deployment-4272272891: read tcp 172.17.0.2:54360->104.196.221.109:443: read: connection timed out; Current resource version 2372\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.221.109 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9uczn] []  <nil>  Scaling the resource failed with: Put https://104.196.221.109/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-9uczn/replicasets/e2e-test-nginx-deployment-4272272891: read tcp 172.17.0.2:54360->104.196.221.109:443: read: connection timed out; Current resource version 2372
     [] <nil> 0xc82065a820 exit status 1 <nil> true [0xc8200e64e0 0xc8200e64f8 0xc8200e6510] [0xc8200e64e0 0xc8200e64f8 0xc8200e6510] [0xc8200e64f0 0xc8200e6508] [0xafa830 0xafa830] 0xc820cf5740}:
    Command stdout:
    
    stderr:
    Scaling the resource failed with: Put https://104.196.221.109/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-9uczn/replicasets/e2e-test-nginx-deployment-4272272891: read tcp 172.17.0.2:54360->104.196.221.109:443: read: connection timed out; Current resource version 2372
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d14380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-var-expansion-zmsw6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-zmsw6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-zmsw6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2296/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
Dec 14 04:48:33.748: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2125

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc82017cb90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:361

Issues about this test specifically: #35283 #36867
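
"timed out waiting for the condition" is the generic error the e2e polling utilities return when a predicate (here, the netcheck pods reporting in) never becomes true before the deadline, so the message itself says nothing about which condition failed. A self-contained sketch of that polling pattern, assuming nothing from the framework beyond the error string:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errTimedOut = errors.New("timed out waiting for the condition")

    // pollUntil retries condition every interval until it returns true,
    // returns an error, or the timeout elapses.
    func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ok, err := condition()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            time.Sleep(interval)
        }
        return errTimedOut
    }

    func main() {
        // A condition that never succeeds reproduces the failure mode seen above.
        err := pollUntil(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
            return false, nil
        })
        fmt.Println(err) // timed out waiting for the condition
    }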

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233
Dec 14 04:44:14.266: pod e2e-tests-container-probe-0e920/liveness-http - expected number of restarts: 0, found restarts: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:373

Issues about this test specifically: #30342 #31350
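
The probe cases reduce to a single assertion at the end of the observation window: each container's restart count must equal the expected value (zero for the "should not be restarted" variants). A stand-alone sketch of that check, with a hypothetical containerStatus type standing in for the real API object:

    package main

    import "fmt"

    type containerStatus struct {
        Name         string
        RestartCount int32
    }

    // checkRestarts returns an error in the same shape as the failure message above
    // when any container's restart count differs from the expectation.
    func checkRestarts(pod string, statuses []containerStatus, expected int32) error {
        for _, cs := range statuses {
            if cs.RestartCount != expected {
                return fmt.Errorf("pod %s/%s - expected number of restarts: %d, found restarts: %d",
                    pod, cs.Name, expected, cs.RestartCount)
            }
        }
        return nil
    }

    func main() {
        statuses := []containerStatus{{Name: "liveness-http", RestartCount: 1}}
        fmt.Println(checkRestarts("e2e-tests-container-probe-0e920", statuses, 0))
    }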

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:229
Dec 14 04:44:32.876: Timed out waiting for service multi-endpoint-test in namespace e2e-tests-services-vfpxv to expose endpoints map[pod1:[100]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1353

Issues about this test specifically: #29831

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc820d3db50>: {
        s: "expected container test-container success: pod 'pod-6db7cc60-c1fa-11e6-b528-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:27 -0800 PST FinishedAt:2016-12-14 04:42:27 -0800 PST ContainerID:docker://99746219990daa1aa2b3afee8b59b374c93470c2ac40b35d7590cdf64c37f32e}",
    }
    expected container test-container success: pod 'pod-6db7cc60-c1fa-11e6-b528-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:27 -0800 PST FinishedAt:2016-12-14 04:42:27 -0800 PST ContainerID:docker://99746219990daa1aa2b3afee8b59b374c93470c2ac40b35d7590cdf64c37f32e}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #37500

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:160
Expected error:
    <*errors.errorString | 0xc8201a6760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:54
Expected error:
    <*errors.errorString | 0xc820827e00>: {
        s: "expected container configmap-volume-test success: pod 'pod-configmaps-6e4815e2-c1fa-11e6-b2e4-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:17 -0800 PST FinishedAt:2016-12-14 04:42:17 -0800 PST ContainerID:docker://dc75d0ca9e6549366124a9de79607c15cf9b7cf544ef3a483ce7a0df836daf28}",
    }
    expected container configmap-volume-test success: pod 'pod-configmaps-6e4815e2-c1fa-11e6-b2e4-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:17 -0800 PST FinishedAt:2016-12-14 04:42:17 -0800 PST ContainerID:docker://dc75d0ca9e6549366124a9de79607c15cf9b7cf544ef3a483ce7a0df836daf28}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #32949

Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:215
Expected error:
    <*errors.errorString | 0xc82061a600>: {
        s: "expected container secret-volume-test success: pod 'pod-secrets-6e7ba466-c1fa-11e6-8d09-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:28 -0800 PST FinishedAt:2016-12-14 04:42:28 -0800 PST ContainerID:docker://7d7862cbf631193cd57012531f11180e93ad859addcfd91bdae9f42644136454}",
    }
    expected container secret-volume-test success: pod 'pod-secrets-6e7ba466-c1fa-11e6-8d09-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:28 -0800 PST FinishedAt:2016-12-14 04:42:28 -0800 PST ContainerID:docker://7d7862cbf631193cd57012531f11180e93ad859addcfd91bdae9f42644136454}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #31969

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Dec 14 04:51:41.328: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1518

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
    <*errors.errorString | 0xc820954640>: {
        s: "expected container test-container success: pod 'client-containers-6e659328-c1fa-11e6-a211-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:33 -0800 PST FinishedAt:2016-12-14 04:42:33 -0800 PST ContainerID:docker://af9ad64aede33024c3dcc2a20dc92f2de16c10af946b7e1bb261328447d30d2a}",
    }
    expected container test-container success: pod 'client-containers-6e659328-c1fa-11e6-a211-0242ac110003' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 04:42:33 -0800 PST FinishedAt:2016-12-14 04:42:33 -0800 PST ContainerID:docker://af9ad64aede33024c3dcc2a20dc92f2de16c10af946b7e1bb261328447d30d2a}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #29994

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc820ae6bc0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-db906111-c1fa-11e6-a329-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-db906111-c1fa-11e6-a329-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
9: path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/pods/proxy-service-r2gs9-sdii6:162/ took 31.21567548s > 30s
9: path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname2/ took 44.912910159s > 30s
10 (0; 39.050923ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 47.037239ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname2" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 48.416033ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 49.603655ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 49.782467ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 49.966332ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname1" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 50.080229ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 51.3435ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 53.298069ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:81" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 54.611769ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname1" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 54.849924ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 57.519774ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:80" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 57.766496ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:444/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:444" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 57.683571ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:443/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:443" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 57.973478ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:81" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 58.598133ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 109.446856ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname2" Reason:ServiceUnavailable Details:nil Code:503}
10 (0; 113.785454ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:80" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 49.315218ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname2" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 49.477827ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 50.065691ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 54.750349ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 55.275839ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname1" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 55.662302ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 57.727201ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname1" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 58.975287ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:81" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 63.912906ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 65.842987ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 69.648187ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:81" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 69.807783ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:443/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:443" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 69.98265ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:80" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 70.61633ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/http:proxy-service-r2gs9:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-r2gs9:80" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 196.604621ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname1" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 209.391214ms): path /api/v1/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:tlsportname2" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 213.071812ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/proxy-service-r2gs9:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-r2gs9:portname2" Reason:ServiceUnavailable Details:nil Code:503}
11 (0; 214.842902ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-jbjip/services/https:proxy-service-r2gs9:444/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-r2gs9:444" Reason:ServiceUnavailable Details:nil Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:278

Issues about this test specifically: #26164 #26210 #33998 #37158
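
The proxy test hits every endpoint through two equivalent path shapes, both visible in the listing above: the legacy /api/v1/proxy/namespaces/<ns>/... prefix and the /.../proxy/ subresource form. A small illustrative helper (assumed names, not the test's own code) that builds the two forms for a named service port:

    package main

    import "fmt"

    // proxyPaths returns the legacy proxy path and the proxy-subresource path
    // for the given namespace, service, and port name or number.
    func proxyPaths(namespace, service, port string) (legacy, subresource string) {
        legacy = fmt.Sprintf("/api/v1/proxy/namespaces/%s/services/%s:%s/", namespace, service, port)
        subresource = fmt.Sprintf("/api/v1/namespaces/%s/services/%s:%s/proxy/", namespace, service, port)
        return
    }

    func main() {
        l, s := proxyPaths("e2e-tests-proxy-jbjip", "proxy-service-r2gs9", "portname2")
        fmt.Println(l) // legacy prefix form
        fmt.Println(s) // subresource form
    }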

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2322/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc8200e97c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:361

Issues about this test specifically: #32375

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc8207aa160>: {
        s: "expected container test-container success: pod 'pod-b253867c-c245-11e6-8457-0242ac110004' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 13:41:34 -0800 PST FinishedAt:2016-12-14 13:41:34 -0800 PST ContainerID:docker://bd7a8873b1080d28bbbfcc5dba4994489a8120d300d56f6cfbf848ed2391d904}",
    }
    expected container test-container success: pod 'pod-b253867c-c245-11e6-8457-0242ac110004' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 13:41:34 -0800 PST FinishedAt:2016-12-14 13:41:34 -0800 PST ContainerID:docker://bd7a8873b1080d28bbbfcc5dba4994489a8120d300d56f6cfbf848ed2391d904}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #31400

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:220
Dec 14 13:41:22.234: Pod did not start running: pod ran to completion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:184

Issues about this test specifically: #26955

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc820174ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:266
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:255

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:104
Expected error:
    <*errors.errorString | 0xc8200ef7d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc8200e57c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
    <*errors.errorString | 0xc8209ae6e0>: {
        s: "expected container test-container success: pod 'client-containers-b272bc11-c245-11e6-85b2-0242ac110004' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 13:41:32 -0800 PST FinishedAt:2016-12-14 13:41:32 -0800 PST ContainerID:docker://74d6f17e5deebc3368999b4967704a46b787ec1b6f2d4f272f60025bdbd8704e}",
    }
    expected container test-container success: pod 'client-containers-b272bc11-c245-11e6-85b2-0242ac110004' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 13:41:32 -0800 PST FinishedAt:2016-12-14 13:41:32 -0800 PST ContainerID:docker://74d6f17e5deebc3368999b4967704a46b787ec1b6f2d4f272f60025bdbd8704e}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #29994

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:145
Dec 14 13:42:41.862: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-2gg22 to expose endpoints map[pod1:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1353

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
0 (0; 19.004709812s): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/https:proxy-service-g265q-5inmz:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:443/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 10.059565111s): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 10.064101985s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 10.108554614s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:460/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 10.108620247s): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/https:proxy-service-g265q-5inmz:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:443/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 10.179829491s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 10.187860562s): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:460/' }],RetryAfterSeconds:0,} Code:503}
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/pods/http:proxy-service-g265q-5inmz:1080/ took 34.776187503s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:81/ took 36.540294913s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:portname2/proxy/ took 36.540243499s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:portname2/ took 36.540642603s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/https:proxy-service-g265q-5inmz:460/proxy/ took 36.562782425s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/pods/proxy-service-g265q-5inmz:162/ took 36.575222773s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:81/ took 36.580689635s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/pods/http:proxy-service-g265q-5inmz:162/ took 36.580343779s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/http:proxy-service-g265q-5inmz:160/proxy/ took 36.621335311s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/proxy-service-g265q-5inmz:162/proxy/ took 36.621499576s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/https:proxy-service-g265q-5inmz:462/proxy/ took 36.629135045s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname1/ took 37.011141638s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/pods/proxy-service-g265q-5inmz:160/ took 37.036865297s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:80/ took 37.036852398s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/proxy-service-g265q-5inmz:160/proxy/ took 37.038008146s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/pods/http:proxy-service-g265q-5inmz:160/ took 37.084276294s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:portname1/proxy/ took 37.084392798s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/pods/proxy-service-g265q-5inmz:1080/ took 37.084544644s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/proxy-service-g265q-5inmz/proxy/ took 41.32336983s > 30s
1: path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:portname1/proxy/ took 42.17190356s > 30s
1: path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:80/ took 42.500755138s > 30s
2 (0; 47.902256ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-g265q:portname2" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 49.820485ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:portname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-g265q:portname2" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 51.218094ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-g265q:portname1" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 51.514924ms): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-g265q:portname2" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 52.289609ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-g265q:tlsportname1" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 54.295786ms): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-g265q:tlsportname2" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 54.923093ms): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-g265q:tlsportname1" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 55.203663ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname2/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-g265q:tlsportname2" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 138.185572ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-g265q:80" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 138.464196ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:80/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-g265q:80" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 146.716812ms): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:portname2/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-g265q:portname2" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 194.9194ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-g265q:81" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 196.915393ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:81/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-g265q:81" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 198.660959ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:444/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-g265q:444" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 199.656492ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:portname1/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-g265q:portname1" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 201.770649ms): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/http:proxy-service-g265q:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "http:proxy-service-g265q:portname1" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 202.417123ms): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/proxy-service-g265q:portname1/proxy/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "proxy-service-g265q:portname1" Reason:ServiceUnavailable Details:nil Code:503}
2 (0; 204.44931ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:443/ gave status error: {TypeMeta:{Kind:Status APIVersion:v1} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:no endpoints available for service "https:proxy-service-g265q:443" Reason:ServiceUnavailable Details:nil Code:503}
3 (0; 10.112720827s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 10.170450667s): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/pods/https:proxy-service-g265q-5inmz:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
3 (0; 10.204904769s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
17 (0; 10.11243147s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:460/' }],RetryAfterSeconds:0,} Code:503}
17 (0; 10.17608293s): path /api/v1/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:460/' }],RetryAfterSeconds:0,} Code:503}
17 (0; 10.194231772s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
17 (0; 10.214298955s): path /api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://10.48.2.26:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'net/http: TLS handshake timeout'
Trying to reach: 'https://10.48.2.26:462/' }],RetryAfterSeconds:0,} Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:278

Issues about this test specifically: #26164 #26210 #33998 #37158
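
For context on the failure mode above: the proxy conformance test simply issues GETs through the apiserver proxy and flags any request that errors or takes longer than 30s. A minimal stand-alone probe of one of these paths is sketched below — the apiserver address and bearer token are placeholders (not values from this run); the path is one of the failing paths from the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Placeholders: substitute a real apiserver address and bearer token.
        const apiserver = "https://APISERVER"
        const token = "BEARER_TOKEN"
        const path = "/api/v1/proxy/namespaces/e2e-tests-proxy-4o9n2/services/https:proxy-service-g265q:tlsportname2/"

        client := &http.Client{
            Timeout: 30 * time.Second, // the e2e test flags any request slower than 30s
            Transport: &http.Transport{
                TLSClientConfig:     &tls.Config{InsecureSkipVerify: true}, // test clusters use self-signed certs
                TLSHandshakeTimeout: 10 * time.Second,                      // comparable to the ~10s handshake timeouts above
            },
        }

        req, err := http.NewRequest("GET", apiserver+path, nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+token)

        start := time.Now()
        resp, err := client.Do(req)
        elapsed := time.Since(start)
        if err != nil {
            fmt.Printf("request failed after %v: %v\n", elapsed, err)
            return
        }
        defer resp.Body.Close()
        fmt.Printf("HTTP %d in %v\n", resp.StatusCode, elapsed)
    }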

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <*errors.errorString | 0xc8206560d0>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1510

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc820229d80>: {
        s: "expected container test-container-1 success: pod 'pod-host-path-test' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 13:41:40 -0800 PST FinishedAt:2016-12-14 13:41:40 -0800 PST ContainerID:docker://9ee520f4c681be4d6c0ae6049350fc0abadd7989e57507586c9db13a7556d93d}",
    }
    expected container test-container-1 success: pod 'pod-host-path-test' terminated with failure: &{ExitCode:128 Signal:0 Reason:ContainerCannotRun Message: StartedAt:2016-12-14 13:41:40 -0800 PST FinishedAt:2016-12-14 13:41:40 -0800 PST ContainerID:docker://9ee520f4c681be4d6c0ae6049350fc0abadd7989e57507586c9db13a7556d93d}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #32122 #38040

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2368/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Dec 15 05:05:09.612: Failed to create netserver-0 pod: the server cannot complete the requested operation at this time, try again later (post pods)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:491

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.StatusError | 0xc82093d700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (get pods netserver-0)",
            Reason: "ServerTimeout",
            Details: {
                Name: "netserver-0",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (get pods netserver-0)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:463

Issues about this test specifically: #35283 #36867
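
The 504s above are the apiserver's generic ServerTimeout ("try again later") response. As a rough illustration of how a caller can tolerate such transient failures, here is a retry-with-backoff sketch using a simulated error; the real tests lean on the framework's and client-go's own helpers (e.g. the IsServerTimeout check in the API errors package), so this is illustrative only.

    package main

    import (
        "fmt"
        "time"
    )

    // errServerTimeout mimics the apiserver's 504 "try again later" condition seen above.
    var errServerTimeout = fmt.Errorf("the server cannot complete the requested operation at this time, try again later")

    // retryOnServerTimeout retries fn with exponential backoff while it keeps
    // returning the retryable error. A sketch only, not the framework's helper.
    func retryOnServerTimeout(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err != errServerTimeout {
                return err // success (nil) or a non-retryable error
            }
            time.Sleep(base << uint(i)) // 1x, 2x, 4x ... the base delay
        }
        return err
    }

    func main() {
        calls := 0
        err := retryOnServerTimeout(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errServerTimeout // simulate two transient 504s
            }
            return nil
        })
        fmt.Printf("succeeded after %d calls, err=%v\n", calls, err)
    }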

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc820884290>: {
        s: "failed to get logs from pod-f6bc8121-c2c6-11e6-932d-0242ac11000a for test-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-zhm0t/pods/pod-f6bc8121-c2c6-11e6-932d-0242ac11000a/log?container=test-container&previous=false\\\"\") has prevented the request from succeeding (get pods pod-f6bc8121-c2c6-11e6-932d-0242ac11000a)",
    }
    failed to get logs from pod-f6bc8121-c2c6-11e6-932d-0242ac11000a for test-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-zhm0t/pods/pod-f6bc8121-c2c6-11e6-932d-0242ac11000a/log?container=test-container&previous=false\"") has prevented the request from succeeding (get pods pod-f6bc8121-c2c6-11e6-932d-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #30851

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc820b94200>: {
        s: "failed to get logs from pod-f6c0e8a4-c2c6-11e6-b9f9-0242ac11000a for test-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-r8v5i/pods/pod-f6c0e8a4-c2c6-11e6-b9f9-0242ac11000a/log?container=test-container&previous=false\\\"\") has prevented the request from succeeding (get pods pod-f6c0e8a4-c2c6-11e6-b9f9-0242ac11000a)",
    }
    failed to get logs from pod-f6c0e8a4-c2c6-11e6-b9f9-0242ac11000a for test-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-r8v5i/pods/pod-f6c0e8a4-c2c6-11e6-b9f9-0242ac11000a/log?container=test-container&previous=false\"") has prevented the request from succeeding (get pods pod-f6c0e8a4-c2c6-11e6-b9f9-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #37500

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:929
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-bpzzg] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post deployments.extensions)\n [] <nil> 0xc820994340 exit status 1 <nil> true [0xc820988768 0xc820988780 0xc820988798] [0xc820988768 0xc820988780 0xc820988798] [0xc820988778 0xc820988790] [0xafa570 0xafa570] 0xc820aa3e60}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later (post deployments.extensions)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-bpzzg] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post deployments.extensions)
     [] <nil> 0xc820994340 exit status 1 <nil> true [0xc820988768 0xc820988780 0xc820988798] [0xc820988768 0xc820988780 0xc820988798] [0xc820988778 0xc820988790] [0xafa570 0xafa570] 0xc820aa3e60}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later (post deployments.extensions)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:88
Expected error:
    <*errors.errorString | 0xc820b861d0>: {
        s: "failed to get logs from pod-secrets-f7466ec0-c2c6-11e6-b391-0242ac11000a for secret-volume-test: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-secrets-jla69/pods/pod-secrets-f7466ec0-c2c6-11e6-b391-0242ac11000a/log?container=secret-volume-test&previous=false\\\"\") has prevented the request from succeeding (get pods pod-secrets-f7466ec0-c2c6-11e6-b391-0242ac11000a)",
    }
    failed to get logs from pod-secrets-f7466ec0-c2c6-11e6-b391-0242ac11000a for secret-volume-test: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-secrets-jla69/pods/pod-secrets-f7466ec0-c2c6-11e6-b391-0242ac11000a/log?container=secret-volume-test&previous=false\"") has prevented the request from succeeding (get pods pod-secrets-f7466ec0-c2c6-11e6-b391-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #29221

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:793
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-d8nj1] []  0xc82061ca40  Error from server: the server cannot complete the requested operation at this time, try again later\n [] <nil> 0xc82061d240 exit status 1 <nil> true [0xc820c72000 0xc820c72028 0xc820c72038] [0xc820c72000 0xc820c72028 0xc820c72038] [0xc820c72008 0xc820c72020 0xc820c72030] [0xafa410 0xafa570 0xafa570] 0xc820590420}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-d8nj1] []  0xc82061ca40  Error from server: the server cannot complete the requested operation at this time, try again later
     [] <nil> 0xc82061d240 exit status 1 <nil> true [0xc820c72000 0xc820c72028 0xc820c72038] [0xc820c72000 0xc820c72028 0xc820c72038] [0xc820c72008 0xc820c72020 0xc820c72030] [0xafa410 0xafa570 0xafa570] 0xc820590420}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-kpbrg] []  0xc820833f40  Error from server: the server cannot complete the requested operation at this time, try again later\n [] <nil> 0xc820990720 exit status 1 <nil> true [0xc82017cc08 0xc82017cc30 0xc82017cc40] [0xc82017cc08 0xc82017cc30 0xc82017cc40] [0xc82017cc10 0xc82017cc28 0xc82017cc38] [0xafa410 0xafa570 0xafa570] 0xc820c51740}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-kpbrg] []  0xc820833f40  Error from server: the server cannot complete the requested operation at this time, try again later
     [] <nil> 0xc820990720 exit status 1 <nil> true [0xc82017cc08 0xc82017cc30 0xc82017cc40] [0xc82017cc08 0xc82017cc30 0xc82017cc40] [0xc82017cc10 0xc82017cc28 0xc82017cc38] [0xafa410 0xafa570 0xafa570] 0xc820c51740}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc820826c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (post pods)",
            Reason: "ServerTimeout",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #37071

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc820ac81e0>: {
        s: "failed to get logs from downwardapi-volume-f6bc6859-c2c6-11e6-92ac-0242ac11000a for client-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-l1zvk/pods/downwardapi-volume-f6bc6859-c2c6-11e6-92ac-0242ac11000a/log?container=client-container&previous=false\\\"\") has prevented the request from succeeding (get pods downwardapi-volume-f6bc6859-c2c6-11e6-92ac-0242ac11000a)",
    }
    failed to get logs from downwardapi-volume-f6bc6859-c2c6-11e6-92ac-0242ac11000a for client-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-l1zvk/pods/downwardapi-volume-f6bc6859-c2c6-11e6-92ac-0242ac11000a/log?container=client-container&previous=false\"") has prevented the request from succeeding (get pods downwardapi-volume-f6bc6859-c2c6-11e6-92ac-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Issues about this test specifically: #36300

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1212
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-89ekt run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820c033c0  Error from server: the server cannot complete the requested operation at this time, try again later (get pods)\n [] <nil> 0xc820c03d20 exit status 1 <nil> true [0xc820ae0008 0xc820ae0030 0xc820ae0040] [0xc820ae0008 0xc820ae0030 0xc820ae0040] [0xc820ae0010 0xc820ae0028 0xc820ae0038] [0xafa410 0xafa570 0xafa570] 0xc8203f48a0}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later (get pods)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-89ekt run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820c033c0  Error from server: the server cannot complete the requested operation at this time, try again later (get pods)
     [] <nil> 0xc820c03d20 exit status 1 <nil> true [0xc820ae0008 0xc820ae0030 0xc820ae0040] [0xc820ae0008 0xc820ae0030 0xc820ae0040] [0xc820ae0010 0xc820ae0028 0xc820ae0038] [0xafa410 0xafa570 0xafa570] 0xc8203f48a0}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later (get pods)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 15 05:05:14.649: Couldn't delete ns: "e2e-tests-services-qowyw": the server cannot complete the requested operation at this time, try again later (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (get horizontalpodautoscalers.autoscaling)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc8209be690), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc820b15f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (post pods)",
            Reason: "ServerTimeout",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #34226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1185
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=gcr.io/google_containers/nginx-slim:0.7 --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6le22] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post pods)\n [] <nil> 0xc820b277c0 exit status 1 <nil> true [0xc8200382a0 0xc8200382b8 0xc8200382d0] [0xc8200382a0 0xc8200382b8 0xc8200382d0] [0xc8200382b0 0xc8200382c8] [0xafa570 0xafa570] 0xc82080bc80}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later (post pods)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=gcr.io/google_containers/nginx-slim:0.7 --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6le22] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post pods)
     [] <nil> 0xc820b277c0 exit status 1 <nil> true [0xc8200382a0 0xc8200382b8 0xc8200382d0] [0xc8200382a0 0xc8200382b8 0xc8200382d0] [0xc8200382b0 0xc8200382c8] [0xafa570 0xafa570] 0xc82080bc80}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later (post pods)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:73
Expected error:
    <*errors.StatusError | 0xc82027b780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server cannot complete the requested operation at this time, try again later (get pods test-webserver-f6bc3d8e-c2c6-11e6-833d-0242ac11000a)",
            Reason: "ServerTimeout",
            Details: {
                Name: "test-webserver-f6bc3d8e-c2c6-11e6-833d-0242ac11000a",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The  operation against  could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
    the server cannot complete the requested operation at this time, try again later (get pods test-webserver-f6bc3d8e-c2c6-11e6-833d-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53

Issues about this test specifically: #29521

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:63
Expected error:
    <*errors.errorString | 0xc820bea380>: {
        s: "failed to get logs from pod-configmaps-f73ca9eb-c2c6-11e6-bc4e-0242ac11000a for configmap-volume-test: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-0z16t/pods/pod-configmaps-f73ca9eb-c2c6-11e6-bc4e-0242ac11000a/log?container=configmap-volume-test&previous=false\\\"\") has prevented the request from succeeding (get pods pod-configmaps-f73ca9eb-c2c6-11e6-bc4e-0242ac11000a)",
    }
    failed to get logs from pod-configmaps-f73ca9eb-c2c6-11e6-bc4e-0242ac11000a for configmap-volume-test: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-0z16t/pods/pod-configmaps-f73ca9eb-c2c6-11e6-bc4e-0242ac11000a/log?container=configmap-volume-test&previous=false\"") has prevented the request from succeeding (get pods pod-configmaps-f73ca9eb-c2c6-11e6-bc4e-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2307

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected error:
    <*errors.StatusError | 0xc820a38100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/proxy/nodes/gke-bootstrap-e2e-default-pool-c889156c-7muu:10250/logs/\\\"\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/proxy/nodes/gke-bootstrap-e2e-default-pool-c889156c-7muu:10250/logs/\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/proxy/nodes/gke-bootstrap-e2e-default-pool-c889156c-7muu:10250/logs/\"") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:332

Issues about this test specifically: #32936

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1137
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-gv9a6] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post pods)\n [] <nil> 0xc820811d60 exit status 1 <nil> true [0xc820093128 0xc820093140 0xc820093158] [0xc820093128 0xc820093140 0xc820093158] [0xc820093138 0xc820093150] [0xafa570 0xafa570] 0xc820c38d80}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later (post pods)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-gv9a6] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post pods)
     [] <nil> 0xc820811d60 exit status 1 <nil> true [0xc820093128 0xc820093140 0xc820093158] [0xc820093128 0xc820093140 0xc820093158] [0xc820093138 0xc820093150] [0xafa570 0xafa570] 0xc820c38d80}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later (post pods)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 15 05:05:09.339: Couldn't delete ns: "e2e-tests-var-expansion-1xa5y": the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-var-expansion-1xa5y) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server cannot complete the requested operation at this time, try again later (delete namespaces e2e-tests-var-expansion-1xa5y)", Reason:"ServerTimeout", Details:(*unversioned.StatusDetails)(0xc820d04820), Code:504}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-5hksv] []  0xc820b56a60  Error from server: the server cannot complete the requested operation at this time, try again later\n [] <nil> 0xc820b57260 exit status 1 <nil> true [0xc8209a2000 0xc8209a2028 0xc8209a2038] [0xc8209a2000 0xc8209a2028 0xc8209a2038] [0xc8209a2008 0xc8209a2020 0xc8209a2030] [0xafa410 0xafa570 0xafa570] 0xc820b60c00}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-5hksv] []  0xc820b56a60  Error from server: the server cannot complete the requested operation at this time, try again later
     [] <nil> 0xc820b57260 exit status 1 <nil> true [0xc8209a2000 0xc8209a2028 0xc8209a2038] [0xc8209a2000 0xc8209a2028 0xc8209a2038] [0xc8209a2008 0xc8209a2020 0xc8209a2030] [0xafa410 0xafa570 0xafa570] 0xc820b60c00}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-2kd37] []  0xc820b849e0  Error from server: the server cannot complete the requested operation at this time, try again later\n [] <nil> 0xc820b851e0 exit status 1 <nil> true [0xc82016ea68 0xc82016ea90 0xc82016eaa0] [0xc82016ea68 0xc82016ea90 0xc82016eaa0] [0xc82016ea70 0xc82016ea88 0xc82016ea98] [0xafa410 0xafa570 0xafa570] 0xc820525d40}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-2kd37] []  0xc820b849e0  Error from server: the server cannot complete the requested operation at this time, try again later
     [] <nil> 0xc820b851e0 exit status 1 <nil> true [0xc82016ea68 0xc82016ea90 0xc82016eaa0] [0xc82016ea68 0xc82016ea90 0xc82016eaa0] [0xc82016ea70 0xc82016ea88 0xc82016ea98] [0xafa410 0xafa570 0xafa570] 0xc820525d40}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:79

Issues about this test specifically: #28346

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1104
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-bbppc] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post jobs.batch)\n [] <nil> 0xc82078dc20 exit status 1 <nil> true [0xc820512008 0xc820512020 0xc820512038] [0xc820512008 0xc820512020 0xc820512038] [0xc820512018 0xc820512030] [0xafa570 0xafa570] 0xc820a1a120}:\nCommand stdout:\n\nstderr:\nError from server: the server cannot complete the requested operation at this time, try again later (post jobs.batch)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.188.19 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-bbppc] []  <nil>  Error from server: the server cannot complete the requested operation at this time, try again later (post jobs.batch)
     [] <nil> 0xc82078dc20 exit status 1 <nil> true [0xc820512008 0xc820512020 0xc820512038] [0xc820512008 0xc820512020 0xc820512038] [0xc820512018 0xc820512030] [0xafa570 0xafa570] 0xc820a1a120}:
    Command stdout:
    
    stderr:
    Error from server: the server cannot complete the requested operation at this time, try again later (post jobs.batch)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2207

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2399/

Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected
    <time.Duration>: 58253876431
to be <
    <time.Duration>: 30000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:334

Issues about this test specifically: #32936

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc82000ec30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:127

Issues about this test specifically: #28346

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:235
Dec 15 16:04:45.167: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:199

Issues about this test specifically: #26955

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:188
Expected error:
    <*errors.errorString | 0xc8201a8a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:101

Issues about this test specifically: #36564

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:359
Dec 15 15:56:21.315: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:312

Issues about this test specifically: #27673

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc8200f17c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:114
Tail (99 percentile) latency should be less than 50s
50, 90, 99 percentiles: 2.083836272s 9.064039335s 1m32.838379049s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:112

Issues about this test specifically: #30632
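
The percentile line above comes from sampling per-request endpoint-propagation latencies and reading off the 50th/90th/99th values; the p99 of 1m32.8s is what breaches the 50s tail limit. Below is a small nearest-rank percentile sketch — the samples are made up, and the real test collects many more measurements and may compute percentiles differently.

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile returns the nearest-rank p-th percentile of sorted samples.
    func percentile(sorted []time.Duration, p float64) time.Duration {
        if len(sorted) == 0 {
            return 0
        }
        idx := int(float64(len(sorted))*p/100.0+0.5) - 1
        if idx < 0 {
            idx = 0
        }
        if idx >= len(sorted) {
            idx = len(sorted) - 1
        }
        return sorted[idx]
    }

    func main() {
        // Hypothetical latency samples; the failing run above reported
        // p50=2.08s, p90=9.06s, p99=1m32.8s against a 50s tail limit.
        samples := []time.Duration{
            800 * time.Millisecond, 2 * time.Second, 3 * time.Second,
            9 * time.Second, 12 * time.Second, 93 * time.Second,
        }
        sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })

        p99 := percentile(samples, 99)
        fmt.Printf("p50=%v p90=%v p99=%v (limit 50s, exceeded: %v)\n",
            percentile(samples, 50), percentile(samples, 90), p99, p99 > 50*time.Second)
    }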

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Expected
    <time.Duration>: 63996360052
to be <
    <time.Duration>: 30000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:334

Issues about this test specifically: #36242
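
The raw assertion values in the two proxy-log failures above are nanosecond counts on time.Duration: 63996360052ns is roughly 1m4s (and 58253876431ns roughly 58.3s), each compared against the 30000000000ns (30s) ceiling in proxy.go. A trivial way to render them readably:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The assertion values above are raw nanosecond counts.
        got := time.Duration(63996360052)   // ≈ 1m4s, the measured proxy-log latency
        limit := time.Duration(30000000000) // the 30s ceiling asserted in proxy.go
        fmt.Printf("got %v, limit %v, over by %v\n", got, limit, got-limit)
    }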

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2566/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Dec 17 20:29:35.710: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1513

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:542
Dec 17 20:19:44.521: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:539

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50
Expected error:
    <*errors.errorString | 0xc820b2f320>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-12-17 20:18:51 -0800 PST FinishedAt:2016-12-17 20:19:21 -0800 PST ContainerID:docker://a4ef738a8d2b5f301717d6b91296e7580f9afffd5ac23a15244183ffcb4e8a44}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-12-17 20:18:51 -0800 PST FinishedAt:2016-12-17 20:19:21 -0800 PST ContainerID:docker://a4ef738a8d2b5f301717d6b91296e7580f9afffd5ac23a15244183ffcb4e8a44}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49

Issues about this test specifically: #26171 #28188
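
A hand-run equivalent of the failing wget-test pod (the target URL is an assumption; the suite's exact image and target may differ):

# run a short-lived pod that must reach an external address within 30s
kubectl run wget-test --image=busybox --restart=Never --rm -i -- \
  sh -c 'wget -T 30 -qO- http://google.com > /dev/null && echo ok'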

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/2768/
Multiple broken tests:

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:160
Timed out after 300.001s.
Expected
    <string>: content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    [... the same line repeated for the remainder of the 300s poll ...]
    
to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:159

Issues about this test specifically: #30352 #38166
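
The dump above shows the mounted file still serving value-1 for the whole 300s window. A hand-rolled version of the same check, with made-up object names, looks roughly like this:

kubectl create configmap update-demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: update-demo-cm
EOF
# push the new value; the kubelet should rewrite the mounted file well inside 300s
kubectl create configmap update-demo-cm --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
kubectl logs -f cm-watch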

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233
getting pod 
Expected error:
    <*url.Error | 0xc8208b8a80>: {
        Op: "Get",
        URL: "https://104.196.143.105/api/v1/namespaces/e2e-tests-container-probe-41ygv/pods/liveness-http",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhďi",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.196.143.105/api/v1/namespaces/e2e-tests-container-probe-41ygv/pods/liveness-http: dial tcp 104.196.143.105:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:350

Issues about this test specifically: #30342 #31350

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:175
getting pod 
Expected error:
    <*url.Error | 0xc820ce64b0>: {
        Op: "Get",
        URL: "https://104.196.143.105/api/v1/namespaces/e2e-tests-container-probe-h6ol7/pods/liveness-http",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhďi",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.196.143.105/api/v1/namespaces/e2e-tests-container-probe-h6ol7/pods/liveness-http: dial tcp 104.196.143.105:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:350

Issues about this test specifically: #38511

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:120
getting pod 
Expected error:
    <*url.Error | 0xc820f37830>: {
        Op: "Get",
        URL: "https://104.196.143.105/api/v1/namespaces/e2e-tests-container-probe-3k6aq/pods/liveness-exec",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhďi",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.196.143.105/api/v1/namespaces/e2e-tests-container-probe-3k6aq/pods/liveness-exec: dial tcp 104.196.143.105:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:350

Issues about this test specifically: #30264
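
All of the probe failures in this run are "connection refused" against the master at 104.196.143.105, so the tests themselves are unlikely to be at fault. A hedged triage sketch for runs like this (health endpoints assumed reachable from the test host):

MASTER=https://<master-ip>        # e.g. the address in the URLs above
curl -k "${MASTER}/healthz"       # is the apiserver answering at all?
kubectl get componentstatuses
kubectl get nodes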

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/3014/
Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:50
Expected error:
    <*errors.errorString | 0xc8209d51a0>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-12-23 16:06:57 -0800 PST FinishedAt:2016-12-23 16:07:27 -0800 PST ContainerID:docker://8cfe67e8b17fc90ba0c8930e6d15bb7076cbd2d1d562f00f91a14ee160ce3877}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-12-23 16:06:57 -0800 PST FinishedAt:2016-12-23 16:07:27 -0800 PST ContainerID:docker://8cfe67e8b17fc90ba0c8930e6d15bb7076cbd2d1d562f00f91a14ee160ce3877}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc8201016a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8201a6760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Dec 23 16:16:33.707: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1513

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:542
Dec 23 16:07:41.079: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:539

Issues about this test specifically: #28420 #36122
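
The "Missing KubeDNS" message means kubectl cluster-info did not list the DNS add-on, which usually means the kube-dns service in kube-system was missing or not carrying its add-on labels. A quick manual check (label selector assumed from the standard add-on manifests):

kubectl cluster-info | grep -i kubedns
kubectl get svc,endpoints --namespace=kube-system -l k8s-app=kube-dns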

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/3188/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:104
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32830

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
Expected error:
    <*errors.errorString | 0xc8200e97c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:108

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc820115a60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc8200d97c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #35283 #36867

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:120
starting pod liveness-exec in namespace e2e-tests-container-probe-05m7c
Expected error:
    <*errors.errorString | 0xc8201bc760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Issues about this test specifically: #30264

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
Dec 25 20:14:16.231: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2137

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Dec 25 20:19:18.280: Couldn't delete ns: "e2e-tests-kubectl-cudts": namespace e2e-tests-kubectl-cudts was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-kubectl-cudts was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc8201016a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/3602/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820562e10>: {
        Op: "Get",
        URL: "https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-emptydir-luafx/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc4ܱ",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-emptydir-luafx/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.220.177:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #37071

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820ba6990>: {
        Op: "Get",
        URL: "https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-kubectl-c3u5m/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc4ܱ",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-kubectl-c3u5m/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.220.177:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233
getting pod  in namespace e2e-tests-container-probe-c9up3
Expected error:
    <*url.Error | 0xc820caa360>: {
        Op: "Get",
        URL: "https://104.196.220.177/api/v1/namespaces/e2e-tests-container-probe-c9up3/pods/liveness-http",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc4ܱ",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.220.177/api/v1/namespaces/e2e-tests-container-probe-c9up3/pods/liveness-http: dial tcp 104.196.220.177:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:340

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820a09e00>: {
        Op: "Get",
        URL: "https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-nettest-dlwa4/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc4ܱ",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-nettest-dlwa4/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.220.177:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #32375

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Dec 30 20:52:30.594: Failed to create pod: Post https://104.196.220.177/api/v1/namespaces/e2e-tests-events-ccofk/pods: dial tcp 104.196.220.177:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:70

Issues about this test specifically: #28346

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820abc510>: {
        Op: "Get",
        URL: "https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-secrets-ehl0y/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc4ܱ",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-secrets-ehl0y/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.220.177:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*url.Error | 0xc820caa540>: {
        Op: "Get",
        URL: "https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-emptydir-5b11v/serviceaccounts?fieldSelector=metadata.name%3Ddefault",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc4ܱ",
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://104.196.220.177/api/v1/watch/namespaces/e2e-tests-emptydir-5b11v/serviceaccounts?fieldSelector=metadata.name%3Ddefault: dial tcp 104.196.220.177:443: i/o timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230

Issues about this test specifically: #36183

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/3998/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:235
Jan  4 12:22:39.919: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:199

Issues about this test specifically: #26955

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:302
Jan  4 12:21:27.652: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:244

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Jan  4 12:22:39.664: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2137

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc8200e57c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:175
starting pod liveness-http in namespace e2e-tests-container-probe-p15ex
Expected error:
    <*errors.errorString | 0xc8201aead0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Issues about this test specifically: #38511
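
Here liveness-http never reached Running, so the probe behavior itself was never exercised. For reference, the kind of pod this test starts looks roughly like the sketch below (image, port, and timings are assumptions, not the suite's exact manifest):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/liveness   # image assumed from the classic liveness example
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 1
EOF
kubectl get pod liveness-http -o wide    # the step that timed out here: reaching Running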

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/4161/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan  6 11:56:04.842: Couldn't delete ns: "e2e-tests-emptydir-k7rvo": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-k7rvo/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-k7rvo/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82067db30), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #31400

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan  6 11:56:04.842: Couldn't delete ns: "e2e-tests-emptydir-om6ms": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-om6ms/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-om6ms/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82097b130), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #37071

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan  6 11:56:04.842: Couldn't delete ns: "e2e-tests-emptydir-p5bn1": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-p5bn1/configmaps\"") has prevented the request from succeeding (get configmaps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-p5bn1/configmaps\\\"\") has prevented the request from succeeding (get configmaps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a62960), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #33987

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/4269/
Multiple broken tests:

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc82094ecd0>: {
        s: "expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-367c0f0a-d558-11e6-a214-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-367c0f0a-d558-11e6-a214-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #29052

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc820a65a70>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-6e74d747-d558-11e6-b7e1-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-6e74d747-d558-11e6-b7e1-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #37500

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:344
Expected error:
    <*errors.errorString | 0xc820831e00>: {
        s: "expected container secret-env-test success: gave up waiting for pod 'pod-secrets-365f3537-d558-11e6-9634-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected container secret-env-test success: gave up waiting for pod 'pod-secrets-365f3537-d558-11e6-9634-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #32025 #36823

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc8208bd240>: {
        s: "expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-36242509-d558-11e6-89b5-0242ac110005' to be 'success or failure' after 5m0s",
    }
    expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-36242509-d558-11e6-89b5-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #34827

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:235
Jan  7 20:14:11.851: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:199

Issues about this test specifically: #26955
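
Like the "gave up waiting for pod" failures above, "Pod did not start running" means port-forwarding itself was never reached; the test's server pod stalled before Running. A hedged smoke test against any pod that is already running (name is a placeholder):

kubectl get pods -o wide                        # do pods schedule and start at all?
kubectl port-forward <running-pod-name> 8080:80 &   # assumes the pod serves HTTP on 80
curl -s http://127.0.0.1:8080/ | head
kill %1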

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/4903/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:294
Expected error:
    <*errors.errorString | 0xc820581980>: {
        s: "expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-d05c77f9-dbbd-11e6-adec-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-d05c77f9-dbbd-11e6-adec-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc8201a2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:131
Expected error:
    <*errors.errorString | 0xc820914f40>: {
        s: "expected container dapi-container success: gave up waiting for pod 'var-expansion-d0f54f9d-dbbd-11e6-b909-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected container dapi-container success: gave up waiting for pod 'var-expansion-d0f54f9d-dbbd-11e6-b909-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #28503

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc820af6710>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-d0c72dc3-dbbd-11e6-a6c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-d0c72dc3-dbbd-11e6-a6c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2319

Issues about this test specifically: #34658

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc82019c880>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc8201a2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:235
Jan 15 23:36:37.189: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:199

Issues about this test specifically: #26955

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:745
Jan 15 23:36:39.642: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:202

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc820174aa0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5190/
Multiple broken tests:

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 19 23:30:09.031: Couldn't delete ns: "e2e-tests-container-probe-z32hc": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-z32hc/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-z32hc/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82080eb40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #28084

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 19 23:29:48.566: Couldn't delete ns: "e2e-tests-svc-latency-akqxr": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-svc-latency-akqxr/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-svc-latency-akqxr/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820830460), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30632

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5373/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:06.239: Couldn't delete ns: "e2e-tests-port-forwarding-qxiwo": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-port-forwarding-qxiwo/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-port-forwarding-qxiwo/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820acd950), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #27673

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:00.059: Couldn't delete ns: "e2e-tests-proxy-z7v9k": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-proxy-z7v9k/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-proxy-z7v9k/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208bec30), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #35422

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:15.470: Couldn't delete ns: "e2e-tests-nettest-6ymv1": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-nettest-6ymv1/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-nettest-6ymv1/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a343c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32830

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:09.885: Couldn't delete ns: "e2e-tests-container-probe-2kkvs": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-container-probe-2kkvs/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-container-probe-2kkvs/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b98aa0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #37914

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:04.774: Couldn't delete ns: "e2e-tests-kubectl-1owf7": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-1owf7/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-1owf7/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c11130), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8203b1c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-x9d7a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-x9d7a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-x9d7a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #31400

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820b9be00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-a1b7w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-a1b7w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-a1b7w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #37500

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820d09b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-1qsc5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-1qsc5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-1qsc5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:43:59.371: Couldn't delete ns: "e2e-tests-kubectl-cmcgq": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-cmcgq/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-kubectl-cmcgq/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8208f5180), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc820b3e800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-ttqw6/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-ttqw6/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-ttqw6/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50

Issues about this test specifically: #30851

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1212
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.29.17 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-sk103 run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc8207e04e0  Error from server: an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-kubectl-sk103/jobs\\\"\") has prevented the request from succeeding (post jobs.batch)\n [] <nil> 0xc8207e0da0 exit status 1 <nil> true [0xc820ca2030 0xc820ca2080 0xc820ca20a0] [0xc820ca2030 0xc820ca2080 0xc820ca20a0] [0xc820ca2040 0xc820ca2070 0xc820ca2090] [0xafacc0 0xafae20 0xafae20] 0xc820c3aea0}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-kubectl-sk103/jobs\\\"\") has prevented the request from succeeding (post jobs.batch)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.29.17 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-sk103 run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc8207e04e0  Error from server: an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-kubectl-sk103/jobs\"") has prevented the request from succeeding (post jobs.batch)
     [] <nil> 0xc8207e0da0 exit status 1 <nil> true [0xc820ca2030 0xc820ca2080 0xc820ca20a0] [0xc820ca2030 0xc820ca2080 0xc820ca20a0] [0xc820ca2040 0xc820ca2070 0xc820ca2090] [0xafacc0 0xafae20 0xafae20] 0xc820c3aea0}:
    Command stdout:
    
    stderr:
    Error from server: an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-kubectl-sk103/jobs\"") has prevented the request from succeeding (post jobs.batch)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:04.789: Couldn't delete ns: "e2e-tests-proxy-n759s": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-proxy-n759s/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-proxy-n759s/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8205d34a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:31.380: Couldn't delete ns: "e2e-tests-nettest-2s6y2": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-nettest-2s6y2/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-nettest-2s6y2/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820c460f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:13.320: Couldn't delete ns: "e2e-tests-container-probe-cm9eo": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-cm9eo\"") has prevented the request from succeeding (delete namespaces e2e-tests-container-probe-cm9eo) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-container-probe-cm9eo\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-container-probe-cm9eo)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820825400), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820948d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-n39pc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-n39pc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-n39pc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc820dc4480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-m0diw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-m0diw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-m0diw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223

Issues about this test specifically: #36242

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Jan 22 08:44:13.708: Couldn't delete ns: "e2e-tests-nettest-upgzf": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-nettest-upgzf/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-nettest-upgzf/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8202ae0a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Issues about this test specifically: #32375
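
The failures in this run are all HTTP 500s from the apiserver, surfaced by the client as *errors.StatusError values with Reason InternalError, as in the dumps above. For reference, a minimal sketch (using the current k8s.io/apimachinery packages rather than the 1.5-era "unversioned" types shown in the dumps; the request path here is illustrative) of how that error shape is constructed and checked:

    package main

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    func main() {
        // NewInternalError wraps an arbitrary error into a *StatusError whose
        // embedded Status has Reason=InternalError and Code=500, the same
        // shape as the error dumps quoted in this issue.
        err := apierrors.NewInternalError(fmt.Errorf(
            "Internal Server Error: %q", "/api/v1/namespaces/example/pods"))

        fmt.Println(apierrors.IsInternalError(err)) // true
        fmt.Println(err.Status().Code)              // 500
        fmt.Println(err.Error())
    }

Callers generally branch on the reason or status code, as above, rather than parsing the message string.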

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5563/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:104
Expected error:
    <*errors.errorString | 0xc8200eb7c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32830

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc8201c2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200f3730>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571
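
Most of the remaining flakes in this issue report only the bare message "timed out waiting for the condition". That string comes from the wait package's polling helpers (wait.ErrWaitTimeout, found in k8s.io/apimachinery/pkg/util/wait in current trees), which the e2e tests rely on while waiting for pods, DNS, and connectivity to become ready. A minimal sketch of the pattern, with illustrative interval/timeout values and a hypothetical podIsReady stand-in for the framework's real checks:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // podIsReady is a hypothetical stand-in for the framework's real readiness
    // checks (pod phase, DNS resolution, connectivity probes, and so on).
    func podIsReady() bool { return false }

    func main() {
        // PollImmediate retries the condition until it returns true or the
        // timeout elapses; on timeout it returns wait.ErrWaitTimeout, whose
        // message is exactly "timed out waiting for the condition".
        err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
            return podIsReady(), nil
        })
        if err != nil {
            fmt.Println(err) // timed out waiting for the condition
        }
    }

The error therefore says only that some condition never became true within its deadline; the interesting detail is the per-test context above it (which pod, namespace, or endpoint was being waited on).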

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5633/
Multiple broken tests:

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:364
Expected error:
    <*errors.errorString | 0xc8200ff6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #36649

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:319
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #35793

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:188
Expected error:
    <*errors.errorString | 0xc8200e57c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:101

Issues about this test specifically: #36564

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5647/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:118
Expected error:
    <*errors.errorString | 0xc8200e77c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:104
Expected error:
    <*errors.errorString | 0xc8201ba760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc820182b40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:111
Expected error:
    <*errors.errorString | 0xc820190b40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:461

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5788/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc420c90960>: {
        s: "expected pod \"pod-6981a1c3-e564-11e6-b9bf-0242ac110007\" success: gave up waiting for pod 'pod-6981a1c3-e564-11e6-b9bf-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-6981a1c3-e564-11e6-b9bf-0242ac110007" success: gave up waiting for pod 'pod-6981a1c3-e564-11e6-b9bf-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37071

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
wait for pod "pod-secrets-60c3284c-e564-11e6-b410-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42042f1c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203fd300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc420bdb7c0>: {
        s: "expected pod \"pod-configmaps-5f1a65bc-e564-11e6-b747-0242ac110007\" success: gave up waiting for pod 'pod-configmaps-5f1a65bc-e564-11e6-b747-0242ac110007' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-5f1a65bc-e564-11e6-b747-0242ac110007" success: gave up waiting for pod 'pod-configmaps-5f1a65bc-e564-11e6-b747-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27245

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
wait for pod "pod-host-path-test" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42036ed50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #32122 #38040

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc42038ec40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:64
wait for pod "pod-configmaps-53b80cc5-e564-11e6-8f55-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420350450>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #35790

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:150
wait for pod "pod-secrets-53301c7d-e564-11e6-ae8b-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42036c960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203749a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
wait for pod "downwardapi-volume-57de2ce2-e564-11e6-9136-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420349320>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #37423

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5940/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-xwfx1 -o template --template={{if (exists . \"status\" \"containerStatuses\")}}{{range .status.containerStatuses}}{{if eq .name \"update-demo\"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p9n65] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n [] <nil> 0xc4209d7830 exit status 1 <nil> <nil> true [0xc4202f0418 0xc4202f0470 0xc4202f0528] [0xc4202f0418 0xc4202f0470 0xc4202f0528] [0xc4202f0440 0xc4202f04d8] [0x9728b0 0x9728b0] 0xc4205db2c0 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config get pods update-demo-nautilus-xwfx1 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p9n65] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
     [] <nil> 0xc4209d7830 exit status 1 <nil> <nil> true [0xc4202f0418 0xc4202f0470 0xc4202f0528] [0xc4202f0418 0xc4202f0470 0xc4202f0528] [0xc4202f0440 0xc4202f04d8] [0x9728b0 0x9728b0] 0xc4205db2c0 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xxrtk] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n [] <nil> 0xc420ec54d0 exit status 1 <nil> <nil> true [0xc4200939f0 0xc420093a08 0xc420093a20] [0xc4200939f0 0xc420093a08 0xc420093a20] [0xc420093a00 0xc420093a18] [0x9728b0 0x9728b0] 0xc420eccb40 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xxrtk] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
     [] <nil> 0xc420ec54d0 exit status 1 <nil> <nil> true [0xc4200939f0 0xc420093a08 0xc420093a20] [0xc4200939f0 0xc420093a08 0xc420093a20] [0xc420093a00 0xc420093a18] [0x9728b0 0x9728b0] 0xc420eccb40 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:914
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config log redis-master-pnrz1 redis-master --namespace=e2e-tests-kubectl-103wh --tail=1] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n [] <nil> 0xc420dfbaa0 exit status 1 <nil> <nil> true [0xc42020aa48 0xc42020aa80 0xc42020aac0] [0xc42020aa48 0xc42020aa80 0xc42020aac0] [0xc42020aa68 0xc42020aab8] [0x9728b0 0x9728b0] 0xc420cebe00 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config log redis-master-pnrz1 redis-master --namespace=e2e-tests-kubectl-103wh --tail=1] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
     [] <nil> 0xc420dfbaa0 exit status 1 <nil> <nil> true [0xc42020aa48 0xc42020aa80 0xc42020aac0] [0xc42020aa48 0xc42020aa80 0xc42020aac0] [0xc42020aa68 0xc42020aab8] [0x9728b0 0x9728b0] 0xc420cebe00 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1139
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-xkt5k] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n [] <nil> 0xc420c90a50 exit status 1 <nil> <nil> true [0xc42032c318 0xc42032c330 0xc42032c348] [0xc42032c318 0xc42032c330 0xc42032c348] [0xc42032c328 0xc42032c340] [0x9728b0 0x9728b0] 0xc420c3ecc0 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.185.42.168 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-xkt5k] []  <nil>  Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
     [] <nil> 0xc420c90a50 exit status 1 <nil> <nil> true [0xc42032c318 0xc42032c330 0xc42032c348] [0xc42032c318 0xc42032c330 0xc42032c348] [0xc42032c328 0xc42032c340] [0x9728b0 0x9728b0] 0xc420c3ecc0 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #27532 #34567

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668
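
This run differs from the timeout flakes above: every kubectl invocation failed with "Unable to connect to the server: dial tcp 35.185.42.168:443: i/o timeout", i.e. the apiserver was unreachable from the test runner. A minimal sketch, using only the Go standard library (the framework's own wrapper, the exec.CodeExitError shown in the dumps, adds more bookkeeping), of how such a failure surfaces as a non-zero exit status with that message on stderr; the server address and kubeconfig path are illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl",
            "--server=https://203.0.113.10", // unreachable test address
            "--kubeconfig=/tmp/kubeconfig",
            "get", "pods")

        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr

        if err := cmd.Run(); err != nil {
            // When the apiserver cannot be reached, kubectl exits non-zero and
            // prints "Unable to connect to the server: ..." to stderr, which is
            // the text the failures above capture and re-quote.
            if exitErr, ok := err.(*exec.ExitError); ok {
                fmt.Println("exit code:", exitErr.ExitCode())
            }
            fmt.Println("stderr:", stderr.String())
        }
    }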

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5953/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Jan 30 17:26:00.759: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203d1630>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420979c50>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-30 17:27:55 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-30 17:28:26 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-30 17:27:55 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.48.0.45 StartTime:2017-01-30 17:27:55 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4209c0af0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://31e642132a43d03a0f1bcf79588f4b71fc44f0ec4e821a113c8a694f0a093c71}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-30 17:27:55 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-30 17:28:26 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-30 17:27:55 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.48.0.45 StartTime:2017-01-30 17:27:55 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4209c0af0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://31e642132a43d03a0f1bcf79588f4b71fc44f0ec4e821a113c8a694f0a093c71}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc42047bcb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 30 17:37:02.699: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/5967/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203ce990>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.203.30 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-jwd3n] []  0xc420afb140 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc420b75e00 exit status 1 <nil> <nil> true [0xc42036da40 0xc42036da68 0xc42036da78] [0xc42036da40 0xc42036da68 0xc42036da78] [0xc42036da48 0xc42036da60 0xc42036da70] [0x972560 0x972660 0x972660] 0xc420b77020 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.196.203.30 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-jwd3n] []  0xc420afb140 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc420b75e00 exit status 1 <nil> <nil> true [0xc42036da40 0xc42036da68 0xc42036da78] [0xc42036da40 0xc42036da68 0xc42036da78] [0xc42036da48 0xc42036da60 0xc42036da70] [0x972560 0x972660 0x972660] 0xc420b77020 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2067

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Jan 30 23:04:16.927: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:148
starting pod liveness-exec in namespace e2e-tests-container-probe-kn745
Expected error:
    <*errors.errorString | 0xc42043cad0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #37914

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/6393/
Multiple broken tests:

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc4204859b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
wait for pod "pod-6f13a89d-ece2-11e6-b6b7-0242ac11000b" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203ec060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #37500

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Feb  6 19:13:35.011: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Feb  6 19:16:53.468: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
wait for pod "pod-secrets-5403f8bd-ece2-11e6-a070-0242ac11000b" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203ac7f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #37529

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc4204301a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203ab4e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc420450260>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420413660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

@calebamiles modified the milestone: v1.6 Mar 3, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod-smoke/9555/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Mar  7 05:11:16.278: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Mar  7 05:10:32.156: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Mar  7 05:10:29.625: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

@grodrigues3 added the sig/cli and sig/network labels Mar 11, 2017
@ethernetdan
Contributor

Seems to have been an ephemeral issue
