ci-kubernetes-e2e-kops-aws: broken test run #43206

Closed
k8s-github-robot opened this issue Mar 16, 2017 · 15 comments

Labels
kind/flake Categorizes issue or PR as related to a flaky test. milestone/removed priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5460/
Multiple broken tests:

Failed: [k8s.io] Projected should provide container's memory limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:935
wait for pod "downwardapi-volume-223fae58-0a18-11e7-8541-0242ac110004" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4204589e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:208
Expected error:
    <*errors.errorString | 0xc4203f2a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:200

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
Mar 16 00:15:00.677: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-c2r5t to expose endpoints map[pod1:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc42047a9a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:508
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg191257083 --namespace=e2e-tests-kubectl-mljcm run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] []  0xc4212d3500  error: watch closed before Until timeout\n [] <nil> 0xc420d9ab70 exit status 1 <nil> <nil> true [0xc42003dd40 0xc42003dd68 0xc42003dd78] [0xc42003dd40 0xc42003dd68 0xc42003dd78] [0xc42003dd48 0xc42003dd60 0xc42003dd70] [0xc34f00 0xc35000 0xc35000] 0xc420b290e0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: watch closed before Until timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg191257083 --namespace=e2e-tests-kubectl-mljcm run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] []  0xc4212d3500  error: watch closed before Until timeout
     [] <nil> 0xc420d9ab70 exit status 1 <nil> <nil> true [0xc42003dd40 0xc42003dd68 0xc42003dd78] [0xc42003dd40 0xc42003dd68 0xc42003dd78] [0xc42003dd48 0xc42003dd60 0xc42003dd70] [0xc34f00 0xc35000 0xc35000] 0xc420b290e0 <nil>}:
    Command stdout:
    
    stderr:
    error: watch closed before Until timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:494

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:190
wait for pod "downwardapi-volume-32f97f2b-0a18-11e7-be91-0242ac110004" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203d9f70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:373
Expected error:
    <*errors.errorString | 0xc421044100>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1673

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc420416ec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420376260>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:156
Timed out after 300.001s.
Expected
    <string>: content of file "/etc/configmap-volume/data-1": value-1
    (identical line repeated for the full 300s poll)

to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:155

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc42043eb10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420445910>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 16 00:25:21.475: Couldn't delete ns: "e2e-tests-disruption-8tmbv": namespace e2e-tests-disruption-8tmbv was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-8tmbv was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:411
Expected error:
    <*errors.errorString | 0xc420444520>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:247

Issues about this test specifically: #26168 #27450 #43094

Previous issues for this suite: #37891 #42334

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Mar 16, 2017
@calebamiles calebamiles modified the milestone: v1.6 Mar 16, 2017
@marun
Contributor

marun commented Mar 17, 2017

@justinsb Should this be blocking 1.6?

@fejta
Contributor

fejta commented Mar 19, 2017

/unassign
/assign @justinsb @zmerlynn

@k8s-ci-robot k8s-ci-robot assigned justinsb and zmerlynn and unassigned fejta Mar 19, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5621/
Multiple broken tests:

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:85
Expected error:
    <*errors.errorString | 0xc4203d7f30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:84

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203d8300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:75
Waiting for pods in namespace "e2e-tests-disruption-56616" to be ready
Expected error:
    <*errors.errorString | 0xc42035d140>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc42179ae00>: {
        s: "Only 43 pods started out of 50",
    }
    Only 43 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Projected should provide container's memory limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:935
wait for pod "downwardapi-volume-89bb3515-0d73-11e7-b490-0242ac110008" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203cebd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4204509c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:80
Expected error:
    <*errors.errorString | 0xc420aba670>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:430

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203d8300>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203b3c70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:108
Expected error:
    <*errors.errorString | 0xc4203fecb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:95

Issues about this test specifically: #28003

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:508
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg224149404 --namespace=e2e-tests-kubectl-5vhq4 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --rm failure-3 -- /bin/sh -c cat && exit 42] []  0xc420a6e4c0  error: watch closed before Until timeout\n [] <nil> 0xc421234e70 exit status 1 <nil> <nil> true [0xc420345b60 0xc420345ba8 0xc420345bc0] [0xc420345b60 0xc420345ba8 0xc420345bc0] [0xc420345b68 0xc420345b90 0xc420345bb0] [0xc37750 0xc37850 0xc37850] 0xc421244d80 <nil>}:\nCommand stdout:\n\nstderr:\nerror: watch closed before Until timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg224149404 --namespace=e2e-tests-kubectl-5vhq4 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --rm failure-3 -- /bin/sh -c cat && exit 42] []  0xc420a6e4c0  error: watch closed before Until timeout
     [] <nil> 0xc421234e70 exit status 1 <nil> <nil> true [0xc420345b60 0xc420345ba8 0xc420345bc0] [0xc420345b60 0xc420345ba8 0xc420345bc0] [0xc420345b68 0xc420345b90 0xc420345bb0] [0xc37750 0xc37850 0xc37850] 0xc421244d80 <nil>}:
    Command stdout:
    
    stderr:
    error: watch closed before Until timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:500

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:401
wait for pod "pod-projected-configmaps-84c30ba8-0d73-11e7-8636-0242ac110008" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203557e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203b3220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 20 06:52:44.683: Couldn't delete ns: "e2e-tests-disruption-2mcc3": namespace e2e-tests-disruption-2mcc3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-2mcc3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32668 #35405

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5637/
Multiple broken tests:

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:194
waiting for tester pod to start
Expected error:
    <*errors.errorString | 0xc420380c80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:115

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203adf10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420352720>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:513
Mar 20 18:11:44.637: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:385

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1154
Expected error:
    <*errors.errorString | 0xc420453d40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3937

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:512
Expected error:
    <*errors.errorString | 0xc420441ef0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:276

Issues about this test specifically: #32584

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203d8130>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203b03d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:266
Expected error:
    <*errors.errorString | 0xc420353560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3937

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc4203fefe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #36288 #36913

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5668/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc420e89bc0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:16, ReadyReplicas:18, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63625732243, nsec:0, loc:(*time.Location)(0x4992c40)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625732243, nsec:0, loc:(*time.Location)(0x4992c40)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:16, ReadyReplicas:18, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63625732243, nsec:0, loc:(*time.Location)(0x4992c40)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625732243, nsec:0, loc:(*time.Location)(0x4992c40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1007

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
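The deployment failures above dump a `v1beta1.DeploymentStatus` mid-rollout (`Replicas:23, UpdatedReplicas:16, ..., UnavailableReplicas:5`). The e2e test polls that status until the rollout is complete; a sketch of the completeness condition it is effectively waiting on (the struct and the desired count of 20 here are simplified illustrations, not the real API type or test values):

```go
package main

import "fmt"

// deploymentStatus is a simplified stand-in for the v1beta1.DeploymentStatus
// fields visible in the failure dumps above.
type deploymentStatus struct {
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	AvailableReplicas   int32
	UnavailableReplicas int32
}

// complete reports whether a rollout to the desired replica count has
// finished: every replica updated and available, none unavailable. This is
// the kind of condition the e2e test polls until it times out.
func complete(s deploymentStatus, desired int32) bool {
	return s.UpdatedReplicas == desired &&
		s.AvailableReplicas == desired &&
		s.UnavailableReplicas == 0
}

func main() {
	// Values from the build 5668 failure: the rollout is stuck mid-way,
	// so the condition stays false and the wait eventually times out.
	s := deploymentStatus{
		Replicas:            23,
		UpdatedReplicas:     16,
		ReadyReplicas:       18,
		AvailableReplicas:   18,
		UnavailableReplicas: 5,
	}
	fmt.Println(complete(s, 20)) // prints "false"
}
```

With 5 replicas still unavailable, the status can never match the expectation, so the test reports the status dump verbatim when the wait gives up.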

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:65
Expected error:
    <*errors.errorString | 0xc4211c94a0>: {
        s: "expected pod \"client-containers-04d8e29a-0e85-11e7-a7a4-0242ac11000b\" success: <nil>",
    }
    expected pod "client-containers-04d8e29a-0e85-11e7-a7a4-0242ac11000b" success: <nil>
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2197

Issues about this test specifically: #29467

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:499
Mar 21 15:28:21.337: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:270

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:243
Expected error:
    <*errors.errorString | 0xc4212b76a0>: {
        s: "expected pod \"\" success: <nil>",
    }
    expected pod "" success: <nil>
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2197

Issues about this test specifically: #37526

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:283
Mar 21 15:32:50.429: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:303

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:502
Mar 21 15:27:57.672: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:315

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:75
Waiting for pods in namespace "e2e-tests-disruption-8xzl1" to be ready
Expected error:
    <*errors.errorString | 0xc4203bc510>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:331
Mar 21 15:34:26.482: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2015

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:266
Expected error:
    <*errors.errorString | 0xc4203fe700>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3937

Issues about this test specifically: #31085 #34207 #37097

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5719/
Multiple broken tests:

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc4214aa820>: {
        s: "Only 46 pods started out of 50",
    }
    Only 46 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420417dc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc42045b030>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63625834460, nsec:0, loc:(*time.Location)(0x4994c40)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625834460, nsec:0, loc:(*time.Location)(0x4994c40)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63625834460, nsec:0, loc:(*time.Location)(0x4994c40)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625834460, nsec:0, loc:(*time.Location)(0x4994c40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:322

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203d37d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:56
wait for pod "pod-projected-secrets-6c86351f-0f73-11e7-8d7d-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42041b0e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:274
Expected error:
    <*errors.errorString | 0xc420445d40>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:150

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
wait for pod "downwardapi-volume-028d5ef1-0f74-11e7-8a5b-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203d86e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:41
Expected error:
    <*errors.errorString | 0xc4203d81d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:464
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg301993520 --namespace=e2e-tests-kubectl-kgw16 exec nginx echo running in container] [KOPS_LATEST=latest-ci-updown-green.txt ZONE=us-west-1c JENKINS_AWS_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/9ce8dc91-1304-4708-bba2-c46c1cec3577/kube_aws_rsa.pub.txt BUILD_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-kops-aws/5719/ GOLANG_VERSION=1.6.3 HOSTNAME=2aac0bfc84ab ROOT_BUILD_CAUSE_TIMERTRIGGER=true CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true TERM=xterm SHELL=/bin/bash HUDSON_SERVER_COOKIE=02143c9ae5889f5c KUBEKINS_TIMEOUT=120m KUBERNETES_RELEASE=v1.7.0-alpha.0.1462+c415325cedf59a KOPS_DEPLOY_LATEST_KUBE=y KUBE_GCE_INSTANCE_PREFIX=jenkins-e2e JENKINS_AWS_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/69136105-3354-49cb-a7b7-b554ae86c99b/kube_aws_rsa.txt SSH_CLIENT=10.240.0.37 33976 22 LOG_DUMP_SAVE_SERVICES=protokube KUBE_CONFIG_FILE=config-test.sh BUILD_TAG=jenkins-ci-kubernetes-e2e-kops-aws-5719 GOOGLE_APPLICATION_CREDENTIALS=/service-account.json LOG_DUMP_SAVE_LOGS=cloud-init-output E2E_UP=true LOG_DUMP_SSH_KEY=/workspace/.ssh/kube_aws_rsa KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.6.0-alpha.0+a300b5f ROOT_BUILD_CAUSE=TIMERTRIGGER CLOUDSDK_EXPERIMENTAL_FAST_COMPONENT_UPDATE=false KOPS_RUN_OBSOLETE_VERSION=true JOB_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-kops-aws/ WORKSPACE=/workspace JENKINS_AWS_CREDENTIALS_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/ddc2a600-beac-4a2c-8e6a-3cd40ae21e10/KubernetesPostsubmitTests.txt KUBERNETES_RELEASE_URL=https://storage.googleapis.com/kubernetes-release-dev/ci USER=jenkins KUBE_GCE_NETWORK=jenkins-e2e CLOUDSDK_CONFIG=/workspace/.config/gcloud GINKGO_TOLERATE_FLAKES=y 
KUBECONFIG=/tmp/kops-kubecfg301993520 KUBERNETES_CONFORMANCE_PROVIDER=aws E2E_REPORT_DIR=/workspace/_artifacts INSTANCE_PREFIX=jenkins-e2e JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/e9306ee4-559f-4f7c-8040-4a78a5b85151/google_compute_engine.txt AWS_PROFILE=default GINKGO_TEST_ARGS=--ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]|\\[HPA\\]|Dashboard|Services.*functioning.*NodePort KUBE_RUNTIME_CONFIG=batch/v2alpha1=true CLOUDSDK_CORE_DISABLE_PROMPTS=1 NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat JENKINS_HOME=/var/lib/jenkins MAIL=/var/mail/jenkins PATH=/workspace/kubernetes/platforms/linux/amd64:/workspace/kubernetes/platforms/linux/amd64://google-cloud-sdk/bin:/google-cloud-sdk/bin:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LOG_DUMP_SSH_USER=admin E2E_OPT=--deployment kops --kops /workspace/kops --kops-cluster e2e-kops-aws.test-aws.k8s.io --kops-zones us-west-1c --kops-state s3://k8s-kops-jenkins/ --kops-nodes=4 --kops-ssh-key=/workspace/.ssh/kube_aws_rsa --kops-kubernetes-version https://storage.googleapis.com/kubernetes-release-dev/ci/v1.7.0-alpha.0.1462+c415325cedf59a --kops-admin-access 104.154.241.197/32 PWD=/workspace/kubernetes HUDSON_URL=http://goto.google.com/k8s-test/ AWS_DEFAULT_PROFILE=default LANG=en_US.UTF-8 E2E_TEST=true JOB_NAME=ci-kubernetes-e2e-kops-aws KUBECTL=./cluster/kubectl.sh --match-server-version BUILD_DISPLAY_NAME=#5719 XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt BUILD_ID=5719 JENKINS_URL=http://goto.google.com/k8s-test/ BUILD_CAUSE=TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER KUBERNETES_PROVIDER=aws GOLANG_DOWNLOAD_SHA256=cdde5e08530c0579255d6153b08fdb3b8e47caabbe717bc7bcd7561275a87aeb KUBE_SSH_USER=admin JOB_BASE_NAME=ci-kubernetes-e2e-kops-aws KOPS_STATE_STORE=s3://k8s-kops-jenkins/ SHLVL=5 HOME=/workspace CLUSTER_API_VERSION=1.7.0-alpha.0.1462+c415325cedf59a 
BOOTSTRAP_MIGRATION=yes no_proxy=127.0.0.1,localhost JENKINS_SERVER_COOKIE=02143c9ae5889f5c EXECUTOR_NUMBER=1 PRIORITY_PATH=/workspace/kubernetes/platforms/linux/amd64 KUBE_GCE_ZONE=us-central1-f GIT_TRACE=1 JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/ec34a7d3-ded9-4374-a848-8490953e5931/google_compute_engine.pub.txt NODE_LABELS=agent-light-21 e2e node LOGNAME=jenkins GINKGO_PARALLEL=y SSH_CONNECTION=10.240.0.37 33976 10.240.0.19 22 HUDSON_HOME=/var/lib/jenkins NODE_NAME=agent-light-21 BUILD_CAUSE_TIMERTRIGGER=true GOPATH=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws/go AWS_SSH_KEY=/workspace/.ssh/kube_aws_rsa E2E_PUBLISH_PATH= BUILD_NUMBER=5719 HUDSON_COOKIE=257fa1d3-4b96-4887-aa9a-06c98bcfb9e9 FAIL_ON_GCP_RESOURCE_LEAK=true KUBERNETES_CONFORMANCE_TEST=yes E2E_DOWN=true KOPS_PUBLISH_GREEN_PATH=gs://kops-ci/bin/latest-ci-green.txt GOLANG_DOWNLOAD_URL=https://golang.org/dl/go1.6.3.linux-amd64.tar.gz KOPS_REGIONS=us-west-1 BASH_FUNC_log_dump_custom_get_instances%%=() {  local -r role=$1;\n local kops_regions;\n IFS=', ' read -r -a kops_regions <<< \"${KOPS_REGIONS:-us-west-2}\";\n for region in \"${kops_regions[@]}\";\n do\n aws ec2 describe-instances --region \"${region}\" --filter \"Name=tag:KubernetesCluster,Values=$(kubectl config current-context)\" \"Name=tag:k8s.io/role/${role},Values=1\" \"Name=instance-state-name,Values=running\" --query \"Reservations[].Instances[].PublicDnsName\" --output text;\n done\n} _=/workspace/kubernetes/platforms/linux/amd64/ginkgo https_proxy=http://127.0.0.1:54075]  <nil>  Error from server: \n [] <nil> 0xc4207c5a70 exit status 1 <nil> <nil> true [0xc4203ee658 0xc4203ee670 0xc4203ee688] [0xc4203ee658 0xc4203ee670 0xc4203ee688] [0xc4203ee668 0xc4203ee680] [0xc3cda0 0xc3cda0] 0xc420f704e0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: \n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg301993520 --namespace=e2e-tests-kubectl-kgw16 exec nginx echo running in container] [KOPS_LATEST=latest-ci-updown-green.txt ZONE=us-west-1c JENKINS_AWS_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/9ce8dc91-1304-4708-bba2-c46c1cec3577/kube_aws_rsa.pub.txt BUILD_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-kops-aws/5719/ GOLANG_VERSION=1.6.3 HOSTNAME=2aac0bfc84ab ROOT_BUILD_CAUSE_TIMERTRIGGER=true CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true TERM=xterm SHELL=/bin/bash HUDSON_SERVER_COOKIE=02143c9ae5889f5c KUBEKINS_TIMEOUT=120m KUBERNETES_RELEASE=v1.7.0-alpha.0.1462+c415325cedf59a KOPS_DEPLOY_LATEST_KUBE=y KUBE_GCE_INSTANCE_PREFIX=jenkins-e2e JENKINS_AWS_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/69136105-3354-49cb-a7b7-b554ae86c99b/kube_aws_rsa.txt SSH_CLIENT=10.240.0.37 33976 22 LOG_DUMP_SAVE_SERVICES=protokube KUBE_CONFIG_FILE=config-test.sh BUILD_TAG=jenkins-ci-kubernetes-e2e-kops-aws-5719 GOOGLE_APPLICATION_CREDENTIALS=/service-account.json LOG_DUMP_SAVE_LOGS=cloud-init-output E2E_UP=true LOG_DUMP_SSH_KEY=/workspace/.ssh/kube_aws_rsa KOPS_BASE_URL=https://storage.googleapis.com/kops-ci/bin/1.6.0-alpha.0+a300b5f ROOT_BUILD_CAUSE=TIMERTRIGGER CLOUDSDK_EXPERIMENTAL_FAST_COMPONENT_UPDATE=false KOPS_RUN_OBSOLETE_VERSION=true JOB_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-kops-aws/ WORKSPACE=/workspace JENKINS_AWS_CREDENTIALS_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/ddc2a600-beac-4a2c-8e6a-3cd40ae21e10/KubernetesPostsubmitTests.txt KUBERNETES_RELEASE_URL=https://storage.googleapis.com/kubernetes-release-dev/ci USER=jenkins KUBE_GCE_NETWORK=jenkins-e2e CLOUDSDK_CONFIG=/workspace/.config/gcloud GINKGO_TOLERATE_FLAKES=y 
KUBECONFIG=/tmp/kops-kubecfg301993520 KUBERNETES_CONFORMANCE_PROVIDER=aws E2E_REPORT_DIR=/workspace/_artifacts INSTANCE_PREFIX=jenkins-e2e JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/e9306ee4-559f-4f7c-8040-4a78a5b85151/google_compute_engine.txt AWS_PROFILE=default GINKGO_TEST_ARGS=--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort KUBE_RUNTIME_CONFIG=batch/v2alpha1=true CLOUDSDK_CORE_DISABLE_PROMPTS=1 NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat JENKINS_HOME=/var/lib/jenkins MAIL=/var/mail/jenkins PATH=/workspace/kubernetes/platforms/linux/amd64:/workspace/kubernetes/platforms/linux/amd64://google-cloud-sdk/bin:/google-cloud-sdk/bin:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LOG_DUMP_SSH_USER=admin E2E_OPT=--deployment kops --kops /workspace/kops --kops-cluster e2e-kops-aws.test-aws.k8s.io --kops-zones us-west-1c --kops-state s3://k8s-kops-jenkins/ --kops-nodes=4 --kops-ssh-key=/workspace/.ssh/kube_aws_rsa --kops-kubernetes-version https://storage.googleapis.com/kubernetes-release-dev/ci/v1.7.0-alpha.0.1462+c415325cedf59a --kops-admin-access 104.154.241.197/32 PWD=/workspace/kubernetes HUDSON_URL=http://goto.google.com/k8s-test/ AWS_DEFAULT_PROFILE=default LANG=en_US.UTF-8 E2E_TEST=true JOB_NAME=ci-kubernetes-e2e-kops-aws KUBECTL=./cluster/kubectl.sh --match-server-version BUILD_DISPLAY_NAME=#5719 XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt BUILD_ID=5719 JENKINS_URL=http://goto.google.com/k8s-test/ BUILD_CAUSE=TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER KUBERNETES_PROVIDER=aws GOLANG_DOWNLOAD_SHA256=cdde5e08530c0579255d6153b08fdb3b8e47caabbe717bc7bcd7561275a87aeb KUBE_SSH_USER=admin JOB_BASE_NAME=ci-kubernetes-e2e-kops-aws KOPS_STATE_STORE=s3://k8s-kops-jenkins/ SHLVL=5 HOME=/workspace CLUSTER_API_VERSION=1.7.0-alpha.0.1462+c415325cedf59a BOOTSTRAP_MIGRATION=yes 
no_proxy=127.0.0.1,localhost JENKINS_SERVER_COOKIE=02143c9ae5889f5c EXECUTOR_NUMBER=1 PRIORITY_PATH=/workspace/kubernetes/platforms/linux/amd64 KUBE_GCE_ZONE=us-central1-f GIT_TRACE=1 JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws@tmp/secretFiles/ec34a7d3-ded9-4374-a848-8490953e5931/google_compute_engine.pub.txt NODE_LABELS=agent-light-21 e2e node LOGNAME=jenkins GINKGO_PARALLEL=y SSH_CONNECTION=10.240.0.37 33976 10.240.0.19 22 HUDSON_HOME=/var/lib/jenkins NODE_NAME=agent-light-21 BUILD_CAUSE_TIMERTRIGGER=true GOPATH=/var/lib/jenkins/workspace/ci-kubernetes-e2e-kops-aws/go AWS_SSH_KEY=/workspace/.ssh/kube_aws_rsa E2E_PUBLISH_PATH= BUILD_NUMBER=5719 HUDSON_COOKIE=257fa1d3-4b96-4887-aa9a-06c98bcfb9e9 FAIL_ON_GCP_RESOURCE_LEAK=true KUBERNETES_CONFORMANCE_TEST=yes E2E_DOWN=true KOPS_PUBLISH_GREEN_PATH=gs://kops-ci/bin/latest-ci-green.txt GOLANG_DOWNLOAD_URL=https://golang.org/dl/go1.6.3.linux-amd64.tar.gz KOPS_REGIONS=us-west-1 BASH_FUNC_log_dump_custom_get_instances%%=() {  local -r role=$1;
     local kops_regions;
     IFS=', ' read -r -a kops_regions <<< "${KOPS_REGIONS:-us-west-2}";
     for region in "${kops_regions[@]}";
     do
     aws ec2 describe-instances --region "${region}" --filter "Name=tag:KubernetesCluster,Values=$(kubectl config current-context)" "Name=tag:k8s.io/role/${role},Values=1" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].PublicDnsName" --output text;
     done
    } _=/workspace/kubernetes/platforms/linux/amd64/ginkgo https_proxy=http://127.0.0.1:54075]  <nil>  Error from server: 
     [] <nil> 0xc4207c5a70 exit status 1 <nil> <nil> true [0xc4203ee658 0xc4203ee670 0xc4203ee688] [0xc4203ee658 0xc4203ee670 0xc4203ee688] [0xc4203ee668 0xc4203ee680] [0xc3cda0 0xc3cda0] 0xc420f704e0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: 
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2097

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:508
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg301993520 --namespace=e2e-tests-kubectl-1dnv0 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil>  error: watch closed before Until timeout\n [] <nil> 0xc420664210 exit status 1 <nil> <nil> true [0xc420d1a338 0xc420d1a350 0xc420d1a368] [0xc420d1a338 0xc420d1a350 0xc420d1a368] [0xc420d1a348 0xc420d1a360] [0xc3cda0 0xc3cda0] 0xc420d9e2a0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: watch closed before Until timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg301993520 --namespace=e2e-tests-kubectl-1dnv0 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil>  error: watch closed before Until timeout
     [] <nil> 0xc420664210 exit status 1 <nil> <nil> true [0xc420d1a338 0xc420d1a350 0xc420d1a368] [0xc420d1a338 0xc420d1a350 0xc420d1a368] [0xc420d1a348 0xc420d1a360] [0xc3cda0 0xc3cda0] 0xc420d9e2a0 <nil>}:
    Command stdout:
    
    stderr:
    error: watch closed before Until timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:482

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc42034fc70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:195
Expected error:
    <*errors.errorString | 0xc420401b30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_common.go:268

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 22 19:56:28.384: Couldn't delete ns: "e2e-tests-disruption-djlrb": namespace e2e-tests-disruption-djlrb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-djlrb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:49
wait for pod "downwardapi-volume-ec596f3b-0f75-11e7-a245-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420401b30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420409fa0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:384
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:383

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203f3e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/5961/
Multiple broken tests:

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc4217210d0>: {
        s: "Only 41 pods started out of 50",
    }
    Only 41 pods started out of 50
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:163
wait for pod "downwardapi-volume-02be0730-1369-11e7-92e4-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203f1da0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc42043ee20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:55
wait for pod "client-containers-6f000d78-1368-11e7-9341-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420440540>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Issues about this test specifically: #29994

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:499
Mar 27 20:51:22.103: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:270

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42043ee20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32375

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:331
Expected
    <*errors.errorString | 0xc420417da0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:313

Issues about this test specifically: #31408

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:80
Expected error:
    <*errors.errorString | 0xc421028150>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:430

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:510
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg430553858 --namespace=e2e-tests-kubectl-jg6mk run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] []  0xc4211b23e0  error: watch closed before Until timeout\n [] <nil> 0xc421283350 exit status 1 <nil> <nil> true [0xc421138398 0xc4211383c0 0xc4211383d0] [0xc421138398 0xc4211383c0 0xc4211383d0] [0xc4211383a0 0xc4211383b8 0xc4211383c8] [0xc2aa00 0xc2ab00 0xc2ab00] 0xc4215a2420 <nil>}:\nCommand stdout:\n\nstderr:\nerror: watch closed before Until timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg430553858 --namespace=e2e-tests-kubectl-jg6mk run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] []  0xc4211b23e0  error: watch closed before Until timeout
     [] <nil> 0xc421283350 exit status 1 <nil> <nil> true [0xc421138398 0xc4211383c0 0xc4211383d0] [0xc421138398 0xc4211383c0 0xc4211383d0] [0xc4211383a0 0xc4211383b8 0xc4211383c8] [0xc2aa00 0xc2ab00 0xc2ab00] 0xc4215a2420 <nil>}:
    Command stdout:
    
    stderr:
    error: watch closed before Until timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:496

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203f1270>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:243
wait for pod "pod-service-account-d8cc2b2e-1368-11e7-9a0d-0242ac110002-ntdhp" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203acd20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Issues about this test specifically: #37526

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc42101f310>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:954

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:52
wait for pod "pod-projected-secrets-70e94e0d-1368-11e7-8809-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420417670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:44
wait for pod "client-containers-584f7802-1369-11e7-b466-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4204599f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Issues about this test specifically: #36706

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:39
wait for pod "pod-configmaps-8cb154c9-1368-11e7-a63b-0242ac110002" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203ae520>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/6017/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Expected error:
    <*errors.errorString | 0xc42037e170>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #43335

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc420d92000>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63626371056, nsec:0, loc:(*time.Location)(0x49b0de0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626371056, nsec:0, loc:(*time.Location)(0x49b0de0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63626371056, nsec:0, loc:(*time.Location)(0x49b0de0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626371056, nsec:0, loc:(*time.Location)(0x49b0de0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:322

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc42041b0a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:375
Expected error:
    <*errors.errorString | 0xc4213c6070>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1718

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:177
starting pod liveness-http in namespace e2e-tests-container-probe-1nwzk
Expected error:
    <*errors.errorString | 0xc4203d8360>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:365

Issues about this test specifically: #38511

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 29 01:02:28.780: Couldn't delete ns: "e2e-tests-kubectl-vrz2x": namespace e2e-tests-kubectl-vrz2x was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-vrz2x was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #27524 #32057

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:506
Mar 29 01:24:55.766: Pod test-pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:461

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:510
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg433288097 --namespace=e2e-tests-kubectl-zrmkk exec nginx -- /bin/sh -c exit 0] []  <nil>  Error from server: \n [] <nil> 0xc420864450 exit status 1 <nil> <nil> true [0xc421138a58 0xc421138a70 0xc421138a88] [0xc421138a58 0xc421138a70 0xc421138a88] [0xc421138a68 0xc421138a80] [0x8d0700 0x8d0700] 0xc421406780 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: \n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg433288097 --namespace=e2e-tests-kubectl-zrmkk exec nginx -- /bin/sh -c exit 0] []  <nil>  Error from server: 
     [] <nil> 0xc420864450 exit status 1 <nil> <nil> true [0xc421138a58 0xc421138a70 0xc421138a88] [0xc421138a58 0xc421138a70 0xc421138a88] [0xc421138a68 0xc421138a80] [0x8d0700 0x8d0700] 0xc421406780 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: 
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:474

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:564
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg433288097 --namespace=e2e-tests-kubectl-m3zzl run run-test-2 --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed'] []  0xc42125b960  error: watch closed before Until timeout\n [] <nil> 0xc4215fe540 exit status 1 <nil> <nil> true [0xc4213f0258 0xc4213f0280 0xc4213f0290] [0xc4213f0258 0xc4213f0280 0xc4213f0290] [0xc4213f0260 0xc4213f0278 0xc4213f0288] [0x8d0600 0x8d0700 0x8d0700] 0xc4213dd680 <nil>}:\nCommand stdout:\n\nstderr:\nerror: watch closed before Until timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg433288097 --namespace=e2e-tests-kubectl-m3zzl run run-test-2 --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed'] []  0xc42125b960  error: watch closed before Until timeout
     [] <nil> 0xc4215fe540 exit status 1 <nil> <nil> true [0xc4213f0258 0xc4213f0280 0xc4213f0290] [0xc4213f0258 0xc4213f0280 0xc4213f0290] [0xc4213f0260 0xc4213f0278 0xc4213f0288] [0x8d0600 0x8d0700 0x8d0700] 0xc4213dd680 <nil>}:
    Command stdout:
    
    stderr:
    error: watch closed before Until timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2083

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:122
Mar 29 00:59:10.005: pod e2e-tests-container-probe-xt0z0/liveness-exec - expected number of restarts: 1, found restarts: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:404

Issues about this test specifically: #30264

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/6257/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:129
Expected error:
    <*errors.errorString | 0xc42119de60>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg234504859 exec --namespace=e2e-tests-statefulset-slcsf ss-2 -- /bin/sh -c ls -idlh /data] []  <nil>  Unable to connect to the server: dial tcp 54.219.183.232:443: i/o timeout\n [] <nil> 0xc420d9d500 exit status 1 <nil> <nil> true [0xc420eb0050 0xc420eb0068 0xc420eb0080] [0xc420eb0050 0xc420eb0068 0xc420eb0080] [0xc420eb0060 0xc420eb0078] [0x8da890 0x8da890] 0xc420d96780 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 54.219.183.232:443: i/o timeout\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.e2e-kops-aws.test-aws.k8s.io --kubeconfig=/tmp/kops-kubecfg234504859 exec --namespace=e2e-tests-statefulset-slcsf ss-2 -- /bin/sh -c ls -idlh /data] []  <nil>  Unable to connect to the server: dial tcp 54.219.183.232:443: i/o timeout
     [] <nil> 0xc420d9d500 exit status 1 <nil> <nil> true [0xc420eb0050 0xc420eb0068 0xc420eb0080] [0xc420eb0050 0xc420eb0068 0xc420eb0080] [0xc420eb0060 0xc420eb0078] [0x8da890 0x8da890] 0xc420d96780 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 54.219.183.232:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:124

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Apr  2 09:28:49.070: Couldn't delete ns: "e2e-tests-kubectl-6fq98": namespace e2e-tests-kubectl-6fq98 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-6fq98 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:61
wait for pod "pod-configmaps-b758aae9-17c0-11e7-86e3-0242ac110009" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4204178c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4204178c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc420459cb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Apr  2 09:27:45.533: Couldn't delete ns: "e2e-tests-disruption-tqlbg": namespace e2e-tests-disruption-tqlbg was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-tqlbg was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32644

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
Apr  2 09:24:42.724: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-tfqxt to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:374
Expected error:
    <*errors.errorString | 0xc421830d40>: {
        s: "Only 36 pods started out of 40",
    }
    Only 36 pods started out of 40
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:346

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4204534b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32830

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:401
wait for pod "pod-projected-configmaps-b6a8d29f-17c0-11e7-a5ac-0242ac110009" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420406d10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Apr  2 09:28:47.403: Couldn't delete ns: "e2e-tests-kubectl-jr3wf": namespace e2e-tests-kubectl-jr3wf was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-jr3wf was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Apr  2 09:30:37.295: Couldn't delete ns: "e2e-tests-disruption-vf2kt": namespace e2e-tests-disruption-vf2kt was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-vf2kt was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc42043e260>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:118
Expected
    <*errors.errorString | 0xc420459cb0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:96

Issues about this test specifically: #31936

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:85
Expected error:
    <*errors.errorString | 0xc4203af6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc4207e68f0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:954

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:91
Apr  2 09:42:58.427: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:381

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws/7099/
Multiple broken tests:

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 19:10:42.485: Couldn't delete ns: "e2e-tests-pods-8m42n": namespace e2e-tests-pods-8m42n was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-pods-8m42n was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:274

Issues about this test specifically: #38308

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc4212445f0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:985

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 19:11:21.592: Couldn't delete ns: "e2e-tests-kubectl-b8fjj": namespace e2e-tests-kubectl-b8fjj was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-b8fjj was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:274

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:328
Expected error:
    <*errors.errorString | 0xc421180920>: {
        s: "error while waiting for pods gone service1: timed out waiting for the condition",
    }
    error while waiting for pods gone service1: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:307

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:194
waiting for server pod to start
Expected error:
    <*errors.errorString | 0xc4203a9f10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:70

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:117
Did not get a good sample size: []
Less than two runs succeeded; aborting.
Not all RC/pod/service trials succeeded: Only 0 pods started out of 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:88

Issues about this test specifically: #30632

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 19:12:47.541: Couldn't delete ns: "e2e-tests-disruption-408t5": namespace e2e-tests-disruption-408t5 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-408t5 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:274

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203a9030>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32375

@verb commented May 10, 2017

I think this is probably another instance of this: http://gcsweb.k8s.io/gcs/kubernetes-jenkins/pr-logs/pull/45571/pull-kubernetes-e2e-kops-aws/24056/

Timeouts caused by redis, maybe?

@marun commented Jun 14, 2017

I'm assuming this doesn't have to block v1.7. Feel free to move back if required.

@marun marun modified the milestones: v1.8, v1.7 Jun 14, 2017
@spiffxp commented Jun 19, 2017

/remove-priority P2
/priority backlog
(I'm not actually sure this is the right priority, just trying to remove the old priority/PN labels)

@k8s-github-robot commented Sep 9, 2017

[MILESTONENOTIFIER] Milestone Removed

@justinsb @k8s-merge-robot @zmerlynn

Important:
This issue was missing labels required for the v1.8 milestone for more than 7 days:

kind: Must specify exactly one of [kind/bug, kind/cleanup, kind/feature].
priority: Must specify exactly one of [priority/critical-urgent, priority/important-longterm, priority/important-soon].

Removing it from the milestone.

Additional instructions are available here. The commands for adding these labels are documented here.

@k8s-github-robot k8s-github-robot removed this from the v1.8 milestone Sep 9, 2017
@k8s-github-robot commented

This Issue hasn't been active in 90 days. Closing this Issue. Please reopen if you would like to work towards merging this change, if/when the Issue is ready for the next round of review.

cc @justinsb @k8s-merge-robot @zmerlynn

You can add 'keep-open' label to prevent this from happening again, or add a comment to keep it open another 90 days

Labels
kind/flake Categorizes issue or PR as related to a flaky test. milestone/removed priority/backlog Higher priority than priority/awaiting-more-evidence.