kubernetes-e2e-gke-staging-parallel: broken test run #28413

Closed
k8s-github-robot opened this issue Jul 2, 2016 · 46 comments
Labels: area/test-infra, kind/flake, priority/critical-urgent

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/5043/

Run so broken it didn't make JUnit output!

k8s-github-robot added the priority/backlog, area/test-infra, and kind/flake labels on Jul 2, 2016
k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Jul 2, 2016
k8s-github-robot added the priority/critical-urgent label and removed the priority/important-soon label on Jul 3, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/5221/

Multiple broken tests:

Failed: PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Jul  5 15:44:14.057: Couldn't delete ns "e2e-tests-prestop-i9i4e": namespace e2e-tests-prestop-i9i4e was not deleted within limit: timed out waiting for the condition, pods remaining: [server]

Failed: Kubectl client Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:456
Expected error:
    <*errors.errorString | 0xc20835ffd0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.75.216 --kubeconfig=/workspace/.kube/config log goproxy --namespace=e2e-tests-kubectl-cbs95] []  <nil>  Error from server: container \"goproxy\" in pod \"goproxy\" is waiting to start: ContainerCreating\n [] <nil> 0xc2084a4180 exit status 1 <nil> true [0xc2082aef50 0xc2082aef88 0xc2082aefb0] [0xc2082aef50 0xc2082aef88 0xc2082aefb0] [0xc2082aef80 0xc2082aefa8] [0x968c80 0x968c80] 0xc2082e9a40}:\nCommand stdout:\n\nstderr:\nError from server: container \"goproxy\" in pod \"goproxy\" is waiting to start: ContainerCreating\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.75.216 --kubeconfig=/workspace/.kube/config log goproxy --namespace=e2e-tests-kubectl-cbs95] []  <nil>  Error from server: container "goproxy" in pod "goproxy" is waiting to start: ContainerCreating
     [] <nil> 0xc2084a4180 exit status 1 <nil> true [0xc2082aef50 0xc2082aef88 0xc2082aefb0] [0xc2082aef50 0xc2082aef88 0xc2082aefb0] [0xc2082aef80 0xc2082aefa8] [0x968c80 0x968c80] 0xc2082e9a40}:
    Command stdout:

    stderr:
    Error from server: container "goproxy" in pod "goproxy" is waiting to start: ContainerCreating

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #28455

Failed: kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:156
Expected error:
    <*errors.errorString | 0xc2095022a0>: {
        s: "Only 20 pods started out of 30",
    }
    Only 20 pods started out of 30
not to have occurred

Failed: V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Jul  5 15:43:39.801: Couldn't delete ns "e2e-tests-v1job-k2fqj": namespace e2e-tests-v1job-k2fqj was not deleted within limit: timed out waiting for the condition, pods remaining: [scale-up-q91iq]

Failed: Kubectl client Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Jul  5 15:44:08.350: Couldn't delete ns "e2e-tests-kubectl-4o671": namespace e2e-tests-kubectl-4o671 was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-yo2i3]

Failed: Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Jul  5 15:44:16.272: Couldn't delete ns "e2e-tests-kubectl-iwbru": namespace e2e-tests-kubectl-iwbru was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-xm2j3]

Failed: Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:213
Expected error:
    <*errors.errorString | 0xc20845b3b0>: {
        s: "gave up waiting for pod 'nettest-zjr9q' to be 'running' after 5m0s",
    }
    gave up waiting for pod 'nettest-zjr9q' to be 'running' after 5m0s
not to have occurred

Issues about this test specifically: #27369

Failed: Kubectl client Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:101
Jul  5 15:44:12.746: Couldn't delete ns "e2e-tests-kubectl-j6fhx": namespace e2e-tests-kubectl-j6fhx was not deleted within limit: timed out waiting for the condition, pods remaining: [frontend-1211764471-rrzrn redis-slave-1691881626-rf26l]

Failed: EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir.go:60
Expected error:
    <*errors.errorString | 0xc208509390>: {
        s: "gave up waiting for pod 'pod-46549cf4-4301-11e6-93e0-0242ac110007' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-46549cf4-4301-11e6-93e0-0242ac110007' to be 'success or failure' after 5m0s
not to have occurred

Failed: Pods should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:719
Jul  5 15:41:02.988: pod e2e-tests-pods-x418r/liveness-http - expected number of restarts: %!t(int=1), found restarts: %!t(int=0)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/5687/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc8200ec0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28337

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 15 10:25:50.116: Couldn't delete ns "e2e-tests-kubectl-dz6vu": namespace e2e-tests-kubectl-dz6vu was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-qsdya]

Issues about this test specifically: #27524

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:225
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26960 #27235

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/5717/

Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:261
0: path /api/v1/namespaces/e2e-tests-proxy-2kr0x/pods/proxy-service-uc8qe-5w509/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'http://10.180.1.4:80/'" field:"" > retryAfterSeconds:0  Code:503}

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:86
pod never became ready
Expected error:
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 16 01:11:22.887: Couldn't delete ns "e2e-tests-kubectl-jghkp": namespace e2e-tests-kubectl-jghkp was not deleted within limit: timed out waiting for the condition, pods remaining: []

Issues about this test specifically: #27507 #28275

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820d25b80>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred

Issues about this test specifically: #27196 #28998

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8200d60b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
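
Several failures in this run reduce to pods that never became Running or Ready within the test timeout. A minimal triage sketch, assuming access to the cluster while the test namespace still exists; <test-namespace> and <pod-name> are placeholders, not values from this run:

    # Pod phase and node placement (placeholders, not actual test objects):
    kubectl get pods --namespace=<test-namespace> -o wide
    # Recent events usually surface image-pull, scheduling, or node problems:
    kubectl get events --namespace=<test-namespace> --sort-by=.lastTimestamp
    kubectl describe pod <pod-name> --namespace=<test-namespace>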

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/5997/

Multiple broken tests:

Failed: [k8s.io] Pods should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1104
Jul 21 23:41:40.497: pod e2e-tests-pods-j36l0/liveness-http - expected number of restarts: %!t(int=1), found restarts: %!t(int32=0)

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:153
Timed out after 300.000s.
Expected
    <string>: content of file "/etc/configmap-volume/data-1": value-1

to contain substring
    <string>: value-2

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/container_probe.go:111
pod should have a restart count of 0 but got 1
Expected
    <bool>: false
to be true

Issues about this test specifically: #28084

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:401
Expected error:
    <*errors.errorString | 0xc8201318c0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.191.43 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-b34id run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820137420 Waiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false\nError attaching, falling back to logs: ssh: unexpected packet in response to channel open: <nil>\n Error from server: Get https://gke-jenkins-e2e-default-pool-de589ab4-jru1:10250/containerLogs/e2e-tests-kubectl-b34id/run-test-poge7/run-test: write tcp 10.240.0.4:36616->104.198.37.54:22: use of closed network connection\n [] <nil> 0xc820137e60 exit status 1 <nil> true [0xc820032280 0xc8200322a8 0xc820032308] [0xc820032280 0xc8200322a8 0xc820032308] [0xc820032288 0xc8200322a0 0xc820032300] [0xa8fe20 0xa8ff80 0xa8ff80] 0xc820ef0960}:\nCommand stdout:\nWaiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false\nError attaching, falling back to logs: ssh: unexpected packet in response to channel open: <nil>\n\nstderr:\nError from server: Get https://gke-jenkins-e2e-default-pool-de589ab4-jru1:10250/containerLogs/e2e-tests-kubectl-b34id/run-test-poge7/run-test: write tcp 10.240.0.4:36616->104.198.37.54:22: use of closed network connection\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.191.43 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-b34id run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc820137420 Waiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false
    Error attaching, falling back to logs: ssh: unexpected packet in response to channel open: <nil>
     Error from server: Get https://gke-jenkins-e2e-default-pool-de589ab4-jru1:10250/containerLogs/e2e-tests-kubectl-b34id/run-test-poge7/run-test: write tcp 10.240.0.4:36616->104.198.37.54:22: use of closed network connection
     [] <nil> 0xc820137e60 exit status 1 <nil> true [0xc820032280 0xc8200322a8 0xc820032308] [0xc820032280 0xc8200322a8 0xc820032308] [0xc820032288 0xc8200322a0 0xc820032300] [0xa8fe20 0xa8ff80 0xa8ff80] 0xc820ef0960}:
    Command stdout:
    Waiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-b34id/run-test-poge7 to be running, status is Pending, pod ready: false
    Error attaching, falling back to logs: ssh: unexpected packet in response to channel open: <nil>

    stderr:
    Error from server: Get https://gke-jenkins-e2e-default-pool-de589ab4-jru1:10250/containerLogs/e2e-tests-kubectl-b34id/run-test-poge7/run-test: write tcp 10.240.0.4:36616->104.198.37.54:22: use of closed network connection

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:261
3: path /api/v1/proxy/namespaces/e2e-tests-proxy-r3p7r/pods/http:proxy-service-e36wx-a0w93:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'http://10.180.2.6:160/'" field:"" > retryAfterSeconds:0  Code:503}
4: path /api/v1/namespaces/e2e-tests-proxy-r3p7r/pods/https:proxy-service-e36wx-a0w93:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'https://10.180.2.6:462/'" field:"" > retryAfterSeconds:0  Code:503}
4: path /api/v1/namespaces/e2e-tests-proxy-r3p7r/pods/https:proxy-service-e36wx-a0w93:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'https://10.180.2.6:443/'" field:"" > retryAfterSeconds:0  Code:503}

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:335
Jul 21 23:40:08.282: Failed to read from kubectl port-forward stdout: EOF

Issues about this test specifically: #27673
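
The port-forward EOF above can in principle be reproduced by hand against a live pod. A rough sketch, with <pod-name>, <test-namespace>, and the ports as placeholders rather than values from this run:

    # Forward a local port to the test pod (placeholders only):
    kubectl port-forward <pod-name> 8080:80 --namespace=<test-namespace>
    # In a second shell: connect, send nothing, and disconnect, as the test does.
    nc -w 1 127.0.0.1 8080 </dev/null
    # An immediate EOF on kubectl's stdout mirrors the failure above.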

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/6057/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 23 05:42:59.616: Couldn't delete ns "e2e-tests-kubectl-rkktd": namespace e2e-tests-kubectl-rkktd was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-master-0nfkl]

Issues about this test specifically: #27524

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:759
Jul 23 05:43:44.538: Verified 0 of 1 pods , error : timed out waiting for the condition

Issues about this test specifically: #26139 #28342 #28439

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:209
Jul 23 05:45:41.828: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Issues about this test specifically: #28565 #29072 #29390

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:56
Expected error:
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27023

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820b31b50>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred

Issues about this test specifically: #27196 #28998

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Expected error:
    <*errors.errorString | 0xc820980030>: {
        s: "gave up waiting for pod 'pod-service-account-4e610da0-50d2-11e6-b392-0242ac110003' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-service-account-4e610da0-50d2-11e6-b392-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:403
Jul 23 05:38:54.769: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #28064 #28569

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc820d5ec00>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred

Issues about this test specifically: #27443 #27835 #28900
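
For the HPA failures ("Only 0 pods started out of 1" and the 15m scale timeouts), a first step is to compare what the autoscaler requested with what the ReplicationController actually created. A rough sketch; <hpa-name>, <rc-name>, and <test-namespace> are placeholders, since the e2e objects are created and torn down by the run itself:

    # Autoscaler status: current vs. desired replicas (placeholders only):
    kubectl get hpa --namespace=<test-namespace>
    kubectl describe hpa <hpa-name> --namespace=<test-namespace>
    # The RC's events show whether the missing replicas were ever created or scheduled:
    kubectl describe rc <rc-name> --namespace=<test-namespace>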

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/6177/

Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc8201e6e50>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-07-25 17:39:27 -0700 PDT} FinishedAt:{Time:2016-07-25 17:39:57 -0700 PDT} ContainerID:docker://efc4839bf8751c08f9395c0858fd2a1b68cf09884db44b34a6716eaa6d4d617d}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-07-25 17:39:27 -0700 PDT} FinishedAt:{Time:2016-07-25 17:39:57 -0700 PDT} ContainerID:docker://efc4839bf8751c08f9395c0858fd2a1b68cf09884db44b34a6716eaa6d4d617d}
not to have occurred

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc8200ec0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28337

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:960
Jul 25 17:46:43.691: expected un-ready endpoint for Service webserver within 5m0s, stdout: 

Issues about this test specifically: #26172

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
Jul 25 17:40:26.091: Missing KubeDNS in kubectl cluster-info

Issues about this test specifically: #28420

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Jul 25 17:51:21.488: Frontend service did not start serving content in 600 seconds.

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jul 25 17:57:23.293: timeout waiting 15m0s for pods size to be 2

Issues about this test specifically: #27443 #27835 #28900

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200d60b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168 #27450

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/6319/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:265
Jul 28 17:22:15.593: Frontend service did not start serving content in 600 seconds.

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc820079f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28337

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jul 28 17:30:11.952: timeout waiting 15m0s for pods size to be 2

Issues about this test specifically: #27443 #27835 #28900

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:960
Jul 28 17:16:53.390: expected un-ready endpoint for Service webserver within 5m0s, stdout: 

Issues about this test specifically: #26172

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc820aebfc0>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-07-28 17:09:50 -0700 PDT} FinishedAt:{Time:2016-07-28 17:10:20 -0700 PDT} ContainerID:docker://76448e6852ee380b4556eb1feb1a4a173cdc13b4fda7087bcc4c3297ace87125}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-07-28 17:09:50 -0700 PDT} FinishedAt:{Time:2016-07-28 17:10:20 -0700 PDT} ContainerID:docker://76448e6852ee380b4556eb1feb1a4a173cdc13b4fda7087bcc4c3297ace87125}
not to have occurred

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:483
Jul 28 17:11:30.838: Missing KubeDNS in kubectl cluster-info

Issues about this test specifically: #28420

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:168
Expected error:
    <*errors.errorString | 0xc820ab1d40>: {
        s: "gave up waiting for pod 'pod-secrets-0407e1ab-5521-11e6-a5ea-0242ac110004' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-secrets-0407e1ab-5521-11e6-a5ea-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:279
Expected error:
    <*errors.errorString | 0xc820302ae0>: {
        s: "gave up waiting for pod 'pod-configmaps-11e9c5d9-5521-11e6-970b-0242ac110004' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-configmaps-11e9c5d9-5521-11e6-970b-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred

Issues about this test specifically: #29751
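
The DNS and cluster-info failures in this run point at the kube-dns addon rather than at any single test. A minimal health-check sketch; the k8s-app=kube-dns label matches the addon of this era, and <kube-dns-pod-name> is a placeholder:

    # cluster-info should list KubeDNS alongside the master:
    kubectl cluster-info
    # The kube-dns addon pods should be Running in kube-system:
    kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
    kubectl describe pod <kube-dns-pod-name> --namespace=kube-system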

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/6522/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Aug  1 22:55:49.250: timeout waiting 15m0s for pods size to be 2

Issues about this test specifically: #27443 #27835 #28900

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:335
Aug  1 22:41:03.563: Failed to read from kubectl port-forward stdout: EOF

Issues about this test specifically: #27673

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:311
Expected error:
    <*errors.errorString | 0xc82096d0a0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.138.153 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-wtxiz -i nginx bash] []  0xc8200ca078  Error from server: ssh: unexpected packet in response to channel open: <nil>\n [] <nil> 0xc82096bd60 exit status 1 <nil> true [0xc8200ca078 0xc8200ca098 0xc8200ca0c8] [0xc8200ca098 0xc8200ca0c8] [0xc8200ca090 0xc8200ca0a8] [0xa96280 0xa96280] 0xc820de7260}:\nCommand stdout:\n\nstderr:\nError from server: ssh: unexpected packet in response to channel open: <nil>\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.138.153 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-wtxiz -i nginx bash] []  0xc8200ca078  Error from server: ssh: unexpected packet in response to channel open: <nil>
     [] <nil> 0xc82096bd60 exit status 1 <nil> true [0xc8200ca078 0xc8200ca098 0xc8200ca0c8] [0xc8200ca098 0xc8200ca0c8] [0xc8200ca090 0xc8200ca0a8] [0xa96280 0xa96280] 0xc820de7260}:
    Command stdout:

    stderr:
    Error from server: ssh: unexpected packet in response to channel open: <nil>

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #28426

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:279
Expected error:
    <*errors.errorString | 0xc820566880>: {
        s: "gave up waiting for pod 'pod-configmaps-7026b8a0-5873-11e6-b89e-0242ac11000c' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-configmaps-7026b8a0-5873-11e6-b89e-0242ac11000c' to be 'success or failure' after 5m0s
not to have occurred

Issues about this test specifically: #29751

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/secrets.go:168
Expected error:
    <*errors.errorString | 0xc82026eb10>: {
        s: "gave up waiting for pod 'pod-secrets-660d1d33-5873-11e6-83c2-0242ac11000c' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pod-secrets-660d1d33-5873-11e6-83c2-0242ac11000c' to be 'success or failure' after 5m0s
not to have occurred

fejta closed this as completed on Aug 4, 2016
lavalamp commented Aug 4, 2016

@fejta did you close this because you fixed something about it?
