kubernetes-e2e-gke-test: broken test run #28730

Closed

k8s-github-robot opened this issue Jul 9, 2016 · 15 comments
Labels
area/test-infra, kind/flake, priority/critical-urgent

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13355/

Run so broken it didn't make JUnit output!

k8s-github-robot added the priority/backlog, area/test-infra, and kind/flake labels on Jul 9, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13351/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13354/

Run so broken it didn't make JUnit output!

k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Jul 9, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13356/

Run so broken it didn't make JUnit output!

k8s-github-robot added the priority/critical-urgent label and removed the priority/important-soon label on Jul 10, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13358/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13361/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13359/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13362/

Multiple broken tests:

Failed: [k8s.io] Ubernetes Lite should spread the pods of a service across zones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26678

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27115 #28070

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27507 #28275

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 11 13:56:50.467: All nodes should be ready after test, Get https://104.198.198.85/api/v1/nodes: x509: certificate signed by unknown authority

Issues about this test specifically: #26982

Failed: [k8s.io] Pods should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
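
Every failure in this run reports the same generic Gomega-wrapped error, "timed out waiting for the condition", which is what the e2e framework's polling helpers return when a condition never becomes true before its deadline; a whole run failing this way usually points at cluster bring-up or apiserver trouble rather than the individual tests. A minimal standalone Go sketch of that poll-until-timeout shape (the condition here is hypothetical; the real framework uses its own wait helpers):

```go
// Standalone sketch of the poll-until-timeout pattern behind the
// "timed out waiting for the condition" failures above.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil retries condition every interval until it returns true,
// returns an error, or the timeout expires.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	// A condition that never succeeds, to show the timeout path the
	// failures above are hitting.
	err := pollUntil(100*time.Millisecond, time.Second, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```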

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13368/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820bf80f0>: {
        s: "timeout waiting 10m0s for node instance group size to be 2",
    }
    timeout waiting 10m0s for node instance group size to be 2
not to have occurred

Issues about this test specifically: #27233

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 13 00:17:03.015: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-55da731c-m44p   /api/v1/nodes/gke-jenkins-e2e-default-pool-55da731c-m44p b28f1432-48ae-11e6-868a-42010af00039 12861 0 {2016-07-12 21:03:03 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-55da731c-m44p beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.0.0/24 340665950542497008 gce://k8s-jkns-e2e-gke-test/us-central1-f/gke-jenkins-e2e-default-pool-55da731c-m44p false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-07-12 21:04:33 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-07-13 00:16:10 -0700 PDT} {2016-07-13 00:16:55 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-07-13 00:16:10 -0700 PDT} {2016-07-12 21:03:03 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-07-13 00:16:10 -0700 PDT} {2016-07-13 00:16:55 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.2} {ExternalIP 130.211.132.35}] {{10250}} { 73717D95-03CA-1A03-15A8-92F2AB47BB92 564a9ed1-c08a-4c30-b135-bcb03c4652a0 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://Unknown v1.3.0 v1.3.0 linux amd64} [] [] []}}]

Failed: [k8s.io] Pods should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 13 00:19:01.589: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-55da731c-m44p   /api/v1/nodes/gke-jenkins-e2e-default-pool-55da731c-m44p b28f1432-48ae-11e6-868a-42010af00039 12861 0 {2016-07-12 21:03:03 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-55da731c-m44p beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.0.0/24 340665950542497008 gce://k8s-jkns-e2e-gke-test/us-central1-f/gke-jenkins-e2e-default-pool-55da731c-m44p false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-07-12 21:04:33 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-07-13 00:16:10 -0700 PDT} {2016-07-13 00:16:55 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-07-13 00:16:10 -0700 PDT} {2016-07-12 21:03:03 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-07-13 00:16:10 -0700 PDT} {2016-07-13 00:16:55 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.2} {ExternalIP 130.211.132.35}] {{10250}} { 73717D95-03CA-1A03-15A8-92F2AB47BB92 564a9ed1-c08a-4c30-b135-bcb03c4652a0 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://Unknown v1.3.0 v1.3.0 linux amd64} [] [] []}}]

Issues about this test specifically: #28332

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 13 00:20:08.723: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-55da731c-m44p   /api/v1/nodes/gke-jenkins-e2e-default-pool-55da731c-m44p b28f1432-48ae-11e6-868a-42010af00039 12861 0 {2016-07-12 21:03:03 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-55da731c-m44p beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.0.0/24 340665950542497008 gce://k8s-jkns-e2e-gke-test/us-central1-f/gke-jenkins-e2e-default-pool-55da731c-m44p false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-07-12 21:04:33 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-07-13 00:16:10 -0700 PDT} {2016-07-13 00:16:55 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-07-13 00:16:10 -0700 PDT} {2016-07-12 21:03:03 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-07-13 00:16:10 -0700 PDT} {2016-07-13 00:16:55 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.2} {ExternalIP 130.211.132.35}] {{10250}} { 73717D95-03CA-1A03-15A8-92F2AB47BB92 564a9ed1-c08a-4c30-b135-bcb03c4652a0 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://Unknown v1.3.0 v1.3.0 linux amd64} [] [] []}}]

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Jul 13 00:21:21.259: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-55da731c-0c7t   /api/v1/nodes/gke-jenkins-e2e-default-pool-55da731c-0c7t 5a4f912f-48ca-11e6-a84a-42010af0001c 12999 0 {2016-07-13 00:21:01 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-55da731c-0c7t beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 190577712793409383 gce://k8s-jkns-e2e-gke-test/us-central1-f/gke-jenkins-e2e-default-pool-55da731c-0c7t false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable True {0001-01-01 00:00:00 +0000 UTC} {2016-07-13 00:21:01 -0700 PDT} NoRouteCreated Node created without a route} {OutOfDisk False {2016-07-13 00:21:18 -0700 PDT} {2016-07-13 00:21:01 -0700 PDT} KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False {2016-07-13 00:21:18 -0700 PDT} {2016-07-13 00:21:01 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready False {2016-07-13 00:21:18 -0700 PDT} {2016-07-13 00:21:01 -0700 PDT} KubeletNotReady ConfigureCBR0 requested, but PodCIDR not set. Will not configure CBR0 right now,container runtime is down. WARNING: CPU hardcapping unsupported}] [{InternalIP 10.240.0.4} {ExternalIP 104.198.230.158}] {{10250}} { AC5F9C0E-506E-ADEA-2133-D692FC2ABFDC 780775bb-5c59-445c-8252-3d628fd342ac 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://Unknown v1.3.0 v1.3.0 linux amd64} [] [] []}}]

Issues about this test specifically: #28827
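
Most of the failures in this run trip the post-test check "All nodes should be ready after test": one node's Ready condition flips to Unknown after "Kubelet stopped posting node status", and a freshly created node reports KubeletNotReady. A small client-go sketch of listing each node's Ready condition, roughly what that check inspects (this uses current client-go, not the 2016 framework code, and assumes a local kubeconfig):

```go
// List every node's Ready condition. Sketch only: current client-go API,
// default kubeconfig path assumed.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodeReady {
				// A healthy node prints Ready=True; the dumps above show
				// Ready=Unknown (NodeStatusUnknown) and Ready=False (KubeletNotReady).
				fmt.Printf("%s Ready=%s (%s)\n", node.Name, cond.Status, cond.Reason)
			}
		}
	}
}
```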

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13377/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13403/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13399/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13400/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13401/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13405/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13409/

Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28657

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Mesos starts static pods on every node in the mesos cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27360 #28096

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26780

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27503

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28339

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Expected error:
    <*url.Error | 0xc8220451d0>: {
        Op: "Get",
        URL: "https://104.197.241.131/apis/extensions/v1beta1/namespaces/e2e-tests-horizontal-pod-autoscaling-tu4qh/replicasets/rs",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\xc5\xf1\x83",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.197.241.131/apis/extensions/v1beta1/namespaces/e2e-tests-horizontal-pod-autoscaling-tu4qh/replicasets/rs: dial tcp 104.197.241.131:443: getsockopt: connection refused
not to have occurred

Issues about this test specifically: #27397 #27917

Failed: ThirdParty resources Simple Third Party creating/deleting thirdparty objects works [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28426

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27673

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27479 #27675 #28097

Failed: [k8s.io] Pods should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200e80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28332
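
This run repeats the generic poll timeout across the board, except for the ReplicaSet HPA entry, which dies earlier with a raw "connection refused" while dialing the master at 104.197.241.131:443, i.e. nothing was accepting connections on the apiserver port at that moment. A minimal standalone probe for that kind of apiserver reachability failure (the address is copied from the log above; everything else is illustrative):

```go
// Quick TCP reachability probe for the apiserver endpoint the test could
// not dial. Sketch only; the address comes from the failure log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "104.197.241.131:443" // master address from the failed run
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Expect "connection refused" while the apiserver is down or restarting.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connect to apiserver succeeded")
}
```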
