
ci-kubernetes-e2e-gci-gke-prod: broken test run #39885

Closed
k8s-github-robot opened this issue Jan 13, 2017 · 6 comments
Labels
kind/flake Categorizes issue or PR as related to a flaky test.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod/214/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82157e630>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821afd410>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82185f7c0>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82133fca0>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821588de0>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821480a80>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821bc24f0>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428
Expected error:
    <*errors.errorString | 0xc82170db50>: {
        s: "error while stopping RC: service2: Get https://130.211.228.109/api/v1/namespaces/e2e-tests-services-umdfe/replicationcontrollers/service2: dial tcp 130.211.228.109:443: getsockopt: connection refused",
    }
    error while stopping RC: service2: Get https://130.211.228.109/api/v1/namespaces/e2e-tests-services-umdfe/replicationcontrollers/service2: dial tcp 130.211.228.109:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8219e75e0>: {
        s: "Namespace e2e-tests-services-umdfe is active",
    }
    Namespace e2e-tests-services-umdfe is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #35279

Previous issues for this suite: #37794 #39046

@k8s-github-robot added the kind/flake and priority/P2 labels on Jan 13, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod/218/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82087acd0>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82138e1c0>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210d4690>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233
getting pod 
Expected error:
    <*url.Error | 0xc820d08e40>: {
        Op: "Get",
        URL: "https://104.198.170.131/api/v1/namespaces/e2e-tests-container-probe-cem2q/pods/liveness-http",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhƪ\x83",
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.198.170.131/api/v1/namespaces/e2e-tests-container-probe-cem2q/pods/liveness-http: dial tcp 104.198.170.131:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:350

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821419460>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821515850>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820970360>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82196e840>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821932680>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8211e8140>: {
        s: "Namespace e2e-tests-container-probe-cem2q is active",
    }
    Namespace e2e-tests-container-probe-cem2q is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod/268/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42231c5f0>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 27 19:27:47.255: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Jan 27 22:12:43.797: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-6a0444c8-wx8c:
 container "kubelet": expected 95th% usage < 0.500; got 0.583
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod/275/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.StatusError | 0xc422bbf180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.156.2.218:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'\") has prevented the request from succeeding (post services test-deployment-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "test-deployment-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.156.2.218:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.156.2.218:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'") has prevented the request from succeeding (post services test-deployment-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 30 02:59:27.753: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:205
starting pod liveness-http in namespace e2e-tests-container-probe-8v94q
Expected error:
    <*errors.errorString | 0xc42038ed10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #37314

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod/278/
Multiple broken tests:

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc422718050>: {
        s: "expected pod \"pod-configmaps-c90f5510-e7bb-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-c90f5510-e7bb-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-c90f5510-e7bb-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-configmaps-c90f5510-e7bb-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34827

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc4219c7a50>: {
        s: "expected pod \"pod-secrets-40f9da97-e7d6-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-secrets-40f9da97-e7d6-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-40f9da97-e7d6-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-secrets-40f9da97-e7d6-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc42133c8b0>: {
        s: "expected pod \"downwardapi-volume-1680d50e-e7a9-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'downwardapi-volume-1680d50e-e7a9-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-1680d50e-e7a9-11e6-b60d-0242ac110008" success: gave up waiting for pod 'downwardapi-volume-1680d50e-e7a9-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:269
Expected error:
    <*errors.errorString | 0xc4223f4ff0>: {
        s: "expected pod \"pod-configmaps-98ecc75a-e7be-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-98ecc75a-e7be-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-98ecc75a-e7be-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-configmaps-98ecc75a-e7be-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37515

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:162
Expected error:
    <*errors.errorString | 0xc4213b1f80>: {
        s: "expected pod \"downwardapi-volume-c13409ce-e7a5-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'downwardapi-volume-c13409ce-e7a5-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-c13409ce-e7a5-11e6-b60d-0242ac110008" success: gave up waiting for pod 'downwardapi-volume-c13409ce-e7a5-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36694

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc42201a780>: {
        s: "expected pod \"pod-dc378e43-e7cf-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-dc378e43-e7cf-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-dc378e43-e7cf-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-dc378e43-e7cf-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #26780

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc421dd7f20>: {
        s: "expected pod \"pod-secrets-b6db5e63-e7c4-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-secrets-b6db5e63-e7c4-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-b6db5e63-e7c4-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-secrets-b6db5e63-e7c4-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc4215a0860>: {
        s: "expected pod \"downwardapi-volume-56b7848c-e7ba-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'downwardapi-volume-56b7848c-e7ba-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-56b7848c-e7ba-11e6-b60d-0242ac110008" success: gave up waiting for pod 'downwardapi-volume-56b7848c-e7ba-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:68
Expected error:
    <*errors.errorString | 0xc421a07140>: {
        s: "expected pod \"pod-configmaps-fbc45faa-e7d6-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-fbc45faa-e7d6-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-fbc45faa-e7d6-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-configmaps-fbc45faa-e7d6-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc4229d6080>: {
        s: "expected pod \"pod-configmaps-3c9ab10e-e7bd-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-3c9ab10e-e7bd-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-3c9ab10e-e7bd-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-configmaps-3c9ab10e-e7bd-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32949

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc422296560>: {
        s: "expected pod \"downwardapi-volume-dde6b7c9-e7c1-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'downwardapi-volume-dde6b7c9-e7c1-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-dde6b7c9-e7c1-11e6-b60d-0242ac110008" success: gave up waiting for pod 'downwardapi-volume-dde6b7c9-e7c1-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36300

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc420d3af00>: {
        s: "expected pod \"pod-073e1531-e7a8-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-073e1531-e7a8-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-073e1531-e7a8-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-073e1531-e7a8-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:48
Expected error:
    <*errors.errorString | 0xc4206ee830>: {
        s: "expected pod \"downwardapi-volume-4212d758-e798-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'downwardapi-volume-4212d758-e798-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-4212d758-e798-11e6-b60d-0242ac110008" success: gave up waiting for pod 'downwardapi-volume-4212d758-e798-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31836

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc4229d7eb0>: {
        s: "expected pod \"pod-configmaps-8a70dddc-e7d4-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-8a70dddc-e7d4-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-8a70dddc-e7d4-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-configmaps-8a70dddc-e7d4-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27245

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
Expected error:
    <*errors.errorString | 0xc421e7c320>: {
        s: "expected pod \"pod-secrets-6d6fa9b0-e7b8-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-secrets-6d6fa9b0-e7b8-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-6d6fa9b0-e7b8-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-secrets-6d6fa9b0-e7b8-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37529

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc421fdf530>: {
        s: "expected pod \"pod-72b39414-e7cc-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-72b39414-e7cc-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-72b39414-e7cc-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-72b39414-e7cc-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31400

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc4224460d0>: {
        s: "expected pod \"pod-497686c1-e7c0-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-497686c1-e7c0-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-497686c1-e7c0-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-497686c1-e7c0-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37500

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc42146bf60>: {
        s: "expected pod \"pod-0dfd5292-e7a3-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-0dfd5292-e7a3-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-0dfd5292-e7a3-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-0dfd5292-e7a3-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #33987

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 31 04:12:24.099: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc422446ec0>: {
        s: "expected pod \"pod-configmaps-8ba5413f-e7bf-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-8ba5413f-e7bf-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-8ba5413f-e7bf-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-configmaps-8ba5413f-e7bf-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29052

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:171
Expected error:
    <*errors.errorString | 0xc422806f30>: {
        s: "expected pod \"downwardapi-volume-80f59a73-e7d5-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'downwardapi-volume-80f59a73-e7d5-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-80f59a73-e7d5-11e6-b60d-0242ac110008" success: gave up waiting for pod 'downwardapi-volume-80f59a73-e7d5-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc4203aad90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:64
Expected error:
    <*errors.errorString | 0xc4213b02c0>: {
        s: "expected pod \"pod-configmaps-05e3d953-e7a5-11e6-b60d-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-05e3d953-e7a5-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-05e3d953-e7a5-11e6-b60d-0242ac110008" success: gave up waiting for pod 'pod-configmaps-05e3d953-e7a5-11e6-b60d-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35790

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154
Expected error:
    <*errors.errorString | 0xc4203aad90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc4203aad90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod/287/
Multiple broken tests:

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Feb  3 07:00:13.274: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 08:02:29.717: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422cd64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 08:37:10.202: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422474ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 08:10:16.828: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b598f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 08:29:57.770: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b6a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Feb  3 07:20:23.046: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc420350d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 08:05:45.351: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b7aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 08:13:35.209: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422738ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 07:51:32.787: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42278b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 07:55:03.750: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421668ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 07:59:06.927: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217b64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  3 08:33:22.768: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dee4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: DiffResources {e2e.go}

Error: 2 leaked resources
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-835e-pvc-f4310a70-ea21-11e6-af0f-42010a8000ea  us-central1-b  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-prod/294/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Feb  5 16:34:22.844: Node gke-bootstrap-e2e-default-pool-4d8f4b5b-l9tk did not become ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:291

Issues about this test specifically: #37259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Feb  5 14:16:19.331: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1166

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc42038acb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33285
