ci-kubernetes-soak-gke-test: broken test run #37898

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 149 comments
Labels: area/test-infra · kind/flake (categorizes issue or PR as related to a flaky test) · priority/backlog (higher priority than priority/awaiting-more-evidence) · sig/node (categorizes an issue or PR as relevant to SIG Node)

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/145/

Run so broken it didn't make JUnit output!

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 2, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/187/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/188/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/189/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/405/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/489/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jan  2 20:53:48.887: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-63f548dc-t7v7:
  container "runtime": expected RSS memory (MB) < 314572800; got 327909376
node gke-bootstrap-e2e-default-pool-63f548dc-x6ws:
  container "runtime": expected RSS memory (MB) < 314572800; got 357244928
node gke-bootstrap-e2e-default-pool-63f548dc-zzl2:
  container "runtime": expected RSS memory (MB) < 314572800; got 332374016
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
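
The check behind this "Memory usage exceeding limits" message compares the RSS sampled for each tracked container against a per-container ceiling (note the values are raw bytes despite the "(MB)" in the message; 314572800 is 300 MiB). A minimal sketch of that pattern, with hypothetical names rather than the actual kubelet_perf.go code:

    package sketch

    import (
        "fmt"
        "strings"
    )

    // verifyMemoryLimits aggregates every container whose observed RSS
    // exceeds its configured limit into one failure message, mirroring
    // the log format above. Illustrative only.
    func verifyMemoryLimits(limits, usage map[string]uint64) error {
        var violations []string
        for container, limit := range limits {
            rss, ok := usage[container]
            if !ok {
                continue // no sample collected for this container
            }
            if rss > limit {
                violations = append(violations, fmt.Sprintf(
                    "container %q: expected RSS memory (MB) < %d; got %d",
                    container, limit, rss))
            }
        }
        if len(violations) > 0 {
            return fmt.Errorf("Memory usage exceeding limits:\n%s",
                strings.Join(violations, "\n"))
        }
        return nil
    }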

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan  2 21:23:09.549: Couldn't delete ns: "e2e-tests-sched-pred-g15cq": namespace e2e-tests-sched-pred-g15cq was not deleted with limit: timed out waiting for the condition, pods remaining: 9, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-sched-pred-g15cq was not deleted with limit: timed out waiting for the condition, pods remaining: 9, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
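
The "was not deleted with limit: timed out waiting for the condition" failure comes from the framework's namespace teardown, which deletes the test namespace and then polls until the API server reports it gone; with 9 pods stuck terminating, the poll runs out of time. A rough sketch of that wait, written against current client-go (the helper name is hypothetical, not the framework's actual code):

    package sketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNamespaceDeletion polls until the namespace returns NotFound
    // or the timeout elapses; pods that never finish terminating keep the
    // namespace alive and produce the timeout seen above.
    func waitForNamespaceDeletion(c kubernetes.Interface, ns string, timeout time.Duration) error {
        return wait.Poll(2*time.Second, timeout, func() (bool, error) {
            _, err := c.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // namespace fully deleted
            }
            return false, nil // still terminating (or transient error); keep polling
        })
    }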

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jan  3 01:29:57.922: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-63f548dc-x6ws:
 container "runtime": expected RSS memory (MB) < 131072000; got 133832704
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/517/
Multiple broken tests:

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:138
Expected error:
    <*errors.errorString | 0xc421a63370>: {
        s: "timeout waiting 15m0s for cluster size to be 3",
    }
    timeout waiting 15m0s for cluster size to be 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:134

Issues about this test specifically: #36457
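
"timeout waiting 15m0s for cluster size to be 3" means the autoscaling test resized the node pool and then waited for the ready-node count to settle at the target, which never happened. The wait loop is roughly this shape (a sketch assuming current client-go, not the test's exact helper):

    package sketch

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForClusterSize polls the node list until the number of Ready
    // nodes equals size, failing with the same message format as above.
    func waitForClusterSize(c kubernetes.Interface, size int, timeout time.Duration) error {
        for start := time.Now(); time.Since(start) < timeout; time.Sleep(20 * time.Second) {
            nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                continue // transient API error; retry
            }
            ready := 0
            for _, n := range nodes.Items {
                for _, cond := range n.Status.Conditions {
                    if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
                        ready++
                        break
                    }
                }
            }
            if ready == size {
                return nil
            }
        }
        return fmt.Errorf("timeout waiting %v for cluster size to be %d", timeout, size)
    }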

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:62
Expected error:
    <*errors.errorString | 0xc42038e670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:61

Issues about this test specifically: #31938

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:207
Expected error:
    <*errors.errorString | 0xc42038e670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:169

Issues about this test specifically: #28283

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc422880010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:889

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan 10 12:08:01.564: Couldn't delete ns: "e2e-tests-disruption-pl3v7": namespace e2e-tests-disruption-pl3v7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-pl3v7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:279
error waiting for daemon pod to not be running on nodes
Expected error:
    <*errors.errorString | 0xc42038e670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:274

Issues about this test specifically: #30441

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc422d94b70>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:777
Jan 10 08:02:45.581: Could not reach HTTP service through 35.184.38.229:30360 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:669

Issues about this test specifically: #26134
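
This failure is a plain reachability probe: the test hits the service's NodePort over HTTP and retries until it gets a 200 or the 5m budget is spent. A sketch of the loop (illustrative, not the service_util.go implementation):

    package sketch

    import (
        "fmt"
        "net/http"
        "time"
    )

    // pollHTTP retries GET http://addr until it returns 200 OK or the
    // timeout expires, matching the "Could not reach HTTP service
    // through <ip:port> after 5m0s" failure mode above.
    func pollHTTP(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://" + addr)
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    return nil // service reachable
                }
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("could not reach HTTP service through %s after %v", addr, timeout)
    }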

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421e9e5c0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc4228805f0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:398

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc422dae010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:479

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc42038e670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:66
Expected error:
    <*errors.StatusError | 0xc421b3c580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'ssh: rejected: administratively prohibited (open failed)'\\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-8b56d03d-lcg3:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'ssh: rejected: administratively prohibited (open failed)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-8b56d03d-lcg3:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'ssh: rejected: administratively prohibited (open failed)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-8b56d03d-lcg3:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:326

Issues about this test specifically: #35422
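
The 503 here is not the kubelet rejecting the request: the test asks the API server to proxy to the kubelet's /logs endpoint through the node "proxy" subresource, and on GKE that hop rides the master-to-node SSH tunnel, which is what answered "administratively prohibited (open failed)". The request itself looks roughly like this (a client-go sketch; the function name is hypothetical):

    package sketch

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // nodeProxyLogs fetches /logs/ from a node's kubelet (port 10250)
    // via the apiserver's node proxy subresource, the same path the
    // failing test exercises.
    func nodeProxyLogs(c kubernetes.Interface, node string) ([]byte, error) {
        return c.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name(node + ":10250").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw(context.TODO())
    }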

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:103
Expected error:
    <*errors.errorString | 0xc421d1a410>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:81

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc42038e670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550

Issues about this test specifically: #33887

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/841/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28657 #30519 #33878
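
Nearly every failure in this run reports the identical error from framework.go:229, i.e. the framework's per-test setup never completed; the string itself is the stock timeout error from the apimachinery wait package, so it carries no test-specific detail. A minimal reproduction of where that exact message comes from (an illustrative helper, not framework code):

    package sketch

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodRunning polls a pod's phase; if the poll never succeeds,
    // wait.PollImmediate returns wait.ErrWaitTimeout, whose Error() is
    // exactly "timed out waiting for the condition".
    func waitForPodRunning(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // pod not visible yet; keep waiting
            }
            return pod.Status.Phase == v1.PodRunning, nil
        })
    }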

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] AppArmor should enforce an AppArmor profile {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36457

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35277

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35579

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jan 19 17:16:45.487: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-19aeb49e-pwkz:
 container "runtime": expected RSS memory (MB) < 314572800; got 314720256
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37428

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36242

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36569 #38446

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Jan 19 16:01:09.787: Failed to delete netserver-0 pod: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:566

Issues about this test specifically: #33887

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29994

Failed: [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] CronJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29516

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Mesos starts static pods on every node in the mesos cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #32639

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31428

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 19 20:06:28.776: Couldn't delete ns: "e2e-tests-services-hx899": Get https://104.154.198.100/api: dial tcp 104.154.198.100:443: getsockopt: connection refused (&url.Error{Op:"Get", URL:"https://104.154.198.100/api", Err:(*net.OpError)(0xc421b37900)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28346

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34064

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #32087

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34250

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28003

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36109

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34372

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29828

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34367

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34520

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37027

Failed: [k8s.io] Garbage collector should orphan pods created by rc if delete options say so {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27079

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy stateful pod restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #38573

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42037f3d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1059/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jan 28 20:17:16.196: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-900d481e-t3g6:
 container "runtime": expected RSS memory (MB) < 314572800; got 329793536
node gke-bootstrap-e2e-default-pool-900d481e-wr9h:
 container "runtime": expected RSS memory (MB) < 314572800; got 336932864
node gke-bootstrap-e2e-default-pool-900d481e-x1wv:
 container "runtime": expected RSS memory (MB) < 314572800; got 322551808
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
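
A note on units in these memory reports: despite the "(MB)" label, the figures are raw bytes. A quick conversion sketch (byte values copied from the limits quoted above; the MiB figures follow from 1 MiB = 1048576 bytes):

```go
package main

import "fmt"

func main() {
	const MiB = 1 << 20
	limits := []struct {
		name  string
		bytes int64
	}{
		{`"runtime" limit`, 314572800},             // 300 MiB
		{`"kubelet" limit at 100 pods`, 104857600}, // 100 MiB
		{`"kubelet" limit at 0 pods`, 73400320},    // 70 MiB
	}
	for _, l := range limits {
		fmt.Printf("%s: %d bytes = %.0f MiB\n", l.name, l.bytes, float64(l.bytes)/MiB)
	}
}
```

So the failing nodes above are only a few MiB over their budgets, which is consistent with this test's long flake history.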

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:561
Not scheduled Pods: []v1.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #30078 #30142
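
For context, a hedged sketch of the kind of spec this predicate test exercises: a pod whose required anti-affinity matches labels already running on every candidate node should stay Pending, so the suite expects exactly one not-scheduled pod and here found zero. The label values below are illustrative, not the test's own:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative selector: "do not co-locate me with any pod labeled
	// service=securityscan on the same node".
	affinity := &v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"service": "securityscan"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	term := affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0]
	fmt.Printf("forbids co-location with pods matching %v per %s\n",
		term.LabelSelector.MatchLabels, term.TopologyKey)
}
```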

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jan 28 21:39:04.276: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-900d481e-t3g6:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73584640
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1060/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jan 29 02:21:36.219: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-900d481e-wr9h:
 container "runtime": expected RSS memory (MB) < 314572800; got 341553152
node gke-bootstrap-e2e-default-pool-900d481e-x1wv:
 container "runtime": expected RSS memory (MB) < 314572800; got 329285632
node gke-bootstrap-e2e-default-pool-900d481e-t3g6:
 container "runtime": expected RSS memory (MB) < 314572800; got 329879552
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jan 29 03:00:21.519: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-900d481e-t3g6:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74063872
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:561
Not scheduled Pods: []v1.Pod(nil)
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:933

Issues about this test specifically: #30078 #30142

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1071/
Multiple broken tests:

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting sleeping pod which mounts an NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc42032e6d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282
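
When these HostCleanup cases time out, the wait at kubelet.go:282 never saw the kubelet finish unmounting the deleted pod's NFS volume. A diagnostic sketch for checking the node by hand, assuming the conventional kubelet pod directory /var/lib/kubelet/pods (an assumption, not something taken from the test):

```go
// Lists NFS mounts still held under the kubelet's pod directory.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sh", "-c",
		"grep nfs /proc/mounts | grep /var/lib/kubelet/pods || true").Output()
	if err != nil {
		fmt.Println("mount check failed:", err)
		return
	}
	if len(out) == 0 {
		fmt.Println("no stale NFS mounts under the kubelet pod directory")
		return
	}
	fmt.Printf("possible stale NFS mounts:\n%s", out)
}
```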

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72
Some log lines are still missing
Expected
    <int>: 100
to equal
    <int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:71

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting a pod accessing the NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc42032e6d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1073/
Multiple broken tests:

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting a pod accessing the NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc42036fa90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting sleeping pod which mounts an NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc42036fa90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
    <*errors.errorString | 0xc422c44b70>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-01 17:16:16.581589729 -0800 PST 2017-02-01 17:16:16.581589907 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-01 17:16:16.612231721 -0800 PST 2017-02-01 17:16:16.570699293 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
    }
    deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-01 17:16:16.581589729 -0800 PST 2017-02-01 17:16:16.581589907 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-01 17:16:16.612231721 -0800 PST 2017-02-01 17:16:16.570699293 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1259

Issues about this test specifically: #31697 #36574 #39785

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1074/
Multiple broken tests:

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting sleeping pod which mounts an NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc42031f560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:777
Feb  1 22:11:58.192: Failed to get Service "mutability-test": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-6n9hr/services/mutability-test\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/732199677957/bootstrap-e2e/authorize: http2: server sent GOAWAY and closed the connection; LastStreamID=3731, ErrCode=NO_ERROR, debug=\"max_age\"") has prevented the request from succeeding (get services mutability-test)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:410

Issues about this test specifically: #26134

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting a pod accessing the NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc42031f560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1075/
Multiple broken tests:

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting sleeping pod which mounts an NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc420383d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb  2 03:34:31.058: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-73501d01-zkqp:
 container "runtime": expected RSS memory (MB) < 314572800; got 315318272
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting a pod accessing the NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc420383d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1076/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc422b023d0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:79

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc423900000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:79

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb  2 08:18:53.566: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-73501d01-rvgf:
 container "runtime": expected RSS memory (MB) < 314572800; got 318636032
node gke-bootstrap-e2e-default-pool-73501d01-zkqp:
 container "runtime": expected RSS memory (MB) < 314572800; got 317489152
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting sleeping pod which mounts an NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected error:
    <*errors.errorString | 0xc42031ed90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:282

Failed: [k8s.io] kubelet [k8s.io] host cleanup with volume mounts [HostCleanup] Host cleanup after pod using NFS mount is deleted [Volume][NFS] after deleting the nfs-server, the host should be cleaned-up when deleting a pod accessing the NFS vol [Serial] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:473
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:233

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc4226f65c0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:79

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:101
Expected error:
    <*errors.errorString | 0xc4239c4bd0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:79

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1675/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 23 19:31:22.808: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-r4mm:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105623552, container "runtime": expected RSS memory (MB) < 314572800; got 319037440
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 104857600; got 110469120, container "runtime": expected RSS memory (MB) < 314572800; got 332185600
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 23 20:32:20.009: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-r4mm:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74977280
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82956288
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:208
Feb 23 18:02:05.012: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:204
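
"Failed to evict Pod" here means a pod was still running after the suite's wait following a NoExecute taint on its node. A hedged sketch of the objects this test family exercises; the key and values are illustrative stand-ins, not necessarily the suite's own constants:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	taint := v1.Taint{
		Key:    "example.com/e2e-evict-taint-key", // hypothetical key
		Value:  "evictTaintVal",
		Effect: v1.TaintEffectNoExecute, // evicts running pods, not just new ones
	}

	// A finite toleration delays rather than prevents eviction, which is
	// what the "finite tolerations" variant further down this thread checks.
	seconds := int64(60)
	toleration := v1.Toleration{
		Key:               taint.Key,
		Operator:          v1.TolerationOpEqual,
		Value:             taint.Value,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	fmt.Printf("taint=%+v\ntoleration=%+v\n", taint, toleration)
}
```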

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1676/
Multiple broken tests:

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:208
Feb 24 01:58:27.486: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:204

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 23 23:27:12.241: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-r4mm:
 container "kubelet": expected RSS memory (MB) < 104857600; got 106762240, container "runtime": expected RSS memory (MB) < 314572800; got 322154496
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 104857600; got 109731840, container "runtime": expected RSS memory (MB) < 314572800; got 330489856
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 24 00:00:26.256: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-qshk:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75718656
node gke-bootstrap-e2e-default-pool-94b1def0-r4mm:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76443648
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 73400320; got 83824640
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1686/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 24 15:27:10.905: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-qshk:
 container "kubelet": expected RSS memory (MB) < 104857600; got 112377856
node gke-bootstrap-e2e-default-pool-94b1def0-r4mm:
 container "kubelet": expected RSS memory (MB) < 104857600; got 109350912, container "runtime": expected RSS memory (MB) < 314572800; got 331726848
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 104857600; got 110903296, container "runtime": expected RSS memory (MB) < 314572800; got 334995456
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 24 17:06:12.653: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-qshk:
 container "kubelet": expected RSS memory (MB) < 73400320; got 86130688
node gke-bootstrap-e2e-default-pool-94b1def0-r4mm:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78430208
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 73400320; got 84705280
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:208
Feb 24 20:08:18.897: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:204

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1692/
Multiple broken tests:

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Feb 26 12:08:45.600: Couldn't delete ns: "e2e-tests-nslifetest-3-5r0f8": Operation cannot be fulfilled on namespaces "e2e-tests-nslifetest-3-5r0f8": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nslifetest-3-5r0f8\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc4224eceb0), Code:409}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276

Issues about this test specifically: #27957

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 26 13:30:44.108: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 104857600; got 115998720, container "runtime": expected RSS memory (MB) < 314572800; got 353509376
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 26 13:55:18.039: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87101440
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1695/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 27 10:14:22.476: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88330240
node gke-bootstrap-e2e-default-pool-94b1def0-sw5r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77107200
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:208
Feb 27 04:39:16.177: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:204

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 27 05:12:15.838: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 104857600; got 117243904, container "runtime": expected RSS memory (MB) < 314572800; got 356151296
node gke-bootstrap-e2e-default-pool-94b1def0-sw5r:
 container "runtime": expected RSS memory (MB) < 314572800; got 319053824
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1697/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:208
Feb 27 18:34:30.655: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:204

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 27 19:10:09.601: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-sw5r:
 container "kubelet": expected RSS memory (MB) < 104857600; got 107642880, container "runtime": expected RSS memory (MB) < 314572800; got 322969600
node gke-bootstrap-e2e-default-pool-94b1def0-vdl9:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105811968, container "runtime": expected RSS memory (MB) < 314572800; got 321056768
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 104857600; got 117272576, container "runtime": expected RSS memory (MB) < 314572800; got 364191744
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Feb 27 23:30:34.552: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-94b1def0-sw5r:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79347712
node gke-bootstrap-e2e-default-pool-94b1def0-vdl9:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74444800
node gke-bootstrap-e2e-default-pool-94b1def0-zb7w:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88219648, container "runtime": expected RSS memory (MB) < 131072000; got 133038080
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1699/
Multiple broken tests:

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:242
Expected error:
    <*errors.errorString | 0xc4203d2710>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:414

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:189
Feb 28 11:43:43.237: Failed to evict Pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:185

Failed: [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:264
Feb 28 12:08:43.429: Pod wasn't evicted
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:259

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1702/
Multiple broken tests:

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc421812480>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  1 11:34:20.273: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-cbd3c169-zlrc:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74317824
node gke-bootstrap-e2e-default-pool-cbd3c169-pkp6:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75182080
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  1 12:28:28.204: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-cbd3c169-zlrc:
 container "kubelet": expected RSS memory (MB) < 104857600; got 105201664
node gke-bootstrap-e2e-default-pool-cbd3c169-pkp6:
 container "runtime": expected RSS memory (MB) < 314572800; got 317554688, container "kubelet": expected RSS memory (MB) < 104857600; got 107925504
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/1703/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  1 18:24:13.886: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-cbd3c169-pkp6:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77717504
node gke-bootstrap-e2e-default-pool-cbd3c169-zlrc:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78143488
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Mar  1 18:52:06.766: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-cbd3c169-pkp6:
 container "runtime": expected RSS memory (MB) < 314572800; got 322285568, container "kubelet": expected RSS memory (MB) < 104857600; got 108126208
node gke-bootstrap-e2e-default-pool-cbd3c169-zlrc:
 container "kubelet": expected RSS memory (MB) < 104857600; got 110219264
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:70
Expected error:
    <*errors.errorString | 0xc423169120>: {
        s: "Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for replication controller kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:68

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2128/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
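
The skip expression in that command is an ordinary Go regexp handed to Ginkgo: any spec whose name carries a [Disruptive], [Flaky], or [Feature:...] tag is excluded from the run, and exit status 1 simply means at least one remaining spec failed. A quick check of the pattern (the first sample name is from this thread, the second is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	skip := regexp.MustCompile(`\[Disruptive\]|\[Flaky\]|\[Feature:.+\]`)
	for _, name := range []string{
		"[k8s.io] PreStop should call prestop when killing a pod [Conformance]",
		"[k8s.io] Restart [Disruptive] should restart all nodes", // hypothetical
	} {
		fmt.Printf("skip=%v  %s\n", skip.MatchString(name), name)
	}
}
```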

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:62
Apr 19 03:08:31.122: Number of replicas has changed: expected 3, got 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:351

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Expected error:
    <*errors.StatusError | 0xc420ee0600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 19 05:24:31.622: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2129/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 19 13:56:42.160: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74186752
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 19 14:08:57.586: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2130/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 19 17:17:36.130: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 19 21:24:06.197: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74555392
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-q1hz
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2131/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 02:36:47.165: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74592256
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76619776
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 19 23:46:38.478: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2132/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 07:00:49.969: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78950400
node gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77963264
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 20 08:36:15.510: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2133/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 13:30:51.819: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78491648
node gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80224256
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 16:48:16.314: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "runtime": expected RSS memory (MB) < 314572800; got 318013440
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2134/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 22:51:33.533: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 74547200
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82354176
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 20 23:47:11.715: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "runtime": expected RSS memory (MB) < 314572800; got 324022272
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2135/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 21 09:42:40.080: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85041152
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 21 12:17:28.967: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Expected error:
    <*errors.StatusError | 0xc42131e080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297
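
The structured dump above is a client-go *errors.StatusError rendered by Gomega: the apiserver's node proxy to cadvisor on port 4194 returned EOF, surfaced as a 503 with Reason InternalError. A sketch of how such an error is constructed and classified, using real apimachinery helpers but a trimmed message (assumption: only the fields shown matter for classification):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Minimal StatusError mirroring the cadvisor proxy failure above.
	err := &apierrors.StatusError{ErrStatus: metav1.Status{
		Status:  metav1.StatusFailure,
		Reason:  metav1.StatusReasonInternalError,
		Code:    503,
		Message: "an error on the server has prevented the request from succeeding",
	}}
	// apierrors helpers classify by Reason, not by HTTP code.
	fmt.Println(apierrors.IsInternalError(err)) // true
}
```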

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 21 07:11:12.577: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "runtime": expected RSS memory (MB) < 314572800; got 326705152
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2137/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 22 00:22:39.819: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 22 00:46:34.304: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "runtime": expected RSS memory (MB) < 314572800; got 327741440
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 21 22:46:40.179: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85995520
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2138/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 22 06:04:28.017: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "runtime": expected RSS memory (MB) < 314572800; got 325677056
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 22 08:05:07.299: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79867904
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 87990272
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2140/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 22 23:17:03.639: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81330176
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 86589440
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 23 01:14:17.064: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 02:43:11.877: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "runtime": expected RSS memory (MB) < 314572800; got 332054528
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2141/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 08:10:29.969: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81805312
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 88117248
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 09:14:27.773: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "runtime": expected RSS memory (MB) < 314572800; got 333930496
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2142/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 19:58:26.007: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "runtime": expected RSS memory (MB) < 314572800; got 314871808
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2143/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 21:30:55.190: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "runtime": expected RSS memory (MB) < 314572800; got 320643072
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 23 22:00:54.694: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76263424
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 23 22:14:12.763: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2144/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 06:38:40.010: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "runtime": expected RSS memory (MB) < 314572800; got 343494656
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 24 08:16:37.935: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 09:36:29.826: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-szxj:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76984320
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 76288000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2145/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
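
Nearly every failure in this run reports the same generic "timed out waiting for the condition": that is the error string returned by the wait helpers in k8s.io/apimachinery when a polled condition never becomes true, and the e2e framework surfaces it for many unrelated underlying causes once the cluster is unhealthy. A minimal sketch of how that error arises (interval and timeout values are arbitrary here):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition that is never satisfied; wait.Poll then returns
	// its timeout error, whose text matches the failures above.
	err := wait.Poll(100*time.Millisecond, time.Second, func() (bool, error) {
		return false, nil // condition not yet met, keep polling
	})
	fmt.Println(err) // timed out waiting for the condition
}
```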

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #29828

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #34104

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] ReplicaSet should adopt matching pods on creation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #29461

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #37502

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #35793

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #36706

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Projected should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc421f25cd0>: {
        s: "Namespace e2e-tests-deployment-k07fw is active",
    }
    Namespace e2e-tests-deployment-k07fw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Issues about this test specifically: #29516

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #32936

Failed: [k8s.io] Sysctls should support sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc421baeb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-svcaccounts-ln22p/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-svcaccounts-ln22p/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-svcaccounts-ln22p/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #31889 #36293
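
The failing URL in this dump is the interesting part: the framework is opening a watch on the namespace's "default" ServiceAccount (note the fieldSelector=metadata.name%3Ddefault&watch=true query) and the apiserver is answering 500, so the failure sits in the control plane, not in the test body. A minimal client-go sketch of the same request shape (assumptions: a recent client-go where Watch takes a context, a reachable kubeconfig, and the namespace name copied from the dump above purely for illustration):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the default kubeconfig (~/.kube/config).
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(config)

    	// Same request as the failing URL: watch one namespace's
    	// "default" ServiceAccount via a field selector.
    	w, err := clientset.CoreV1().ServiceAccounts("e2e-tests-svcaccounts-ln22p").
    		Watch(context.TODO(), metav1.ListOptions{
    			FieldSelector: "metadata.name=default",
    		})
    	if err != nil {
    		panic(err) // a 500 here reproduces the StatusError captured above
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		fmt.Println(ev.Type)
    	}
    }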

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.StatusError | 0xc4218f9f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-namespaces-j20kw/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-namespaces-j20kw/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-namespaces-j20kw/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27957

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc42256bac0>: {
        s: "Namespace e2e-tests-deployment-k07fw is active",
    }
    Namespace e2e-tests-deployment-k07fw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:81

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #32830

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:162
Expected error:
    <*errors.errorString | 0xc422806190>: {
        s: "Namespace e2e-tests-deployment-k07fw is active",
    }
    Namespace e2e-tests-deployment-k07fw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/taints_test.go:161

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Projected should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Volume Placement [Volume] should create and delete pod with multiple volumes from different datastore {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:641
Expected error:
    <*errors.StatusError | 0xc422308400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/authorization.k8s.io/v1beta1/subjectaccessreviews\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (post subjectaccessreviews.authorization.k8s.io)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "authorization.k8s.io",
                Kind: "subjectaccessreviews",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/authorization.k8s.io/v1beta1/subjectaccessreviews\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/authorization.k8s.io/v1beta1/subjectaccessreviews\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (post subjectaccessreviews.authorization.k8s.io)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:592

Failed: [k8s.io] CronJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #31873

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc42254be20>: {
        s: "deployment \"test-rollback-no-revision-deployment\" failed to create new replica set",
    }
    deployment "test-rollback-no-revision-deployment" failed to create new replica set
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:748

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28346

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:85
Expected error:
    <*errors.errorString | 0xc4220d4520>: {
        s: "Namespace e2e-tests-pod-disks-rsxz2 is active",
    }
    Namespace e2e-tests-pod-disks-rsxz2 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:81

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #34367

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] HostPath should support subPath [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27703 #32981 #35286

Failed: [k8s.io] ESIPP [Slow] should work for type=LoadBalancer {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Volume Placement [Volume] should create and delete pod with the same volume source attach/detach to different worker nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #37017

Failed: [k8s.io] HostPath should support r/w [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #27503

Failed: [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Projected should provide container's cpu request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #35422

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #28084

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should not provision a volume in an unmanaged GCE zone. [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #31918

Failed: [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 24 14:29:02.051: Couldn't delete ns: "e2e-tests-pv-9ffqb": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-pv-9ffqb/statefulsets\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-pv-9ffqb/statefulsets\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4225040a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:274

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:105
Expected error:
    <*errors.errorString | 0xc4232d91e0>: {
        s: "Namespace e2e-tests-deployment-k07fw is active",
    }
    Namespace e2e-tests-deployment-k07fw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #30264

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] PodPreset should create a pod preset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] Volume Placement [Volume] should create and delete pod with multiple volumes from same datastore {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:190

Issues about this test specifically: #33883

Failed: [k8s.io] Volume Disk Format [Volumes] verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Expected error:
    <*errors.errorString | 0xc420417940>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2146/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 21:10:02.814: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "runtime": expected RSS memory (MB) < 314572800; got 408084480
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334
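
A unit note for all of the kubelet_perf dumps in this thread: despite the "(MB)" label, both sides of the comparison are raw bytes. 314572800 bytes is exactly 300 MiB, so in the dump above the runtime container blew a 300 MiB cap with roughly 389 MiB resident. A worked conversion, using only the numbers quoted above:

    package main

    import "fmt"

    func main() {
    	const miB = 1 << 20 // 1048576 bytes

    	limit := 314572800.0 // the "expected RSS memory (MB) <" threshold, in bytes
    	got := 408084480.0   // the observed runtime RSS from the dump above, in bytes

    	fmt.Printf("limit: %.0f MiB\n", limit/miB) // 300 MiB
    	fmt.Printf("got:   %.1f MiB\n", got/miB)   // ~389.2 MiB
    }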

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 22:15:00.813: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82640896, container "runtime": expected RSS memory (MB) < 131072000; got 174796800
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 77422592
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 24 15:52:51.554: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307
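
For readers unfamiliar with this Gomega output: "Expected X not to equal X" means the probe pod, which carries a pod-anti-affinity term against an already-running labeled pod, was nevertheless scheduled onto that pod's node. A sketch of the anti-affinity stanza shape the test exercises (the term structure is the same whether the test uses the required field or the weighted preferred variant; the label values here are illustrative, the real selector lives in test/e2e/scheduling/priorities.go):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Anti-affinity: avoid nodes already running pods labeled
    	// security=S1; topologyKey pins "same place" to the hostname.
    	aff := &v1.Affinity{
    		PodAntiAffinity: &v1.PodAntiAffinity{
    			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
    				LabelSelector: &metav1.LabelSelector{
    					MatchLabels: map[string]string{"security": "S1"},
    				},
    				TopologyKey: "kubernetes.io/hostname",
    			}},
    		},
    	}
    	fmt.Println(aff.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
    }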

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2147/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 25 04:11:04.189: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfnt:
 container "kubelet": expected RSS memory (MB) < 73400320; got 78962688
node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "kubelet": expected RSS memory (MB) < 73400320; got 86564864, container "runtime": expected RSS memory (MB) < 131072000; got 192147456
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 24 23:01:24.037: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c94ad8f-tfc8:
 container "runtime": expected RSS memory (MB) < 314572800; got 412180480
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 25 01:58:30.787: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-xg08
not to equal
    <string>: gke-bootstrap-e2e-default-pool-1c94ad8f-xg08
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2153/
Multiple broken tests:

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-d5710f40-xs9t
not to equal
    <string>: gke-bootstrap-e2e-default-pool-d5710f40-xs9t
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 01:28:04.041: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d5710f40-qbtl:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75489280
node gke-bootstrap-e2e-default-pool-d5710f40-t290:
 container "kubelet": expected RSS memory (MB) < 73400320; got 79351808
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
Apr 26 22:12:00.818: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 00:14:56.027: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d5710f40-qbtl:
 container "runtime": expected RSS memory (MB) < 314572800; got 318734336
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2154/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 09:34:45.751: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d5710f40-t290:
 container "kubelet": expected RSS memory (MB) < 73400320; got 81289216
node gke-bootstrap-e2e-default-pool-d5710f40-xs9t:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73515008
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 08:12:28.788: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d5710f40-t290:
 container "runtime": expected RSS memory (MB) < 314572800; got 317984768
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-d5710f40-xs9t
not to equal
    <string>: gke-bootstrap-e2e-default-pool-d5710f40-xs9t
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2155/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 17:03:40.936: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d5710f40-xs9t:
 container "kubelet": expected RSS memory (MB) < 73400320; got 75456512
node gke-bootstrap-e2e-default-pool-d5710f40-s0cx:
 container "kubelet": expected RSS memory (MB) < 73400320; got 73461760
node gke-bootstrap-e2e-default-pool-d5710f40-t290:
 container "kubelet": expected RSS memory (MB) < 73400320; got 82206720
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-d5710f40-xs9t
not to equal
    <string>: gke-bootstrap-e2e-default-pool-d5710f40-xs9t
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Apr 27 13:24:30.249: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d5710f40-t290:
 container "runtime": expected RSS memory (MB) < 314572800; got 317267968
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2156/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37435

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34317

Failed: [k8s.io] Job should adopt matching orphans and release non-matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes:vsphere should test that deleting the PV before the pod does not cause pod deletion to fail on vspehre volume detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] CronJob should delete successful finished jobs with limit of one successful job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29710

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37027

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #37314

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] CronJob should remove from active list jobs that have been deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Volume Disk Format [Volumes] verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35793

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36948

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] ESIPP [Slow] should work from pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29467

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36242

Failed: [k8s.io] PodPreset should not modify the pod on conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Garbage collector should orphan pods created by rc if delete options say so {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] vsphere Volume fstype [Volume] verify disk format type - default value should be ext4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Projected should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Firewall rule should have correct firewall rules for e2e cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29828

Failed: [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] vsphere Volume fstype [Volume] verify fstype - ext3 formatted volume {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31918

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34250

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Multi-AZ Clusters should spread the pods of a service across zones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34122

Failed: [k8s.io] Secrets should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35473

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #40977

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #32644

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ESIPP [Slow] should only target nodes with endpoints {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31936

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35579

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] CronJob should adopt Jobs it owns that don't have ControllerRef yet {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36457

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34367

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ReplicationController should release no longer matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Pod Disks should be able to delete a non-existent PD without error {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should provision storage with different parameters [Slow] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35279

Failed: [k8s.io] ReplicaSet should release no longer matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] PrivilegedPod should enable privileged commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29050

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #36569 #38446
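
For context on what this autoscaler is expected to do: kube-dns-autoscaler runs the cluster-proportional-autoscaler, which in its linear mode sizes kube-dns as the larger of a per-core and a per-node target. A minimal sketch, assuming linear mode with its coresPerReplica/nodesPerReplica parameters (the cluster sizes below are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// replicas mirrors the linear scaling rule: the larger of the
// core-based and node-based targets wins.
func replicas(cores, nodes int, coresPerReplica, nodesPerReplica float64) int {
	byCores := int(math.Ceil(float64(cores) / coresPerReplica))
	byNodes := int(math.Ceil(float64(nodes) / nodesPerReplica))
	if byCores > byNodes {
		return byCores
	}
	return byNodes
}

func main() {
	// A 3-node, 12-core cluster with 256 cores-per-replica and
	// 16 nodes-per-replica yields a single kube-dns replica.
	fmt.Println(replicas(12, 3, 256, 16)) // 1
}
```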

Failed: [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #32753 #34676
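
The "enough pods, absolute" case creates a PodDisruptionBudget whose minAvailable is an absolute pod count, plus enough healthy pods to leave headroom, so the eviction API must permit one disruption. The underlying arithmetic, as a sketch (pod counts are illustrative):

```go
package main

import "fmt"

// evictionAllowed mirrors the PodDisruptionBudget check for an absolute
// minAvailable: an eviction may proceed only if the healthy count would
// still meet the budget afterwards.
func evictionAllowed(currentHealthy, minAvailable int) bool {
	return currentHealthy-1 >= minAvailable
}

func main() {
	// 3 healthy pods against minAvailable=2 leaves one disruption
	// allowed, so the eviction should succeed.
	fmt.Println(evictionAllowed(3, 2)) // true
	fmt.Println(evictionAllowed(2, 2)) // false
}
```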

Failed: [k8s.io] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.errorString | 0xc420416ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2355/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:62
May  2 09:41:21.312: Number of replicas has changed: expected 3, got 4
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:351

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082
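
The "decision stability" half of this test holds the load steady and expects the replica count to stay at 3. The documented HPA rule, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), shows how a small transient overshoot flips the decision to 4. A sketch with illustrative utilization numbers:

```go
package main

import (
	"fmt"
	"math"
)

// desired applies the documented HPA rule:
// desiredReplicas = ceil(currentReplicas * currentUtil / targetUtil).
func desired(current int, currentUtil, targetUtil float64) int {
	return int(math.Ceil(float64(current) * currentUtil / targetUtil))
}

func main() {
	// Steady state: 3 replicas exactly at a 50% target stay at 3.
	fmt.Println(desired(3, 50, 50)) // 3
	// A brief overshoot to 60% pushes the decision to ceil(3.6) = 4,
	// which is the "expected 3, got 4" instability the test flags.
	fmt.Println(desired(3, 60, 50)) // 4
}
```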

Failed: [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:142
May  2 11:39:38.299: Pods are not spread to each node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:140

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
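
For reference, the --ginkgo.skip value here is a regular expression matched against each full spec name; any match is excluded from the run. A quick way to see what a given pattern skips (the second spec name is an illustrative [Disruptive] example):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// --ginkgo.skip is matched as a regexp against full spec names.
	skip := regexp.MustCompile(`\[Disruptive\]|\[Flaky\]|\[Feature:.+\]`)
	for _, name := range []string{
		"[k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow]",
		"[k8s.io] Reboot [Disruptive] each node by ordering clean reboot", // illustrative
	} {
		fmt.Printf("skip=%v  %s\n", skip.MatchString(name), name)
	}
}
```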

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-nzmb
not to equal
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-nzmb
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307
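
The oddly shaped "Expected X not to equal X" output is Gomega's NotTo(Equal(...)) failure rendering: the test asserts the pod landed on a different node than the one its anti-affinity term should repel it from, and the two identical strings mean it did not. A minimal sketch of how that message arises (assuming a standalone Gomega fail handler; node names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/onsi/gomega"
)

func main() {
	// The e2e framework fails a test by calling a registered handler;
	// here the handler just prints the message instead.
	gomega.RegisterFailHandler(func(message string, _ ...int) {
		fmt.Println(message)
	})

	scheduledNode := "gke-node-a" // where the pod actually landed
	avoidNode := "gke-node-a"     // the node the anti-affinity term targets

	// With both names identical, Gomega prints the
	// "Expected <string>: X not to equal <string>: X" message quoted above.
	gomega.Expect(scheduledNode).NotTo(gomega.Equal(avoidNode))
}
```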

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2359/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:97
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #36178
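
Each "Granular Checks" variant names the path it exercises: pod-Service dials the service's cluster IP from a test pod, node-Service dials a node address, endpoint-Service targets the backing pods directly, each over HTTP or UDP as labeled. A rough stand-in for the HTTP flavor of such a probe, not the suite's actual helper (the address is hypothetical; the real loop lives in networking_utils.go):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe hits an endpoint until it answers or the deadline passes; when
// connectivity is broken, this is the shape of loop that ends in the
// "timed out waiting for the condition" failures quoted above.
func probe(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	// Hypothetical cluster IP and port of the service under test.
	fmt.Println(probe("http://10.0.0.12:8080/", 10*time.Second))
}
```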

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:106
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #34317

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:142
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #34064

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1022
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3912

Issues about this test specifically: #37274
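
NodePorts come out of a finite range (30000-32767 by default, set via --service-node-port-range), which is why this test checks that deleting a service returns its port to the pool. A toy allocator sketch of that invariant (the allocator itself is hypothetical, not the apiserver's implementation):

```go
package main

import "fmt"

// A toy first-fit allocator over the default NodePort range: a service
// that never releases its port would eventually exhaust the pool.
type allocator struct{ used map[int]bool }

func newAllocator() *allocator { return &allocator{used: map[int]bool{}} }

func (a *allocator) allocate() (int, bool) {
	for p := 30000; p <= 32767; p++ {
		if !a.used[p] {
			a.used[p] = true
			return p, true
		}
	}
	return 0, false
}

func (a *allocator) release(p int) { delete(a.used, p) }

func main() {
	a := newAllocator()
	p, _ := a.allocate()
	fmt.Println(p) // 30000
	a.release(p)   // deleting the service must return the port to the pool
	p, _ = a.allocate()
	fmt.Println(p) // 30000 again
}
```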

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:153
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #33887

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:133
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #34104

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:115
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  3 16:36:39.938: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9a9e5ae3-nzmb:
 container "runtime": expected RSS memory (MB) < 314572800; got 319422464
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334
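
A note on the units in these messages: despite the "(MB)" label, both the limit and the observed value are raw byte counts. The limits decode to round MiB figures, which makes the overshoot easy to gauge; a quick check, using the observed value from the failure above:

```go
package main

import "fmt"

func main() {
	const mib = 1 << 20 // 1 MiB = 1048576 bytes
	// The quoted limits are byte counts:
	// 314572800 B = 300 MiB (runtime), 73400320 B = 70 MiB (kubelet).
	fmt.Println(314572800/mib, 73400320/mib) // 300 70
	// The observed 319422464 B is ~304.6 MiB, i.e. only ~1.5% over.
	fmt.Printf("%.1f\n", float64(319422464)/mib)
}
```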

Failed: [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:108
Expected
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-nrdg
to equal
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-pk96
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:107
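
This test pre-loads all but one node and expects the scheduler to place a fresh pod on the least-utilized node. A sketch of the least-requested scoring rule the scheduler applied around this release (the formula and values are an illustrative approximation of LeastRequestedPriority, not a quote of the source):

```go
package main

import "fmt"

// leastRequestedScore mirrors the classic rule: each resource scores
// (capacity - requested) * 10 / capacity, and the node's score is the
// CPU/memory average. Higher means more idle, so the test expects the
// pod to land on the emptiest node.
func leastRequestedScore(cpuReq, cpuCap, memReq, memCap int64) int64 {
	cpu := (cpuCap - cpuReq) * 10 / cpuCap
	mem := (memCap - memReq) * 10 / memCap
	return (cpu + mem) / 2
}

func main() {
	// A half-loaded node scores below a nearly idle one (illustrative
	// millicore/MiB figures).
	fmt.Println(leastRequestedScore(2000, 4000, 2048, 8192)) // 6
	fmt.Println(leastRequestedScore(200, 4000, 512, 8192))   // 9
}
```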

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:164
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #34250

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:176
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:188
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #33285

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:468
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3912

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:124
Expected error:
    <*errors.errorString | 0xc4202ac780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:430

Issues about this test specifically: #36271

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-v4v2
not to equal
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-v4v2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  3 17:17:20.346: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9a9e5ae3-nzmb:
 container "kubelet": expected RSS memory (MB) < 73400320; got 80056320
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-soak-gke-test/2360/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Expected
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-v4v2
not to equal
    <string>: gke-bootstrap-e2e-default-pool-9a9e5ae3-v4v2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:307

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  4 02:29:44.875: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9a9e5ae3-nzmb:
 container "runtime": expected RSS memory (MB) < 314572800; got 324902912
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
May  4 04:09:46.848: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9a9e5ae3-nzmb:
 container "kubelet": expected RSS memory (MB) < 73400320; got 85917696
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

@thockin thockin removed this from the v1.6.1 milestone May 27, 2017
@k8s-github-robot
Copy link
Author

This Issue hasn't been active in 77 days. It will be closed in 12 days (Jun 18, 2017).

cc @k8s-merge-robot @krousey

You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days

@dchen1107
Copy link
Member

The issue is stale. Closing it.

@dchen1107
Copy link
Member

The issue is stale and has seen no activity for a long time. Closing it.
