ci-kubernetes-e2e-gci-gke-staging: broken test run #43037

Closed
k8s-github-robot opened this issue Mar 14, 2017 · 50 comments
Labels
kind/flake Categorizes issue or PR as related to a flaky test. sig/testing Categorizes an issue or PR as relevant to SIG Testing.

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2407/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422d1c7b0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225eb0d0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421abc180>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42284cdd0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d545e0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42038c8b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #32936

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected error:
    <*errors.StatusError | 0xc4210a2f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-node-problem-detector-0gf90/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-node-problem-detector-0gf90/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-node-problem-detector-0gf90/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:230

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f0d3d0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42196ad70>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a22ae0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
Mar 13 11:07:37.298: Failed to delete pod "pod-service-account-3a4505b5-0817-11e7-9f26-0242ac110008-rvwnw": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-svcaccounts-kfv90/pods/pod-service-account-3a4505b5-0817-11e7-9f26-0242ac110008-rvwnw\"") has prevented the request from succeeding (delete pods pod-service-account-3a4505b5-0817-11e7-9f26-0242ac110008-rvwnw)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:118

Issues about this test specifically: #37526

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217529a0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42284e350>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc4215fa830>: {
        s: "expected pod \"pod-configmaps-d2d05125-0810-11e7-9f26-0242ac110008\" success: gave up waiting for pod 'pod-configmaps-d2d05125-0810-11e7-9f26-0242ac110008' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-d2d05125-0810-11e7-9f26-0242ac110008" success: gave up waiting for pod 'pod-configmaps-d2d05125-0810-11e7-9f26-0242ac110008' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27245

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4232064e0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209cd7e0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42290ec70>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d388b0>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225d7a40>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223c8260>: {
        s: "Namespace e2e-tests-svcaccounts-kfv90 is active",
    }
    Namespace e2e-tests-svcaccounts-kfv90 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Previous issues for this suite: #37152 #37329 #38077 #40676
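
Note: almost every SchedulerPredicates failure above is the same pre-test check tripping at scheduler_predicates.go:78: the leftover e2e-tests-svcaccounts-kfv90 namespace was still Active when the suite required earlier test namespaces to be gone, which is consistent with the ServiceAccounts failure above, where deleting a pod in that same namespace hit an Internal Server Error. Below is a minimal diagnostic sketch for spotting leftover active e2e namespaces on the affected cluster, assuming a current client-go and a kubeconfig for that cluster; the program and flag names are illustrative and not part of the e2e framework.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path to a kubeconfig for the affected cluster (illustrative flag name).
	kubeconfig := flag.String("kubeconfig", "", "path to kubeconfig")
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nsList, err := client.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range nsList.Items {
		// e2e test namespaces are created with the "e2e-tests-" prefix; any that
		// are still Active will keep tripping the SchedulerPredicates pre-check.
		if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Status.Phase == v1.NamespaceActive {
			fmt.Printf("leftover namespace still active: %s\n", ns.Name)
		}
	}
}
```

Any namespace this prints would keep blocking the SchedulerPredicates pre-check until it finishes terminating.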

k8s-github-robot added the kind/flake and priority/P2 labels Mar 14, 2017
calebamiles modified the milestones: v1.6.1, v1.6 Mar 14, 2017
@ethernetdan
Contributor

Suite seems stable now, so this will move to v1.7, but I'll keep an eye on it.

ethernetdan modified the milestones: v1.7, v1.6 Mar 14, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2413/
Multiple broken tests:

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc423182010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:188
Expected error:
    <*errors.errorString | 0xc4203d39b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:101

Issues about this test specifically: #36564

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42272c0c0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Mar 15 11:25:14.589: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc421ae6360>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:14, Replicas:5, UpdatedReplicas:5, AvailableReplicas:4, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63625199759, nsec:0, loc:(*time.Location)(0x3cf1200)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63625199759, nsec:0, loc:(*time.Location)(0x3cf1200)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63625199767, nsec:0, loc:(*time.Location)(0x3cf1200)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63625199743, nsec:0, loc:(*time.Location)(0x3cf1200)}}, Reason:\"NewReplicaSetAvailable\", Message:\"Replica set \\\"nginx-4227388870\\\" has successfully progressed.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:14, Replicas:5, UpdatedReplicas:5, AvailableReplicas:4, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63625199759, nsec:0, loc:(*time.Location)(0x3cf1200)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63625199759, nsec:0, loc:(*time.Location)(0x3cf1200)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63625199767, nsec:0, loc:(*time.Location)(0x3cf1200)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63625199743, nsec:0, loc:(*time.Location)(0x3cf1200)}}, Reason:"NewReplicaSetAvailable", Message:"Replica set \"nginx-4227388870\" has successfully progressed."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Mar 15 12:44:21.818: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
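
Note: the Deployment, StatefulSet, and Pods failures in this run all reduce to pods never reaching Running before the wait timed out. A quick way to see why pods are stuck is to dump the events recorded for every Pending pod in the test namespace; the following is a hedged sketch using a current client-go (helper name and output format are illustrative, not the e2e framework's own code).

```go
package e2ediag

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// pendingPodEvents prints the events recorded for every Pending pod in a
// namespace; those events usually explain why "failed to wait for pods
// running" timed out (unschedulable pods, image pull errors, node problems).
// Illustrative diagnostic only, not part of the e2e framework.
func pendingPodEvents(client kubernetes.Interface, namespace string) error {
	pods, err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		if pod.Status.Phase != v1.PodPending {
			continue
		}
		fmt.Printf("pod %s is Pending\n", pod.Name)
		events, err := client.CoreV1().Events(namespace).List(context.TODO(), metav1.ListOptions{
			FieldSelector: fmt.Sprintf("involvedObject.kind=Pod,involvedObject.name=%s", pod.Name),
		})
		if err != nil {
			return err
		}
		for _, ev := range events.Items {
			fmt.Printf("  %s %s: %s\n", ev.Type, ev.Reason, ev.Message)
		}
	}
	return nil
}
```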

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2414/
Multiple broken tests:

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421049910>: {
        s: "error restarting nodes: error running gcloud [compute --project=k8s-jkns-e2e-gci-gke-staging instances reset gke-bootstrap-e2e-default-pool-5e99b0cf-9w38 gke-bootstrap-e2e-default-pool-5e99b0cf-dgzm gke-bootstrap-e2e-default-pool-5e99b0cf-sk8p --zone=us-central1-f]; got error exit status 1, stdout \"\", stderr \"Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gci-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-5e99b0cf-dgzm].\\nUpdated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gci-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-5e99b0cf-sk8p].\\nERROR: (gcloud.compute.instances.reset) Some requests did not succeed:\\n - The resource 'projects/k8s-jkns-e2e-gci-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-5e99b0cf-9w38' is not ready\\n\\n\"\nstdout: \nstderr: ",
    }
    error restarting nodes: error running gcloud [compute --project=k8s-jkns-e2e-gci-gke-staging instances reset gke-bootstrap-e2e-default-pool-5e99b0cf-9w38 gke-bootstrap-e2e-default-pool-5e99b0cf-dgzm gke-bootstrap-e2e-default-pool-5e99b0cf-sk8p --zone=us-central1-f]; got error exit status 1, stdout "", stderr "Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gci-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-5e99b0cf-dgzm].\nUpdated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gci-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-5e99b0cf-sk8p].\nERROR: (gcloud.compute.instances.reset) Some requests did not succeed:\n - The resource 'projects/k8s-jkns-e2e-gci-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-5e99b0cf-9w38' is not ready\n\n"
    stdout: 
    stderr: 
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc422c91c90>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
    <*errors.errorString | 0xc421b59c30>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc422bfc4f0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:171
Expected error:
    <*errors.errorString | 0xc422155df0>: {
        s: "expected pod \"downwardapi-volume-d60a9f5c-09fa-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-d60a9f5c-09fa-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-d60a9f5c-09fa-11e7-9123-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-d60a9f5c-09fa-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc42372cc70>: {
        s: "expected pod \"downwardapi-volume-f0f40ad3-0a13-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-f0f40ad3-0a13-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-f0f40ad3-0a13-11e7-9123-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-f0f40ad3-0a13-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc42276d5c0>: {
        s: "expected pod \"pod-secrets-df85ed84-09f5-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-df85ed84-09f5-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-df85ed84-09f5-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-secrets-df85ed84-09f5-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc4236b8050>: {
        s: "expected pod \"downwardapi-volume-96371920-0a12-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-96371920-0a12-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-96371920-0a12-11e7-9123-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-96371920-0a12-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:269
Expected error:
    <*errors.errorString | 0xc42272c3e0>: {
        s: "expected pod \"pod-configmaps-64c2ca88-09f4-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-64c2ca88-09f4-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-64c2ca88-09f4-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-64c2ca88-09f4-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37515

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc421b58b30>: {
        s: "expected pod \"pod-secrets-f9594ec6-09ef-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-secrets-f9594ec6-09ef-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-f9594ec6-09ef-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-secrets-f9594ec6-09ef-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29221

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc421b58780>: {
        s: "expected pod \"pod-configmaps-e45c9e4f-09ee-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-e45c9e4f-09ee-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-e45c9e4f-09ee-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-e45c9e4f-09ee-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29052

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc421a7d4b0>: {
        s: "expected pod \"downwardapi-volume-33107dbb-09f8-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-33107dbb-09f8-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-33107dbb-09f8-11e7-9123-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-33107dbb-09f8-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37423

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc421ba64c0>: {
        s: "expected pod \"pod-0887744b-09f1-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-0887744b-09f1-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-0887744b-09f1-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-0887744b-09f1-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc422ba3fc0>: {
        s: "expected pod \"pod-configmaps-f75a622b-0a05-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-f75a622b-0a05-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-f75a622b-0a05-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-f75a622b-0a05-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #27245

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154
Expected error:
    <*errors.errorString | 0xc420350720>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30352 #38166

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:64
Expected error:
    <*errors.errorString | 0xc421b762c0>: {
        s: "expected pod \"pod-configmaps-9041d523-09ed-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-9041d523-09ed-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-9041d523-09ed-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-9041d523-09ed-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #35790

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc42272c130>: {
        s: "expected pod \"pod-b847aa11-09fd-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-b847aa11-09fd-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-b847aa11-09fd-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-b847aa11-09fd-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #31400

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:69
Expected error:
    <*errors.errorString | 0xc422415b90>: {
        s: "expected pod \"pod-65c82346-0a11-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-65c82346-0a11-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-65c82346-0a11-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-65c82346-0a11-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36183

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc422b030d0>: {
        s: "expected pod \"downwardapi-volume-aae1d164-0a10-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'downwardapi-volume-aae1d164-0a10-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-aae1d164-0a10-11e7-9123-0242ac11000b" success: gave up waiting for pod 'downwardapi-volume-aae1d164-0a10-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36300

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:68
Expected error:
    <*errors.errorString | 0xc421f9e640>: {
        s: "expected pod \"pod-configmaps-b361ed18-09eb-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-configmaps-b361ed18-09eb-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-b361ed18-09eb-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-configmaps-b361ed18-09eb-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc422cac090>: {
        s: "expected pod \"pod-e522acb0-0a0f-11e7-9123-0242ac11000b\" success: gave up waiting for pod 'pod-e522acb0-0a0f-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-e522acb0-0a0f-11e7-9123-0242ac11000b" success: gave up waiting for pod 'pod-e522acb0-0a0f-11e7-9123-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177
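
Note: every failure in this run shows the same symptom: a test pod never reported 'success or failure' within 5m0s, so the problem is pod startup and completion on the nodes rather than the individual volume tests. For reference, the wait that gives up is equivalent to a poll like the sketch below, assuming a current client-go; this mirrors, but is not, the framework helper behind util.go:2177.

```go
package e2ediag

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodCompleted polls a pod until it reaches Succeeded or Failed, the same
// "success or failure" condition that gave up after 5m0s in the failures above.
// Illustrative stand-in only, not the e2e framework's own helper.
func waitPodCompleted(client kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded, v1.PodFailed:
			return true, nil // pod finished; caller can inspect which phase it reached
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
}
```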

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2417/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33887

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3612

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32684 #36278 #37948

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34317

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34250

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203ace70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34064
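
Note: the Granular Checks failures above all point at the same wait in networking_utils.go:423, i.e. the netexec test pods or their service endpoints never became reachable. As a rough approximation of that reachability condition, one could poll a node's NodePort until it accepts connections; the sketch below only checks TCP connectivity, whereas the real test exercises HTTP through the test pods (names and timeouts are illustrative).

```go
package e2ediag

import (
	"net"
	"strconv"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitNodePortReachable dials <nodeIP>:<nodePort> until a TCP connection is
// accepted or the timeout expires. This only approximates the reachability
// condition the networking tests wait on; the real check in
// networking_utils.go exercises HTTP through the netexec test pods.
func waitNodePortReachable(nodeIP string, nodePort int, timeout time.Duration) error {
	addr := net.JoinHostPort(nodeIP, strconv.Itoa(nodePort))
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			return false, nil // not reachable yet, keep polling
		}
		conn.Close()
		return true, nil
	})
}
```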

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2427/
Multiple broken tests:

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:309
Expected error:
    <*errors.errorString | 0xc42038cd00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:283

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:257
Expected error:
    <*errors.errorString | 0xc42038cd00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:246

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42038cd00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.162.23 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-92tkt run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --rm failure-3 -- /bin/sh -c cat && exit 42] []  0xc420afe840 Waiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, 
pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is 
Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, 
status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be 
running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: 
false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, 
pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is 
Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, 
status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be 
running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: 
false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, 
pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is 
Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, 
status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be 
running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod 
e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-92tkt/failure-3-qdr93 to be ru

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2435/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:15:03.434: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e31400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:05:50.190: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222bf400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4208f7020>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc421a66490>: {
        s: "service verification failed for: 10.75.252.38\nexpected [service1-5m22f service1-f6vbk service1-mdkvp]\nreceived []",
    }
    service verification failed for: 10.75.252.38
    expected [service1-5m22f service1-f6vbk service1-mdkvp]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:23:54.392: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229f2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:08:59.007: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f77400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:15:53.603: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222ec000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a04f50>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:54:18.657: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220f3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:57:51.364: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213dc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:50:32.804: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421aa6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 17:59:31.896: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42162f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Expected error:
    <*errors.errorString | 0xc420352ca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36271

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 17:55:58.437: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421342a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 17:52:08.588: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213dc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:35:39.053: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422038000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f5c060>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:20:40.554: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421aaca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:41:45.803: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42153e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 17:17:54.969: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42213e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:33:45.310: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215e6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:18:17.048: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421beca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:22:26.523: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211cf400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc42192a970>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:01:42.539: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421343400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:14:08.902: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421343400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:32:22.806: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ad2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:25:50.024: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421efca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:37:27.837: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214ae000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d5beb0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:38:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2            gke-bootstrap-e2e-default-pool-b9bcb7a4-6ht2 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:25 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-22 15:37:23 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:54:44.593: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f8b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:47:00.151: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f22a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 17:39:47.214: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42104f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 19:17:26.935: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df8a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:02:38.399: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e30a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 17:00:04.958: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421845400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:42:13.840: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216fb400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:29:05.597: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f08a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 17:36:20.386: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b8000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 16:19:06.815: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220bd400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 22 18:30:33.752: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42104f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2437/
Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422b55300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-c4flr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-c4flr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-c4flr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4230e9780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-ssh-hn3m4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-ssh-hn3m4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-ssh-hn3m4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422e10080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-fmppq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-fmppq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-fmppq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30263

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42311d680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-fvv5z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-fvv5z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-fvv5z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422b44980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-99t8v/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-99t8v/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-99t8v/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421da4100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-8q7d4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-8q7d4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-8q7d4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36242

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421d94c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-q74d0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-q74d0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-q74d0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #38174

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421c1a700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-dns-autoscaling-bc72d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-autoscaling-bc72d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-autoscaling-bc72d/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36569 #38446

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc420664280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-7n5wj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-7n5wj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-7n5wj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42164df00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-55clz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-55clz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-55clz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422073b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-cj86r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-cj86r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-cj86r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36564

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422489500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-8rffv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-8rffv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-8rffv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36271

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421c5e280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-fgmpc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-fgmpc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-fgmpc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36178

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421f9df80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-jbpq9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-jbpq9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-jbpq9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421ce1280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-hxlz2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-hxlz2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-hxlz2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34064

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422eee300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-daemonrestart-8ttk9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonrestart-8ttk9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonrestart-8ttk9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421dd7e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-network-htxnj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-network-htxnj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-network-htxnj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42116c000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-m16tm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-m16tm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-m16tm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422919080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-init-container-5q71m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-5q71m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-5q71m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31936

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4216d2e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-container-probe-jrjrc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-jrjrc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-jrjrc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28084

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4230e9780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replicaset-dl34p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replicaset-dl34p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replicaset-dl34p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36554

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422918a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-var-expansion-fk9z8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-fk9z8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-fk9z8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421dd6c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-7w5zm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-7w5zm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-7w5zm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4216d3d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-dm47w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-dm47w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-dm47w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc421931600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-container-probe-7bz90/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-7bz90/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-7bz90/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:60

Issues about this test specifically: #29521

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421f8ba00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-namespaces-8hn71/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-8hn71/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-8hn71/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422dc9f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-8h767/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-8h767/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-8h767/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35790

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422ef1500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-statefulset-9tl3n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-statefulset-9tl3n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-statefulset-9tl3n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #38439

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc423692d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-9p0z3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-9p0z3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-9p0z3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421a7d500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sysctl-lspr2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sysctl-lspr2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sysctl-lspr2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4202b6280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-namespaces-03h8t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-03h8t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-03h8t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27957

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc420321f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pod-disks-tx2pz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-disks-tx2pz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-disks-tx2pz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422eee300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-clientset-g6c8z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-g6c8z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-g6c8z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #42724

Failed: [k8s.io] Multi-AZ Clusters should spread the pods of a replication controller across zones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422bac100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-multi-az-vk6sq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-multi-az-vk6sq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-multi-az-vk6sq/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34247

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421368480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-5lqmd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-5lqmd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-5lqmd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4226b0000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-5jrhp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-5jrhp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-5jrhp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 23 10:53:39.676: Couldn't delete ns: "e2e-tests-replicaset-fw382": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-replicaset-fw382/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-replicaset-fw382/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc4225c7180), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32023

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc423692380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-54212/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-54212/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-54212/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
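
Aside: every dump in this run is a *errors.StatusError with Reason "InternalError" and Code 500, so the flake can be recognized programmatically instead of by grepping the message. A minimal sketch using a current k8s.io/apimachinery (not the vendored copy these tests used); the reconstructed error below only mirrors the shape of the dumps:

    package main

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Reconstruct the shape of the dumped errors for illustration.
        err := &apierrors.StatusError{ErrStatus: metav1.Status{
            Status:  metav1.StatusFailure,
            Message: "an error on the server has prevented the request from succeeding",
            Reason:  metav1.StatusReasonInternalError,
            Code:    500,
        }}

        // True for Reason "InternalError": the failure is on the apiserver side.
        fmt.Println("internal error:", apierrors.IsInternalError(err))

        // RetryAfterSeconds was 0 in every dump, so no explicit back-off hint.
        if delay, ok := apierrors.SuggestsClientDelay(err); ok {
            fmt.Println("server suggested delay (s):", delay)
        }
    }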

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42365e300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-resourcequota-2vk5t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-2vk5t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                       

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2440/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4211cc280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-3jth4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-3jth4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-3jth4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421f35a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-g3195/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-g3195/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-g3195/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29052

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422300a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-e2e-kubelet-etc-hosts-9nxwk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-kubelet-etc-hosts-9nxwk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-kubelet-etc-hosts-9nxwk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37502

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4225b9600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-z0k5s/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-z0k5s/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-z0k5s/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #29710

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4221dea80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-chhhb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-chhhb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-chhhb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4223ee500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-k7kwj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-k7kwj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-k7kwj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4217e7580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-disruption-dcv0v/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-disruption-dcv0v/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-disruption-dcv0v/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42145c880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-z88mj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-z88mj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-z88mj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4225b8a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-namespaces-v1tl8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-v1tl8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-namespaces-v1tl8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27957

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421fe0a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-j5vkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-j5vkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-j5vkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42100b680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-mf94n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-mf94n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-mf94n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37515

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421742f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-52nqx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-52nqx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-52nqx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422860080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-gd2bw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-gd2bw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-gd2bw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #34250

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421c5ab00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-kxmm0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-kxmm0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-wrapper-kxmm0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4208b5c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-mc37c/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-mc37c/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-mc37c/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35473

Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421b74280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-3wq0w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-3wq0w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-3wq0w/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421e50000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-55rth/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-55rth/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-55rth/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37439

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc420911800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-clientset-7tkw3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-7tkw3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-clientset-7tkw3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #42724

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4208c8000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-794g8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-794g8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-794g8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4217d3100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-1xzkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-1xzkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-1xzkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27079

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421850280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-65nm3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-65nm3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-65nm3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421246580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-203z1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-203z1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-203z1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421e80b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-qrj5p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-qrj5p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-qrj5p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36300

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422713580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-job-820vr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-820vr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-job-820vr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421743980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-40v7z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-40v7z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-40v7z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4208b5280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-k31j6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-k31j6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-k31j6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421fe1300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-ffs6h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-ffs6h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-ffs6h/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42133be80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-0pw2p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-0pw2p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-0pw2p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc420f45000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replication-controller-kwxcf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-kwxcf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-kwxcf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc422697500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-f47dr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-f47dr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-f47dr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421fc3600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-ftq6f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-ftq6f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-ftq6f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421ffc100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-restart-s8j0k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-restart-s8j0k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-restart-s8j0k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4224e4980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-disruption-hmvpr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-disruption-hmvpr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-disruption-hmvpr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #37017

Failed: [k8s.io] Sysctls should support sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42150e680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sysctl-wrb2r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sysctl-wrb2r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sysctl-wrb2r/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42145df00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-daemonsets-cjwtf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonsets-cjwtf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-daemonsets-cjwtf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #35277

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4202b5580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-j466m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-j466m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-j466m/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4225e6e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pod-network-test-j81q5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-network-test-j81q5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-network-test-j81q5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #32830

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4202b5980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-secrets-nqz7s/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-nqz7s/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-nqz7s/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc421ffce00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-6641d/serviceaccounts?fieldSelecto

@k8s-github-robot (Author) commented:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2444/
Multiple broken tests:

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 09:09:48.093: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f1c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023
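
The "All nodes should be ready after test" failures in this run are the framework's post-test node check tripping (framework.go:438), not the test bodies themselves. A rough sketch of that check, assuming a recent client-go; this is not the framework's own helper:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // notReadyNodes returns the names of nodes whose NodeReady condition is not True,
    // which is roughly the assertion behind "All nodes should be ready after test".
    func notReadyNodes(ctx context.Context, c kubernetes.Interface) ([]string, error) {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var out []string
        for _, n := range nodes.Items {
            ready := false
            for _, cond := range n.Status.Conditions {
                if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                out = append(out, n.Name)
            }
        }
        return out, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        names, err := notReadyNodes(context.TODO(), kubernetes.NewForConfigOrDie(cfg))
        if err != nil {
            panic(err)
        }
        fmt.Println("not ready nodes:", names)
    }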

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:52:16.369: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42148b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 08:54:54.244: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421468c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31873

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:28:26.819: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bc7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:36:01.723: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420712c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:13:33.287: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420235678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc421d64030>: {
        s: "timeout waiting 10m0s for cluster size to be 2",
    }
    timeout waiting 10m0s for cluster size to be 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:308

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42038c7b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 11:00:55.306: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421432a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:41:05.975: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42208aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:55:34.328: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421328278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:27:39.152: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:58:47.675: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f98c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35793

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 09:28:53.576: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421265678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:47:14.829: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421692278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:35:28.441: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217f7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:53:42.189: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421468c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:48:14.149: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421328c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:41:46.506: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217f7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:37:19.601: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42184e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:03:12.357: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c99678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 11:13:35.346: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421234000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:47:45.091: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421896000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:34:05.077: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b94c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:54:14.325: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421745400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 09:37:33.415: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217ad678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 08:04:38.157: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d58c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:40:36.607: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421670c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 09:13:01.715: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217f7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:57:28.365: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fcca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 08:58:23.303: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42205d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:32:18.444: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f1cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.errorString | 0xc42038c7b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b95b80>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c18f90>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 11:28:13.437: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42148b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 08:37:30.938: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f3f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e12f40>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc42038c7b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:45:26.733: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d8e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:21:53.445: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421738278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 07:45:00.270: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c0ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c14a40>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019
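
All of the scheduler-predicate failures in this run share the same precondition error: the suite refuses to start while any kube-system pod is not Running and Ready, and fluentd, heapster and kube-proxy on node gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v stayed NotReady past the 5m window. A rough client-go sketch of the equivalent check follows (written against a recent client-go, so details such as the context argument differ from the vendored client these tests used; the kubeconfig path is an assumption).

```go
// Sketch only: list kube-system pods that are not both Running and Ready,
// mirroring the precondition the scheduler-predicate tests enforce.
// Assumes a recent client-go and a kubeconfig at the default location.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		if p.Status.Phase != corev1.PodRunning || !podReady(p) {
			fmt.Printf("%s on %s: phase=%s ready=%v\n",
				p.Name, p.Spec.NodeName, p.Status.Phase, podReady(p))
		}
	}
}
```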

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:44:33.545: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a0f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 08:48:07.477: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42179cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:43:56.615: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214ca278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:29:01.082: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421862278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 09:05:57.894: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421710278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:52:20.717: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217f0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:38:56.291: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a56278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:12:02.966: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fccc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:51:03.145: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42147e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 09:02:44.691: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218aa278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 10:37:37.746: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211baa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4202c8a20>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-jxnhg                                 gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:56 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:37:39 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v            gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:27 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-25 03:35:26 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 09:24:40.771: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f1e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:32:16.752: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ad8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 08:40:44.489: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420955678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29461

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 08:51:39.607: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215d0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 05:48:46.395: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f98c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 11:21:47.627: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212c7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 25 06:30:53.592: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218bac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595
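
Most of the remaining entries in this run fail only the post-test node check, and the message prints the offending nodes as `[]*api.Node{(*api.Node)(0xc...)}`, i.e. pointer values rather than names. That is simply how Go's `%#v` verb renders a slice of struct pointers; the tiny sketch below (stand-in `Node` type, not the real `api.Node`) reproduces the formatting and shows how dereferencing recovers the node identity.

```go
// Illustration of the "(*api.Node)(0xc42...)" formatting seen above:
// %#v on a slice of struct pointers prints the pointer values, so the
// node names never make it into the failure message.
package main

import "fmt"

type Node struct{ Name string } // stand-in for api.Node

func main() {
	notReady := []*Node{{Name: "gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v"}}

	// What the logs show: the Go-syntax form of a pointer slice.
	fmt.Printf("%#v\n", notReady) // []*main.Node{(*main.Node)(0xc00009e210)}

	// Dereferencing each element keeps the useful detail.
	for _, n := range notReady {
		fmt.Printf("%#v\n", *n) // main.Node{Name:"gke-bootstrap-e2e-default-pool-49ceb6d1-dx1v"}
	}
}
```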

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2456/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc4225ec800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-8zfnn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-8zfnn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-8zfnn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:148
Error creating Pod
Expected error:
    <*errors.StatusError | 0xc422788980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-container-probe-h57t6/pods\\\"\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-h57t6/pods\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-container-probe-h57t6/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:60

Issues about this test specifically: #37914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42155e5b0>: {
        s: "Namespace e2e-tests-kubectl-8zfnn is active",
    }
    Namespace e2e-tests-kubectl-8zfnn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
failed to get pod
Expected error:
    <*errors.StatusError | 0xc421537e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get pods host-test-container-pod)",
            Reason: "Unauthorized",
            Details: {
                Name: "host-test-container-pod",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get pods host-test-container-pod)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:122

Issues about this test specifically: #33285
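
Two different apiserver failure modes appear in this run: 500 `InternalError` responses (the serviceaccounts watch and pod creation above) and the 401 `Unauthorized` here. When triaging, apimachinery's error helpers distinguish them directly from the returned `StatusError`; a small sketch follows (a recent k8s.io/apimachinery is assumed, and the error values are rebuilt locally to mirror the dumps above).

```go
// Sketch of classifying StatusError values like those dumped above using
// apimachinery's helpers; a recent k8s.io/apimachinery is assumed.
package main

import (
	"errors"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

func main() {
	// Roughly the shape of the failures above: the apiserver answered 500.
	internal := apierrors.NewInternalError(errors.New(
		`Internal Server Error: "/api/v1/watch/namespaces/e2e-tests-kubectl-8zfnn/serviceaccounts"`))

	fmt.Println(apierrors.IsInternalError(internal)) // true
	fmt.Println(apierrors.IsUnauthorized(internal))  // false
	fmt.Println(apierrors.ReasonForError(internal))  // InternalError

	// The udp nodePort failure instead hit a 401 while fetching a pod.
	unauthorized := apierrors.NewUnauthorized("the server has asked for the client to provide credentials")
	fmt.Println(apierrors.IsUnauthorized(unauthorized)) // true
	fmt.Println(apierrors.ReasonForError(unauthorized)) // Unauthorized
}
```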

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203ab0d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #28106 #35197 #37482

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2461/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc421041650>: {
        s: "expected pod \"pod-10f35831-14b8-11e7-acf9-0242ac11000b\" success: gave up waiting for pod 'pod-10f35831-14b8-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-10f35831-14b8-11e7-acf9-0242ac11000b" success: gave up waiting for pod 'pod-10f35831-14b8-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc4226f6220>: {
        s: "expected pod \"pod-584d2ffa-14bd-11e7-acf9-0242ac11000b\" success: gave up waiting for pod 'pod-584d2ffa-14bd-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-584d2ffa-14bd-11e7-acf9-0242ac11000b" success: gave up waiting for pod 'pod-584d2ffa-14bd-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #34226

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc4226c31c0>: {
        s: "expected pod \"pod-df9e052b-14bf-11e7-acf9-0242ac11000b\" success: gave up waiting for pod 'pod-df9e052b-14bf-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-df9e052b-14bf-11e7-acf9-0242ac11000b" success: gave up waiting for pod 'pod-df9e052b-14bf-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc422b17030>: {
        s: "expected pod \"pod-d188d596-14c4-11e7-acf9-0242ac11000b\" success: gave up waiting for pod 'pod-d188d596-14c4-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-d188d596-14c4-11e7-acf9-0242ac11000b" success: gave up waiting for pod 'pod-d188d596-14c4-11e7-acf9-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #30851
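
Every failure in this run is the same symptom: a test pod never reached the Succeeded phase, so the helper gave up after 5m0s with "gave up waiting for pod ... to be 'success or failure'". The sketch below shows that wait in rough form (a recent client-go is assumed, so the context argument and `wait.PollImmediate` details differ from the vendored code these tests used; the namespace and pod name are placeholders).

```go
// Sketch of the "success or failure" wait these volume tests perform:
// poll the pod until its phase is Succeeded or Failed, give up after 5m.
// Assumes a recent client-go; namespace and pod name are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodCompletion(cs kubernetes.Interface, ns, name string) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	phase, err := waitForPodCompletion(cs, "e2e-tests-emptydir-xxxxx", "pod-under-test")
	fmt.Println(phase, err)
}
```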

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2463/
Multiple broken tests:

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc421c049a0>: {
        s: "expected pod \"pod-secrets-60becc45-152f-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-secrets-60becc45-152f-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-60becc45-152f-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-secrets-60becc45-152f-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #35256

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc420d4a8f0>: {
        s: "expected pod \"downwardapi-volume-9b7c8f7e-1518-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'downwardapi-volume-9b7c8f7e-1518-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-9b7c8f7e-1518-11e7-bd32-0242ac110004" success: gave up waiting for pod 'downwardapi-volume-9b7c8f7e-1518-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37423

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc42074fe60>: {
        s: "expected pod \"pod-secrets-d857ea3b-1517-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-secrets-d857ea3b-1517-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-d857ea3b-1517-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-secrets-d857ea3b-1517-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29221

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc421b731c0>: {
        s: "expected pod \"pod-d0b820d0-1520-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-d0b820d0-1520-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-d0b820d0-1520-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-d0b820d0-1520-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #26780

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc4228592b0>: {
        s: "expected pod \"pod-configmaps-84ef8b9f-1533-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-configmaps-84ef8b9f-1533-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-84ef8b9f-1533-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-configmaps-84ef8b9f-1533-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29052

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc421fb8d00>: {
        s: "expected pod \"pod-10da5d55-1528-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-10da5d55-1528-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-10da5d55-1528-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-10da5d55-1528-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37500

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc42118f280>: {
        s: "expected pod \"pod-d9ba1c8f-1514-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-d9ba1c8f-1514-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-d9ba1c8f-1514-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-d9ba1c8f-1514-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37439

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:150
Expected error:
    <*errors.errorString | 0xc421b72f20>: {
        s: "expected pod \"pod-secrets-8235aa00-1524-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-secrets-8235aa00-1524-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-8235aa00-1524-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-secrets-8235aa00-1524-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:64
Expected error:
    <*errors.errorString | 0xc4226f82c0>: {
        s: "expected pod \"pod-configmaps-9a53dba1-1530-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-configmaps-9a53dba1-1530-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-9a53dba1-1530-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-configmaps-9a53dba1-1530-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #35790

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc4203accb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Expected error:
    <*errors.errorString | 0xc42260b2d0>: {
        s: "expected pod \"pod-secrets-5c2af846-152a-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-secrets-5c2af846-152a-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-5c2af846-152a-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-secrets-5c2af846-152a-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37525

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc4222042d0>: {
        s: "expected pod \"pod-configmaps-6c4a0638-1532-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-configmaps-6c4a0638-1532-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-6c4a0638-1532-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-configmaps-6c4a0638-1532-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #32949

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc42201ca00>: {
        s: "expected pod \"pod-configmaps-db6920df-151b-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-configmaps-db6920df-151b-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-db6920df-151b-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-configmaps-db6920df-151b-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #27245

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
Expected error:
    <*errors.errorString | 0xc421f0ea60>: {
        s: "expected pod \"pod-secrets-9cf9b0cb-152d-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-secrets-9cf9b0cb-152d-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-9cf9b0cb-152d-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-secrets-9cf9b0cb-152d-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Expected error:
    <*errors.errorString | 0xc4203accb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154
Expected error:
    <*errors.errorString | 0xc4203accb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30352 #38166

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc421d8e390>: {
        s: "expected pod \"pod-8c4258a3-152e-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-8c4258a3-152e-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-8c4258a3-152e-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-8c4258a3-152e-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #31400

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc421e60fd0>: {
        s: "expected pod \"pod-9a972af3-151c-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-9a972af3-151c-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-9a972af3-151c-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-9a972af3-151c-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:42
Expected error:
    <*errors.errorString | 0xc42200e800>: {
        s: "expected pod \"pod-configmaps-5618b5ac-1519-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-configmaps-5618b5ac-1519-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-5618b5ac-1519-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-configmaps-5618b5ac-1519-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #34827

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:493
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.151.92 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-7z6qb run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil> Waiting for pod e2e-tests-kubectl-7z6qb/success to be running, status is Pending, pod ready: false\n  [] <nil> 0xc422dcbd70 exit status 1 <nil> <nil> true [0xc422246418 0xc422246430 0xc422246448] [0xc422246418 0xc422246430 0xc422246448] [0xc422246428 0xc422246440] [0x9747f0 0x9747f0] 0xc4216e3f20 <nil>}:\nCommand stdout:\nWaiting for pod e2e-tests-kubectl-7z6qb/success to be running, status is Pending, pod ready: false\n\nstderr:\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.151.92 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-7z6qb run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never success -- /bin/sh -c exit 0] []  <nil> Waiting for pod e2e-tests-kubectl-7z6qb/success to be running, status is Pending, pod ready: false
      [] <nil> 0xc422dcbd70 exit status 1 <nil> <nil> true [0xc422246418 0xc422246430 0xc422246448] [0xc422246418 0xc422246430 0xc422246448] [0xc422246428 0xc422246440] [0x9747f0 0x9747f0] 0xc4216e3f20 <nil>}:
    Command stdout:
    Waiting for pod e2e-tests-kubectl-7z6qb/success to be running, status is Pending, pod ready: false
    
    stderr:
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31151 #35586
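
This one is not an API error at all: the harness shells out to kubectl and the child process exited non-zero while the pod was still Pending. A stdlib-only sketch of running the same command and reading the exit code follows (the flags mirror the dump above and are illustrative; the cluster-specific `--server`/`--kubeconfig` flags are omitted).

```go
// Sketch (stdlib only): invoke kubectl the way the test harness does and
// inspect the child's exit status. Flags mirror the failure dump above;
// the cluster-specific --server/--kubeconfig flags are left out.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"--namespace=e2e-tests-kubectl-7z6qb",
		"run", "-i", "--image=gcr.io/google_containers/busybox:1.24",
		"--restart=Never", "success", "--", "/bin/sh", "-c", "exit 0")

	out, err := cmd.CombinedOutput()
	fmt.Printf("combined output:\n%s\n", out)

	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit status:", exitErr.ExitCode()) // the "exit status 1" above
	} else if err != nil {
		fmt.Println("could not run kubectl:", err)
	}
}
```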

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc420d47360>: {
        s: "expected pod \"pod-secrets-ae7db3a9-150e-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-secrets-ae7db3a9-150e-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-ae7db3a9-150e-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-secrets-ae7db3a9-150e-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:68
Expected error:
    <*errors.errorString | 0xc42201dea0>: {
        s: "expected pod \"pod-configmaps-afca5813-151a-11e7-bd32-0242ac110004\" success: gave up waiting for pod 'pod-configmaps-afca5813-151a-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-afca5813-151a-11e7-bd32-0242ac110004" success: gave up waiting for pod 'pod-configmaps-afca5813-151a-11e7-bd32-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2473/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203f90d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc4203f90d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203f90d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450 #43094
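
The DNS conformance failure is another condition timeout: the probe pod never reported successful lookups of the test services' cluster DNS names within the window. For reference, the kind of in-cluster lookup being exercised looks roughly like the stdlib-only sketch below (the name used is the well-known kubernetes service, standing in for the test's randomly generated service names).

```go
// Sketch of the kind of lookup the DNS test exercises from inside a pod:
// resolve a service's cluster DNS name. Stdlib only; the name here is a
// placeholder for the test's generated service names.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}
```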

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2474/
Multiple broken tests:

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:57:03.894: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212b2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201
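
Most failures in this run are the framework's post-test node-readiness check tripping on a single node. A minimal sketch (assuming client-go and a clientset built as in the earlier snippet; not the framework's own code) of the underlying check, reporting nodes whose Ready condition is not True:

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notReadyNodes returns the names of nodes whose Ready condition is not True,
// i.e. the nodes the "All nodes should be ready after test" check would flag.
func notReadyNodes(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			bad = append(bad, n.Name)
		}
	}
	return bad, nil
}
```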

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:26:27.104: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c54278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:07:50.156: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421da4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:20:02.191: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202cb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 14:40:08.319: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222fec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203d2fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:16:36.690: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422066c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:14:16.472: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42203d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:10:05.150: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421290c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:44:39.052: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420663678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:29:46.848: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213bac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:49:41.623: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42226cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:26:33.087: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212bcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.errorString | 0xc4211dba20>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:397

Issues about this test specifically: #37373

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212f8e30>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:43:22 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh            gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:18 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:43:22 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh            gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:18 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 11:42:07 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223
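
The scheduler-predicates suite pre-checks that every pod in kube-system is Running and Ready before it starts; here the fluentd and kube-proxy pods on one node stayed NotReady past the 5m deadline. A hedged sketch (client-go, clientset construction omitted as above) of listing the kube-system pods that fail that check:

```go
import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNotReadySystemPods lists kube-system pods that are not both Running and
// Ready, roughly the condition the precheck above reports per pod.
func printNotReadySystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if p.Status.Phase != corev1.PodRunning || !ready {
			fmt.Printf("%s on %s: phase=%s ready=%v\n",
				p.Name, p.Spec.NodeName, p.Status.Phase, ready)
		}
	}
	return nil
}
```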

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:24:58.196: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421906c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:06:38.957: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421135678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.errorString | 0xc4203d2fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:52:56.969: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212bcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:00:18.070: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42189c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:16:28.343: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c54c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:56:26.836: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420722278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:13:22.976: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e8cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:13:07.228: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213e6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 14:36:53.640: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b24c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:30:25.394: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421adf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:33:00.126: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42227b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 14:47:11.992: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421343678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:49:22.195: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42109e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:53:14.312: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211bb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:59:53.878: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42167d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:07:01.180: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421220c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:09:53.258: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42144b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:25:20.175: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202cb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:48:16.561: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42159ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38254

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 14:43:34.878: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215a5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:11:02.665: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a1b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:03:32.407: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a1b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:17:30.021: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a5e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc422391850>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
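
The Restart test reboots every node and then waits for each node's boot ID (Node.Status.NodeInfo.BootID) to change as proof the reboot happened; here one node's boot ID never changed before the timeout. An illustrative version of that wait (assuming client-go; not the test's own implementation):

```go
import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForBootIDChange polls a node until Status.NodeInfo.BootID differs from
// oldBootID, the condition the restart failure above reports as never met.
func waitForBootIDChange(ctx context.Context, cs kubernetes.Interface, nodeName, oldBootID string, timeout time.Duration) error {
	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			// Treat transient API errors as "not yet": the node may still be rebooting.
			return false, nil
		}
		return node.Status.NodeInfo.BootID != oldBootID, nil
	})
}
```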

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 19:04:20.352: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422243678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:51:30.300: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422273678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203d2fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 14:33:23.900: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420884c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:35:12.528: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213be278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:19:50.112: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d94278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:46:04.606: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421291678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:03:46.946: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42206f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 15:23:13.535: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212de278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:28:24.843: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421078c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-4d3eb25c-9qzh" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:00:15.567: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c36c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 16:56:40.499: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420730c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 18:41:05.507: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212bcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  2 17:03:12.308: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421efe278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203d2fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33285

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2478/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Apr  4 01:07:21.471: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc421cdde10>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626894181, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626894181, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626894298, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626894298, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626894181, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626894181, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626894298, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626894298, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574 #39785
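
Here the deployment's Progressing condition flipped to False with reason ProgressDeadlineExceeded, i.e. the replica set made no progress within progressDeadlineSeconds. A hedged sketch of reading that condition with client-go, using the current apps/v1 types rather than the extensions types shown in the log (clientset construction omitted as above):

```go
import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportProgress prints whether a deployment has exceeded its progress deadline,
// based on the Progressing condition seen in the failure above.
func reportProgress(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentProgressing {
			if c.Status == corev1.ConditionFalse && c.Reason == "ProgressDeadlineExceeded" {
				fmt.Printf("%s/%s timed out progressing: %s\n", ns, name, c.Message)
			} else {
				fmt.Printf("%s/%s progressing: status=%s reason=%s\n", ns, name, c.Status, c.Reason)
			}
			return nil
		}
	}
	fmt.Printf("%s/%s has no Progressing condition\n", ns, name)
	return nil
}
```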

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc4238fdce0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:20, Replicas:5, UpdatedReplicas:4, AvailableReplicas:3, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626896153, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626896153, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626896218, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626896218, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3861566869\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:20, Replicas:5, UpdatedReplicas:4, AvailableReplicas:3, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626896153, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626896153, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63626896218, nsec:0, loc:(*time.Location)(0x3f61360)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63626896218, nsec:0, loc:(*time.Location)(0x3f61360)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3861566869\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42037ac90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2481/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  5 00:25:10.060: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421263400), (*api.Node)(0xc421263678), (*api.Node)(0xc4212638f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:124

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Expected error:
    <*errors.errorString | 0xc421cf6050>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1579

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:77
Expected error:
    <*errors.errorString | 0xc42181e600>: {
        s: "expected pod \"pod-secrets-ab15e6be-19b1-11e7-93ad-0242ac110002\" success: gave up waiting for pod 'pod-secrets-ab15e6be-19b1-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-ab15e6be-19b1-11e7-93ad-0242ac110002" success: gave up waiting for pod 'pod-secrets-ab15e6be-19b1-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37525

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  4 21:03:41.381: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211ec000), (*api.Node)(0xc4211ec278), (*api.Node)(0xc4211ec4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Apr  4 22:25:17.655: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  5 00:21:52.592: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421676000), (*api.Node)(0xc421676278), (*api.Node)(0xc4216764f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38516

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  4 21:28:21.899: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421418a00), (*api.Node)(0xc421418c78), (*api.Node)(0xc421418ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f30220>: {
        s: "11 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nThere are too many bad pods. Please check log for details.",
    }
    11 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    There are too many bad pods. Please check log for details.
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421cf6000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  4 20:52:16.672: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d8ca00), (*api.Node)(0xc421d8cc78), (*api.Node)(0xc421d8cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc42181e270>: {
        s: "expected pod \"pod-configmaps-499dc46d-19b3-11e7-93ad-0242ac110002\" success: gave up waiting for pod 'pod-configmaps-499dc46d-19b3-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-499dc46d-19b3-11e7-93ad-0242ac110002" success: gave up waiting for pod 'pod-configmaps-499dc46d-19b3-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #27245

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:188
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:101

Issues about this test specifically: #36564

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  5 02:32:23.447: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217fe000), (*api.Node)(0xc4217fe278), (*api.Node)(0xc4217fe4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Expected error:
    <*errors.errorString | 0xc421d254b0>: {
        s: "expected pod \"client-containers-7eaf4a73-19c5-11e7-93ad-0242ac110002\" success: gave up waiting for pod 'client-containers-7eaf4a73-19c5-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-7eaf4a73-19c5-11e7-93ad-0242ac110002" success: gave up waiting for pod 'client-containers-7eaf4a73-19c5-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36706

Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:247
Expected error:
    <*errors.errorString | 0xc421cf7000>: {
        s: "Only 0 pods started out of 10",
    }
    Only 0 pods started out of 10
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:215

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
Did not get a good sample size: []
Less than two runs succeeded; aborting.
Not all RC/pod/service trials succeeded: Only 0 pods started out of 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:87

Issues about this test specifically: #30632

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:148
starting pod liveness-exec in namespace e2e-tests-container-probe-47b71
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #37914

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  4 23:17:13.439: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209a2000), (*api.Node)(0xc4209a2278), (*api.Node)(0xc4209a24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37017

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Apr  5 01:59:20.861: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Expected error:
    <*errors.errorString | 0xc421d24600>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:352

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:54
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #37502

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc42179abe0>: {
        s: "expected pod \"pod-0bcf6475-19d9-11e7-93ad-0242ac110002\" success: gave up waiting for pod 'pod-0bcf6475-19d9-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-0bcf6475-19d9-11e7-93ad-0242ac110002" success: gave up waiting for pod 'pod-0bcf6475-19d9-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #34226

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36271

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:152
Expected error:
    <*errors.errorString | 0xc42179a550>: {
        s: "gave up waiting for pod 'pvc-volume-tester-8f13j' to be 'success or failure' after 15m0s",
    }
    gave up waiting for pod 'pvc-volume-tester-8f13j' to be 'success or failure' after 15m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:232

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc421d257a0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Apr  4 21:36:45.732: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-f4976fe7-4lxv\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-f4976fe7-97hg\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-f4976fe7-z3gc\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-bootstrap-e2e-default-pool-f4976fe7-4lxv" is not ready yet, Resource usage on node "gke-bootstrap-e2e-default-pool-f4976fe7-97hg" is not ready yet, Resource usage on node "gke-bootstrap-e2e-default-pool-f4976fe7-z3gc" is not ready yet]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc421d24f00>: {
        s: "expected pod \"client-containers-fafc5844-19c7-11e7-93ad-0242ac110002\" success: gave up waiting for pod 'client-containers-fafc5844-19c7-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-fafc5844-19c7-11e7-93ad-0242ac110002" success: gave up waiting for pod 'client-containers-fafc5844-19c7-11e7-93ad-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #34520

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421cf6fb0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-f4976fe7-4lxv boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-f4976fe7-4lxv boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
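
The Restart [Disruptive] failure is the reboot-detection wait timing out: the test records each node's boot ID before triggering the restart and then waits for the kubelet to report a different one. A minimal sketch of that wait, assuming an illustrative helper name and poll interval rather than the test's actual code:

```go
package e2eutil

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForBootIDChange waits until the node reports a boot ID different from
// oldBootID, which is how a completed reboot is detected. Timing out here
// yields an error like "error waiting for node <name> boot ID to change".
func waitForBootIDChange(c kubernetes.Interface, nodeName, oldBootID string, timeout time.Duration) error {
	err := wait.Poll(5*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // node may be unreachable mid-reboot; keep polling
		}
		return node.Status.NodeInfo.BootID != oldBootID, nil
	})
	if err != nil {
		return fmt.Errorf("error waiting for node %s boot ID to change: %v", nodeName, err)
	}
	return nil
}
```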

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  5 02:07:49.561: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217de000), (*api.Node)(0xc4217de278), (*api.Node)(0xc4217de4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27957
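
Several entries in this run report only the framework's after-each check, "All nodes should be ready after test": the test body itself may have passed, but at least one node was NotReady by teardown. Conceptually that check is just a scan of the nodes' Ready conditions; a rough sketch with a hypothetical helper name, not the framework's code:

```go
package e2eutil

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notReadyNodes returns the names of nodes whose Ready condition is not True.
// The framework's after-test hook fails with "All nodes should be ready after
// test" when this list is non-empty.
func notReadyNodes(c kubernetes.Interface) ([]string, error) {
	nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, fmt.Errorf("listing nodes: %v", err)
	}
	var notReady []string
	for _, node := range nodes.Items {
		ready := false
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			notReady = append(notReady, node.Name)
		}
	}
	return notReady, nil
}
```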

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  4 21:31:33.226: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421448000), (*api.Node)(0xc421448278), (*api.Node)(0xc4214484f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:166
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:153

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:57
Expected
    <[]api.Node | len:0, cap:0>: nil
not to be empty
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:53

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Apr  4 23:38:44.213: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:261
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:202

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421548ec0>: {
        s: "11 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nThere are too many bad pods. Please check log for details.",
    }
    11 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    There are too many bad pods. Please check log for details.
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914
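
These SchedulerPredicates failures are hitting the suite's precondition rather than the predicate logic itself: before each case the suite waits for every pod in kube-system to be Running and Ready, and here that wait timed out. A minimal sketch of such a precondition check, with illustrative names and timeout (the suite's real helper also dumps the offending pods, as in the runs further down):

```go
package e2eutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForSystemPodsReady polls until every pod in kube-system is Running and
// has Ready=True. On timeout it reports how many pods were still bad, which is
// the shape of the "N / M pods in namespace \"kube-system\" are NOT in RUNNING
// and READY state" errors above.
func waitForSystemPodsReady(c kubernetes.Interface, timeout time.Duration) error {
	var bad, total int
	err := wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil
		}
		bad, total = 0, len(pods.Items)
		for _, pod := range pods.Items {
			if pod.Status.Phase != v1.PodRunning || !isPodReady(&pod) {
				bad++
			}
		}
		return bad == 0, nil
	})
	if err != nil {
		return fmt.Errorf("%d / %d pods in namespace %q are NOT in RUNNING and READY state in %v", bad, total, "kube-system", timeout)
	}
	return nil
}

// isPodReady checks the pod's Ready condition.
func isPodReady(pod *v1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}
```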

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc42038ac50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  5 00:07:07.896: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210c8a00), (*api.Node)(0xc4210c8c78), (*api.Node)(0xc4210c8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2492/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Apr  5 23:19:06.832: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Apr  5 23:05:15.823: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Apr  5 21:24:46.141: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc4222c6120>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:371

Issues about this test specifically: #27232

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:351
Expected error:
    <*errors.errorString | 0xc420446ec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36649

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Apr  6 00:04:15.860: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:604
Apr  5 18:31:58.950: timed out waiting for container restart in pod=pod-back-off-image/back-off
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:598

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Apr  5 17:32:07.845: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc420446ec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc4203dc100>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc421da4010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Apr  5 19:09:02.479: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:914
Apr  5 20:32:16.692: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:261
Expected error:
    <*errors.errorString | 0xc420446ec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:202

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr  5 18:10:32.438: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1587

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc4226b06f0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627054717, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627054717, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627054816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627054816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627054717, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627054717, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627054816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627054816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574 #39785
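
For the "lack of progress" case, the test waits for the Deployment's status to report the progress failure (Progressing=False with reason ProgressDeadlineExceeded) and timed out before the observed status matched the full expectation, even though the quoted DeploymentStatus already carries that condition. Inspecting the same condition out-of-band can be done roughly like this (an illustrative snippet against apps/v1, not the test's code, which at the time used the extensions API):

```go
package e2eutil

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deploymentTimedOut reports whether the deployment controller has marked the
// deployment as having exceeded its progress deadline (Progressing=False,
// reason ProgressDeadlineExceeded), which is the signal the failing test is
// waiting to observe in the deployment status.
func deploymentTimedOut(c kubernetes.Interface, ns, name string) (bool, error) {
	d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("getting deployment %s/%s: %v", ns, name, err)
	}
	for _, cond := range d.Status.Conditions {
		if cond.Type == appsv1.DeploymentProgressing {
			return cond.Status == corev1.ConditionFalse && cond.Reason == "ProgressDeadlineExceeded", nil
		}
	}
	return false, nil
}
```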

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc42269e010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2508/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc420346bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc420346bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422b0e000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc420346bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422554010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc422554860>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc420346bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2509/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:39:51.483: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b71678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:412
Apr 11 01:39:22.455: Unexpected kubectl exec output. Wanted "running in container", got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:386

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:36:06.623: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e98c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:43:42.729: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421494278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:23:58.735: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421233678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:42:53.079: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219f6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:49:20.853: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a3cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:39:00.812: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ce0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:03:30.002: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215de278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:13:10.490: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e14278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:00:46.237: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421232c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:43:09.825: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c0cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:52:53.873: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b58c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:32:00.551: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c48278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc420386c40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:15:16.255: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a5c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:17:29.154: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421787678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:46:09.728: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206b4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:14:06.285: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d48278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:58:10.083: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42167cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc420386c40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:28:23.830: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42134ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27703 #32981 #35286

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216039c0>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nkube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nkubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nl7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    kube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    kubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    l7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:53:55.707: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420954278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:29:24.836: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421950c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:11:34.996: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421950c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214275f0>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nkube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nkubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nl7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    kube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    kubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    l7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216790c0>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nkube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nkubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nl7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    kube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    kubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    l7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:28:20.284: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421827678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:59:18.170: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421680278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:26:06.062: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ae8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:39:38.970: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cb2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:33:05.192: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ce8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:15:07.545: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e42c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38174

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:10:40.063: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b9b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:07:45.664: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421532278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 07:09:52.439: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42163cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:29:32.275: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fc7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:23:10.049: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ba4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:40:30.910: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c16278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:26:21.157: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206a6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:07:22.638: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b3cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:04:30.339: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b0c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc420386c40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:10:33.644: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c67678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 07:13:22.826: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422390278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:19:56.835: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421636c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:18:20.973: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42163c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:52:12.132: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cc0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:46:23.638: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fb4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc420386c40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34317

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:32:51.636: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42198ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:37:17.433: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fb5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37774

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f4f720>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nkube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]\nkubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\nl7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-13119d7e-pz43 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:22:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-83n3n                                 gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    kube-dns-autoscaler-395097547-f43hw                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:23:05 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-13119d7e-pz43            gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:21:06 -0700 PDT  }]
    kubernetes-dashboard-3543765157-1mh8h                              gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    l7-default-backend-2234341178-vj8c4                                gke-bootstrap-e2e-default-pool-13119d7e-pz43 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-10 23:59:48 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:16:43.508: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421786278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 03:06:45.167: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421664278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29828

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:56:04.567: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c83678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:35:49.512: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ce8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:36:22.116: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42134a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.errorString | 0xc421c6c520>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:397

Issues about this test specifically: #37373

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 05:50:40.844: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e2a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 02:36:25.710: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213ba278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc42167ae60>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-13119d7e-pz43 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-13119d7e-pz43 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:57:31.075: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421950278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 06:03:54.187: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422006278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:39:17.916: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421006278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:45:47.650: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a67678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 04:45:42.559: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421888278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 11 01:48:58.721: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ce0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2514/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Apr 12 20:16:21.820: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Apr 12 20:32:36.923: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc42038ccc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: ListResources After {e2e.go}

Failed to list resources (error during ./cluster/gce/list-resources.sh: signal: interrupt):
Project: k8s-jkns-e2e-gci-gke-staging
Region: us-central1
Zone: us-central1-f
Instance prefix: gke-bootstrap-e2e
Network: bootstrap-e2e
Provider: gke


[ instance-templates ]

Issues about this test specifically: #42073 #43959

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Apr 12 19:16:08.841: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2527/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc42212d370>: {
        s: "expected pod \"pod-9bb4cfb7-235b-11e7-ad78-0242ac110009\" success: gave up waiting for pod 'pod-9bb4cfb7-235b-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-9bb4cfb7-235b-11e7-ad78-0242ac110009" success: gave up waiting for pod 'pod-9bb4cfb7-235b-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #34226

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc422b0adf0>: {
        s: "expected pod \"pod-97e9bafe-235d-11e7-ad78-0242ac110009\" success: gave up waiting for pod 'pod-97e9bafe-235d-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-97e9bafe-235d-11e7-ad78-0242ac110009" success: gave up waiting for pod 'pod-97e9bafe-235d-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #30851

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc420a8e050>: {
        s: "expected pod \"pod-5bad66db-2330-11e7-ad78-0242ac110009\" success: gave up waiting for pod 'pod-5bad66db-2330-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-5bad66db-2330-11e7-ad78-0242ac110009" success: gave up waiting for pod 'pod-5bad66db-2330-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37071

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc4223cead0>: {
        s: "expected pod \"pod-e96b6a37-2347-11e7-ad78-0242ac110009\" success: gave up waiting for pod 'pod-e96b6a37-2347-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-e96b6a37-2347-11e7-ad78-0242ac110009" success: gave up waiting for pod 'pod-e96b6a37-2347-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc4203fd140>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc422ca2f00>: {
        s: "expected pod \"pod-e551fd7f-234e-11e7-ad78-0242ac110009\" success: gave up waiting for pod 'pod-e551fd7f-234e-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-e551fd7f-234e-11e7-ad78-0242ac110009" success: gave up waiting for pod 'pod-e551fd7f-234e-11e7-ad78-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #29224 #32008 #37564

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2528/
Multiple broken tests:

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Expected error:
    <*errors.errorString | 0xc4238b5cc0>: {
        s: "expected pod \"client-containers-06eefb0e-23a0-11e7-b096-0242ac110003\" success: gave up waiting for pod 'client-containers-06eefb0e-23a0-11e7-b096-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-06eefb0e-23a0-11e7-b096-0242ac110003" success: gave up waiting for pod 'client-containers-06eefb0e-23a0-11e7-b096-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36706

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc420350da0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc421312c60>: {
        s: "error waiting for deployment \"test-rolling-update-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rolling-update-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:333

Issues about this test specifically: #31075 #36286 #38041

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2535/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Apr 19 14:51:22.294: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:423
Expected error:
    <*errors.errorString | 0xc420388d50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421ae92c0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 9, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 9, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Apr 19 18:53:55.555: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2536/
Multiple broken tests:

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42208bb40>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211
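
The same "error creating federation client config" message recurs in every [Feature:Federation] failure below: the kubeconfig used by this run has no usable "federation-cluster" context, so both validation errors ("context was not found" and "cluster has no server defined") are reported together. A minimal, hypothetical Go sketch of that load path (using the standard client-go clientcmd loader, which may differ from the e2e framework's own helper):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the default kubeconfig but force the "federation-cluster" context,
        // mirroring what the federation e2e setup expects to find.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        overrides := &clientcmd.ConfigOverrides{CurrentContext: "federation-cluster"}

        cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
        if err != nil {
            // With no such context (or a cluster stanza missing "server:"), this prints an
            // "invalid configuration: [context was not found for specified context: ...]" error.
            fmt.Println("error creating federation client config:", err)
            return
        }
        fmt.Println("federation API server:", cfg.Host)
    }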

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4228342c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203bfdf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421acc4b0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34102

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e60530>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4227ce1a0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34764

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4228172b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Initial Resources [Feature:InitialResources] [Flaky] should set initial resources based on historical data {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/initial_resources.go:51
Expected error:
    <*errors.StatusError | 0xc4210d8b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server rejected our request for an unknown reason (post services ir-0-ctrl)",
            Reason: "BadRequest",
            Details: {
                Name: "ir-0-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "incorrect function argument",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 400,
        },
    }
    the server rejected our request for an unknown reason (post services ir-0-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227e49a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Node gke-bootstrap-e2e-default-pool-c1be1250-cn35 did not run out of disk within 5m0s
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:251

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227225e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 20 02:42:01.963: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421896980>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:100
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4214c7010>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] ReplicaSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422f05e60>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4220747d0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31317 #31457

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422f99a70>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:131
Apr 20 00:03:17.205: Test failed; at least one node failed to reboot in the time given.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:169

Issues about this test specifically: #33703 #36230

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:117
Expected error:
    <*errors.errorString | 0xc42277dbe0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.1.681+94a5074bd6223d --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.1.681+94a5074bd6223d --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:107

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421589e20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422ee9b20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422416b20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4224f94f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:596
Expected error:
    <*errors.errorString | 0xc42276b540>: {
        s: "gave up waiting for pod 'write-pod-j3n3v' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-j3n3v' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_perf.go:153
Apr 19 23:36:25.260: Failed parsing value bandwidth port from the string '13360508550
' as an integer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util_iperf.go:102
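
The bandwidth value quoted above ("13360508550" followed by a newline) is what the test fails to parse; a small, hypothetical Go sketch (not the framework's actual iperf parsing code) showing why the raw string is rejected and how trimming whitespace lets the same value parse:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func main() {
        raw := "13360508550\n" // bandwidth field as it appears in the failure above

        // The raw string fails to parse because of the trailing newline,
        // matching the "Failed parsing value bandwidth port" error.
        if _, err := strconv.ParseInt(raw, 10, 64); err != nil {
            fmt.Println("parse failed:", err)
        }

        // Trimming surrounding whitespace first lets the value parse cleanly.
        n, err := strconv.ParseInt(strings.TrimSpace(raw), 10, 64)
        fmt.Println(n, err) // 13360508550 <nil>
    }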

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421b9e230>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:563
Expected error:
    <*errors.errorString | 0xc421420030>: {
        s: "gave up waiting for pod 'write-pod-4nl29' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-4nl29' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc421904ed0>: {
        s: "gave up waiting for pod 'write-pod-61cb3' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-61cb3' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4225ca5b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 19 22:39:47.811: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422293420>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc422744f60>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754
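
"autoscaler not enabled" is the suite's precondition check, not a scheduling failure: the [Feature:ClusterSizeAutoscaling*] tests bail out early when the target node pool has no cluster autoscaler configured. As a hedged sketch of how a harness could enable it before running these tests (the gcloud flags, cluster, pool, and zone below are assumptions, not values taken from this job's config):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical cluster/pool/zone values; a real harness would supply its own.
	cmd := exec.Command("gcloud",
		"container", "clusters", "update", "bootstrap-e2e",
		"--enable-autoscaling", "--min-nodes=1", "--max-nodes=5",
		"--node-pool=default-pool", "--zone=us-central1-f")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "enabling autoscaling failed: %v\n", err)
	}
}
```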

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc422292040>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420edb110>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203bfdf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*exec.ExitError | 0xc42156ee20>: {
        ProcessState: {
            pid: 2496,
            status: 256,
            rusage: {
                Utime: {Sec: 0, Usec: 300000},
                Stime: {Sec: 0, Usec: 76000},
                Maxrss: 30912,
                Ixrss: 0,
                Idrss: 0,
                Isrss: 0,
                Minflt: 16828,
                Majflt: 0,
                Nswap: 0,
                Inblock: 0,
                Oublock: 32,
                Msgsnd: 0,
                Msgrcv: 0,
                Nsignals: 0,
                Nvcsw: 45,
                Nivcsw: 37,
            },
        },
        Stderr (decoded from the byte slice): "ERROR: (gcloud.auth.print-access-token) There was a problem refreshing your current auth tokens: internal_failure\nPlease run:\n\n  $ gcloud auth login\n\nto obtain new credentials, or if you have already logged in with a\ndifferent account:\n\n  $ gcloud config set account ACCOUNT\n\nto select an already authenticated account to use.\n",
    }
    exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:323

Issues about this test specifically: #33891 #43360

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:605
Expected error:
    <*errors.errorString | 0xc421904130>: {
        s: "gave up waiting for pod 'write-pod-ndtjh' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-ndtjh' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:547
Expected error:
    <*errors.errorString | 0xc422362250>: {
        s: "gave up waiting for pod 'write-pod-hfkvp' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-hfkvp' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42277cb60>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4218dadd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc42321a3d0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.1.681+94a5074bd6223d --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.1.681+94a5074bd6223d --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc4213eab90>: {
        s: "gave up waiting for pod 'write-pod-ghfzz' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-ghfzz' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2537/
Multiple broken tests:

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 20 07:06:47.037: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4234cec60>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS should be able to discover a federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420c29150>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421fee4a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422a3b3a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421e22460>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Opaque resources [Feature:OpaqueResources] should account opaque integer resources in pods with multiple containers. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:62
Expected error:
    <*errors.errorString | 0xc42038ccf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:256

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420c79500>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:108
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:69
Expected error:
    <*errors.errorString | 0xc421e9cb60>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.1.683+2c6fbc95c43b9f --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.1.683+2c6fbc95c43b9f --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:61

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420c04110>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:52
Expected error:
    <*errors.errorString | 0xc421492660>: {
        s: "expected pod \"pod-9b6520dc-25d9-11e7-897d-0242ac110008\" success: pods \"pod-9b6520dc-25d9-11e7-897d-0242ac110008\" not found",
    }
    expected pod "pod-9b6520dc-25d9-11e7-897d-0242ac110008" success: pods "pod-9b6520dc-25d9-11e7-897d-0242ac110008" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4234a81d0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420c78760>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4202965c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42196c230>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42303b000>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422bb6a40>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:255

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:596
Expected error:
    <*errors.errorString | 0xc42208b550>: {
        s: "gave up waiting for pod 'write-pod-983gs' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-983gs' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4227fa020>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34102

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420b17ea0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422a02fc0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421493f10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420680080>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4229d0070>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34211

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421530050>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422877a60>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Federation replicasets [Feature:Federation] ReplicaSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423189020>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 20 07:58:22.744: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53
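
The node-pool failure above is gcloud refusing to run without a compute zone; the error text already lists the remedies. As a sketch, a harness that shells out could supply the zone through the CLOUDSDK_COMPUTE_ZONE environment variable instead of the --zone flag (pool, cluster, and zone names here are placeholders):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Placeholder names; a real harness would derive these from its config.
	cmd := exec.Command("gcloud", "container", "node-pools", "create", "test-pool",
		"--cluster=bootstrap-e2e", "--num-nodes=2")

	// Provide the zone via the environment, as the error message suggests.
	cmd.Env = append(os.Environ(), "CLOUDSDK_COMPUTE_ZONE=us-central1-f")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1) // stderr already carries gcloud's own error output
	}
}
```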

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc4218681b0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.1.683+2c6fbc95c43b9f --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.1.683+2c6fbc95c43b9f --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc423f543a0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421530170>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34764

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f98b80>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_perf.go:153
Apr 20 06:59:43.989: Failed parsing value bandwidth port from the string '3528585620
' as an integer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util_iperf.go:102
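
The IPerf failure is a plain parse error: the bandwidth field still carries its trailing newline ('3528585620\n'), so the integer conversion rejects it. A minimal illustration of the likely cause and the defensive fix (this is not the util_iperf.go code itself):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	raw := "3528585620\n" // the value exactly as it appears in the failure message

	// Parsing the raw string fails because of the trailing newline.
	if _, err := strconv.ParseInt(raw, 10, 64); err != nil {
		fmt.Println("raw parse fails:", err)
	}

	// Trimming surrounding whitespace first lets the same value parse cleanly.
	n, err := strconv.ParseInt(strings.TrimSpace(raw), 10, 64)
	if err != nil {
		fmt.Println("unexpected:", err)
		return
	}
	fmt.Println("parsed bandwidth:", n) // 3528585620
}
```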

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422989890>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229413f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostIPC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:104
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has no authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422c93790>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4228c7610>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2538/
Multiple broken tests:

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420ab1290>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203d2f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc423b60030>: {
        s: "gave up waiting for pod 'write-pod-pcqq8' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-pcqq8' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4214e5b60>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423ee9870>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423ede5c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:255

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] CephFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:657
Expected error:
    <*errors.errorString | 0xc4203d2f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203d2f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f42b30>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc423b54180>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f52aa0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 270081163819
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 20 16:40:16.686: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f01250>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421506370>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 20 17:03:38.016: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:100
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f41440>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4211ae180>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] etcd uUpgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Expected error:
    <*errors.errorString | 0xc421facec0>: {
        s: "EtcdUpgrade() is not implemented for provider gke",
    }
    EtcdUpgrade() is not implemented for provider gke
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:127

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4214dc130>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421cf38f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423b2dfc0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42113b8c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42253dda0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:547
Expected error:
    <*errors.errorString | 0xc421cf3030>: {
        s: "gave up waiting for pod 'write-pod-7g3px' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-7g3px' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371
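The PersistentVolumes flakes in this run share one symptom: the write pod never reaches a terminal phase within the 5-minute window checked at persistent_volumes.go:371. As a rough illustration of what "waiting for pod to be 'success or failure' after 5m0s" means mechanically (this is not the framework's actual helper), a poll built on apimachinery's `wait.PollImmediate`; `checkPodPhase` is a hypothetical stand-in for a client-go `Pods().Get` call:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkPodPhase is a hypothetical stand-in for a client-go Pods().Get() call
// that would read pod.Status.Phase; it is not part of the e2e framework.
func checkPodPhase(name string) (string, error) {
	_ = name
	return "Pending", nil // placeholder
}

func main() {
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		phase, err := checkPodPhase("write-pod-7g3px") // pod name taken from this run
		if err != nil {
			return false, err
		}
		// stop waiting once the pod is terminal, i.e. "success or failure"
		return phase == "Succeeded" || phase == "Failed", nil
	})
	if err != nil {
		fmt.Printf("gave up waiting: %v\n", err) // the symptom reported above
	}
}
```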

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420decd30>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected
    <int>: 5
to equal
    <int>: 4
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:80

Issues about this test specifically: #33793 #35108 #35744

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc422d86fb0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.96+3153cd6841cc51 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.96+3153cd6841cc51 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172
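This failure and the similar node-upgrade failure below both come back from `gcloud container clusters upgrade` with a 400, which suggests the requested `--cluster-version` is not one GKE will accept for this cluster. A sketch of querying the allowed versions first, wrapped in Go the same way the harness shells out to gcloud; `get-server-config` is a real gcloud subcommand, the project and zone are copied from the failing command, and the wrapper itself is only illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("gcloud", "container", "get-server-config",
		"--project=k8s-jkns-e2e-gci-gke-staging",
		"--zone=us-central1-f").CombinedOutput()
	if err != nil {
		fmt.Printf("gcloud failed: %v\n%s", err, out)
		return
	}
	// The validMasterVersions / validNodeVersions lists in this output are what
	// the 400 responses ("latest allowed version", "bad desired node version")
	// are measured against.
	fmt.Printf("%s", out)
}
```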

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc422744b00>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.96+3153cd6841cc51 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.96+3153cd6841cc51 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423ef0250>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #32732 #35392 #36074

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423b41da0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc421269a70>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.96+3153cd6841cc51 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.96+3153cd6841cc51 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423ed8360>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42251d3f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4228fe2c0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754
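The `autoscaler not enabled` failures are a precondition problem: the `bootstrap-e2e` cluster's node pool has no autoscaling configured, so every [Feature:ClusterSizeAutoscalingScaleUp] case bails out at cluster_size_autoscaling.go:87. A hedged sketch of turning it on; the pool name is inferred from node names elsewhere in this run, the 1..5 bounds are placeholders, and older gcloud releases may require the beta track for these flags:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("gcloud", "container", "clusters", "update", "bootstrap-e2e",
		"--project=k8s-jkns-e2e-gci-gke-staging",
		"--zone=us-central1-f",
		"--node-pool=default-pool", // assumed pool name
		"--enable-autoscaling", "--min-nodes=1", "--max-nodes=5").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("gcloud failed: %v\n", err)
	}
}
```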

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4224a13c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:605
Expected error:
    <*errors.errorString | 0xc421e3bd10>: {
        s: "gave up waiting for pod 'write-pod-f616k' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-f616k' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4211042d0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

Failed: [k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420bf3ee0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27738

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:596
Expected error:
    <*errors.errorString | 0xc423b61720>: {
        s: "gave up waiting for pod 'write-pod-m1hml' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-m1hml' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:385
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.222.115 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-lqnzm exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e \"CREATE DATABASE IF NOT EXISTS foo;\"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n [] <nil> 0xc4223cab40 exit status 1 <nil> <nil> true [0xc4200b0378 0xc4200b03b0 0xc4200b03e8] [0xc4200b0378 0xc4200b03b0 0xc4200b03e8] [0xc4200b03a8 0xc4200b03d0] [0x9747f0 0x9747f0] 0xc4224153e0 <nil>}:\nCommand stdout:\n\nstderr:\nError: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.222.115 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-lqnzm exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
     [] <nil> 0xc4223cab40 exit status 1 <nil> <nil> true [0xc4200b0378 0xc4200b03b0 0xc4200b03e8] [0xc4200b0378 0xc4200b03b0 0xc4200b03e8] [0xc4200b03a8 0xc4200b03d0] [0x9747f0 0x9747f0] 0xc4224153e0 <nil>}:
    Command stdout:
    
    stderr:
    Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077
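The CockroachDB failure is the client refusing to run without TLS material: `cockroach sql` defaults to secure mode and finds no CA certificate inside the pod. A sketch of replaying the same `kubectl exec` with cockroach's `--insecure` flag (which the error itself suggests), just to separate the certificate problem from a genuinely broken StatefulSet; the server address and namespace are the ones from this run and will differ elsewhere:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"--server=https://130.211.222.115",        // from this run
		"--kubeconfig=/workspace/.kube/config",
		"--namespace=e2e-tests-statefulset-lqnzm", // from this run
		"exec", "cockroachdb-0", "--", "/bin/sh", "-c",
		`/cockroach/cockroach sql --insecure --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"`)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("exec failed: %v\n", err)
	}
}
```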

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:117
Expected error:
    <*errors.errorString | 0xc42259e670>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.82+2425f58133739b --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.82+2425f58133739b --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:107

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423b3acc0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2539/
Multiple broken tests:

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc422e96130>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.100+d3b3c31147af7d --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.100+d3b3c31147af7d --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:563
Expected error:
    <*errors.errorString | 0xc421c30030>: {
        s: "gave up waiting for pod 'write-pod-g9447' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-g9447' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421d96000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 140046320638
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282
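The Density assertion is in nanoseconds: the measured 140046320638 is roughly 2m20s against the 2m (120000000000 ns) cutoff, so this run is about 20 seconds over budget. A trivial conversion, in case the raw numbers are easier to read as durations:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	got := time.Duration(140046320638)   // measured value from the assertion (ns)
	limit := time.Duration(120000000000) // the test's threshold (ns)
	fmt.Printf("%v > %v = %v\n", got, limit, got > limit) // 2m20.046320638s > 2m0s = true
}
```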

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 20 20:25:15.517: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421ad88e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has no authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422501180>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422de4b10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42252dae0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:117
Expected error:
    <*errors.errorString | 0xc4226905b0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.100+d3b3c31147af7d --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.100+d3b3c31147af7d --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:107

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423736580>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:108
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc4204d74b0>: {
        s: "Only 243 pods started out of 285",
    }
    Only 243 pods started out of 285
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:199

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42069f1d0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227889c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31317 #31457

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_perf.go:153
Apr 21 00:59:49.206: Failed parsing value bandwidth port from the string '3627545109
' as an integer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util_iperf.go:102
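The IPerf failure at util_iperf.go:102 is a parse error rather than a networking one: the bandwidth field arrives with a trailing newline ('3627545109\n'), which Go's integer parsing rejects. A minimal sketch of the kind of trimming that would accept that value; this is not the suite's parser, just an illustration:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	raw := "3627545109\n" // the exact string the test failed to parse
	v, err := strconv.ParseInt(strings.TrimSpace(raw), 10, 64)
	if err != nil {
		fmt.Println("still unparseable:", err)
		return
	}
	fmt.Println("parsed bandwidth value:", v) // 3627545109
}
```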

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421b2a5e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422863180>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.StatusError | 0xc421b8eb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-nettest-lgb9r/pods/netserver-0\\\"\") has prevented the request from succeeding (get pods netserver-0)",
            Reason: "InternalError",
            Details: {
                Name: "netserver-0",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-nettest-lgb9r/pods/netserver-0\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-nettest-lgb9r/pods/netserver-0\"") has prevented the request from succeeding (get pods netserver-0)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:546

Issues about this test specifically: #34250

Failed: [k8s.io] Initial Resources [Feature:InitialResources] [Flaky] should set initial resources based on historical data {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/initial_resources.go:51
Expected error:
    <*errors.StatusError | 0xc421e8b780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server rejected our request for an unknown reason (post services ir-0-ctrl)",
            Reason: "BadRequest",
            Details: {
                Name: "ir-0-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "incorrect function argument",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 400,
        },
    }
    the server rejected our request for an unknown reason (post services ir-0-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229fe6b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423762d20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:385
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.24.7 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-jxfp1 exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e \"CREATE DATABASE IF NOT EXISTS foo;\"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n [] <nil> 0xc4225f71d0 exit status 1 <nil> <nil> true [0xc420038280 0xc420038298 0xc4200382b0] [0xc420038280 0xc420038298 0xc4200382b0] [0xc420038290 0xc4200382a8] [0x9747f0 0x9747f0] 0xc421a6bb00 <nil>}:\nCommand stdout:\n\nstderr:\nError: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.24.7 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-jxfp1 exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
     [] <nil> 0xc4225f71d0 exit status 1 <nil> <nil> true [0xc420038280 0xc420038298 0xc4200382b0] [0xc420038280 0xc420038298 0xc4200382b0] [0xc420038290 0xc4200382a8] [0x9747f0 0x9747f0] 0xc421a6bb00 <nil>}:
    Command stdout:
    
    stderr:
    Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422e9d320>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc4220a3cf0>: {
        s: "gave up waiting for pod 'write-pod-543v0' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-543v0' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Node gke-bootstrap-e2e-default-pool-263b92f8-4dvc did not run out of disk within 5m0s
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:251

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420a02df0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:605
Expected error:
    <*errors.errorString | 0xc421b2a550>: {
        s: "gave up waiting for pod 'write-pod-jsbjq' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-jsbjq' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422dfa9c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420a29360>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4222da2e0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc421013000>: {
        s: "gave up waiting for pod 'write-pod-fpn5p' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-fpn5p' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4211e9ab0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203fae30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229fe330>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422404530>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422e965a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421b34030>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34211

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2540/
Multiple broken tests:

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4210fc2d0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34102

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostIPC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:104
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 160077609250
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4231d0f80>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227a7330>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421a1c1f0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] Initial Resources [Feature:InitialResources] [Flaky] should set initial resources based on historical data {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/initial_resources.go:51
Expected error:
    <*errors.StatusError | 0xc421031a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server rejected our request for an unknown reason (post services ir-0-ctrl)",
            Reason: "BadRequest",
            Details: {
                Name: "ir-0-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "incorrect function argument",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 400,
        },
    }
    the server rejected our request for an unknown reason (post services ir-0-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420be3cd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #32732 #35392 #36074

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227cf140>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42290a130>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation events [Feature:Federation] Event objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227ce080>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #30644 #30831

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:108
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233
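
The SELinux relabeling output above is easier to read once you know it is Gomega's negated substring matcher: the test appears to write a marker string ("hello") into a shared volume and then assert that a pod running under a different SELinux level cannot read it back, so a failure prints both the actual value and the expected substring, which here are identical. A small self-contained sketch of that assertion shape; readBackFromSecondPod is a hypothetical stand-in for the kubectl-exec plumbing the real test uses:

    package selinuxexample

    import (
        "testing"

        . "github.com/onsi/gomega"
    )

    // TestVolumeNotReadable mirrors the shape of the failing assertion. When the second
    // pod can read the file, Gomega reports exactly:
    //   Expected <string>: hello not to contain substring <string>: hello
    func TestVolumeNotReadable(t *testing.T) {
        g := NewGomegaWithT(t)
        content := readBackFromSecondPod()
        g.Expect(content).NotTo(ContainSubstring("hello"))
    }

    func readBackFromSecondPod() string { return "hello" } // hypothetical; always "fails" here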

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc42166e000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4206c4920>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42199a930>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc42166e1d0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc4225a3700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"nfs-server\" not found",
            Reason: "NotFound",
            Details: {Name: "nfs-server", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "nfs-server" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:61

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422b60a00>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27738

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc422154000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42208eeb0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420bfd330>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420d26e50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc421a1d420>: {
        s: "gave up waiting for pod 'write-pod-9b61r' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-9b61r' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371
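
The "gave up waiting for pod 'write-pod-...' to be 'success or failure' after 5m0s" failures mean the NFS-backed write pod never reached a terminal phase within the framework's five minute budget, consistent with the nfs-server pod already being gone by AfterSuite. The wait itself is just a poll on the pod phase; a minimal sketch of the equivalent loop, assuming a recent client-go with context-taking signatures rather than the framework's own helper:

    package pvwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodSuccessOrFailure polls a pod until it reaches Succeeded or Failed,
    // mirroring the shape of the helper that produced the error above.
    func waitForPodSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
        err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded, corev1.PodFailed:
                return true, nil
            }
            return false, nil
        })
        if err != nil {
            return fmt.Errorf("gave up waiting for pod %q to be 'success or failure' after 5m0s", name)
        }
        return nil
    }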

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] CephFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:657
Expected error:
    <*errors.errorString | 0xc4203d30f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254
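
The identical "timed out waiting for the condition" message on the CephFS, iSCSI, and Ceph RBD mount tests (and the opaque-resource tests later in this run) is not one shared bug: it is the generic error apimachinery's wait helpers return whenever a polled condition never becomes true, so the useful signal is which volume server pod failed to come up, not the text itself. For illustration:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Any condition that never returns true yields wait.ErrWaitTimeout,
        // whose message is exactly the one repeated throughout this run.
        err := wait.Poll(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
            return false, nil // e.g. "is the cephfs-server pod running yet?"
        })
        fmt.Println(err) // prints: timed out waiting for the condition
    }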

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc420961bb0>: {
        s: "gave up waiting for pod 'write-pod-hs69t' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-hs69t' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4221541d0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422622000>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc4216ee180>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.104+cd37aaafb3b057 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.104+cd37aaafb3b057 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75

Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:385
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.229.120 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-h5s4g exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e \"CREATE DATABASE IF NOT EXISTS foo;\"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n [] <nil> 0xc422b2c1b0 exit status 1 <nil> <nil> true [0xc4200362e8 0xc420036330 0xc420036360] [0xc4200362e8 0xc420036330 0xc420036360] [0xc420036318 0xc420036350] [0x9747f0 0x9747f0] 0xc4231752c0 <nil>}:\nCommand stdout:\n\nstderr:\nError: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.229.120 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-h5s4g exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
     [] <nil> 0xc422b2c1b0 exit status 1 <nil> <nil> true [0xc4200362e8 0xc420036330 0xc420036360] [0xc4200362e8 0xc420036330 0xc420036360] [0xc420036318 0xc420036350] [0x9747f0 0x9747f0] 0xc4231752c0 <nil>}:
    Command stdout:
    
    stderr:
    Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:563
Expected error:
    <*errors.errorString | 0xc42075da20>: {
        s: "gave up waiting for pod 'write-pod-1bk25' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-1bk25' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] etcd Upgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Expected error:
    <*errors.errorString | 0xc42166e020>: {
        s: "EtcdUpgrade() is not implemented for provider gke",
    }
    EtcdUpgrade() is not implemented for provider gke
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:127

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420a0b130>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e6e370>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:117
Expected error:
    <*errors.errorString | 0xc4227cf4f0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.104+cd37aaafb3b057 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.104+cd37aaafb3b057 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:107

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203d30f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc42264d200>: {
        s: "gave up waiting for pod 'write-pod-djfzp' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-djfzp' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc421980440>: {
        s: "Only 238 pods started out of 285",
    }
    Only 238 pods started out of 285
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:199

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc422447010>: {
        s: "Only 238 pods started out of 300",
    }
    Only 238 pods started out of 300
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:199

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2542/
Multiple broken tests:

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420f628c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422994b50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:547
Expected error:
    <*errors.errorString | 0xc42105b8f0>: {
        s: "gave up waiting for pod 'write-pod-gvqk7' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-gvqk7' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc421b4cf00>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc421f84850>: {
        s: "Only 238 pods started out of 285",
    }
    Only 238 pods started out of 285
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:199

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc42299c660>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75

Failed: [k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42201a790>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27738

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42294d410>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Opaque resources [Feature:OpaqueResources] should account opaque integer resources in pods with multiple containers. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:62
Expected error:
    <*errors.errorString | 0xc4203acd50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:256

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] CephFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:657
Expected error:
    <*errors.errorString | 0xc4203acd50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421b05fb0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229f8f40>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42201a4c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4221f0000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:117
Expected error:
    <*errors.errorString | 0xc4229ef030>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:107

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203acd50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 22 00:22:07.867: Could not reach HTTP service through 104.154.215.145:80 after 2s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2530

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203acd50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Opaque resources [Feature:OpaqueResources] should not schedule pods that exceed the available amount of opaque integer resource. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:62
Expected error:
    <*errors.errorString | 0xc4203acd50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:256

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422961ae0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227bf420>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:605
Expected error:
    <*errors.errorString | 0xc422ddbf60>: {
        s: "gave up waiting for pod 'write-pod-v4mjf' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-v4mjf' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229af9c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:563
Expected error:
    <*errors.errorString | 0xc421b4c030>: {
        s: "gave up waiting for pod 'write-pod-pwbp8' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-pwbp8' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42251f270>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 21 21:59:27.311: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53
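
The node-pool failure above is a harness configuration problem rather than a GKE regression: the gcloud invocation never received a zone, so the command aborts before the pool is created. Either remedy gcloud suggests works; a small sketch of the environment-variable route as it might look from Go test code, with the pool, cluster, and zone values as placeholders:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("gcloud", "container", "node-pools", "create", "test-pool",
            "--cluster=bootstrap-e2e", "--num-nodes=2")
        // Supply the zone via the environment instead of a --zone flag, the second
        // option the gcloud error message offers.
        cmd.Env = append(os.Environ(), "CLOUDSDK_COMPUTE_ZONE=us-central1-f")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("node pool creation failed:", err)
        }
        fmt.Println(string(out))
    }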

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421fc2200>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4229c6130>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33793 #35108 #35744

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:596
Expected error:
    <*errors.errorString | 0xc4227beb40>: {
        s: "gave up waiting for pod 'write-pod-k4k6r' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-k4k6r' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229c0790>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc4229f9160>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.166+a121d1c674d2b7 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42294b360>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc4229f8030>: {
        s: "gave up waiting for pod 'write-pod-gfmxh' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-gfmxh' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has no authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229f82d0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421070cd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422994230>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421b4c000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4230bfd60>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4213c2900>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc4230be050>: {
        s: "gave up waiting for pod 'write-pod-m32wk' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-m32wk' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc423133980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"nfs-server\" not found",
            Reason: "NotFound",
            Details: {Name: "nfs-server", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "nfs-server" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:61

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421b4c130>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34211

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2543/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:547
Expected error:
    <*errors.errorString | 0xc420ffc7b0>: {
        s: "gave up waiting for pod 'write-pod-kk8fh' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-kk8fh' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4222e2b00>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4206729f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421ce8cf0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation events [Feature:Federation] Event objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422f55000>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #30644 #30831

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420ecb890>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4204500e0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33793 #35108 #35744

Failed: [k8s.io] etcd uUpgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Expected error:
    <*errors.errorString | 0xc422f62c30>: {
        s: "EtcdUpgrade() is not implemented for provider gke",
    }
    EtcdUpgrade() is not implemented for provider gke
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:127

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420d34320>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4210e3030>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421fc1c10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42131cdd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #32732 #35392 #36074

Failed: [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
Apr 22 04:53:12.797: Failed to create node pool np-ssd: Err: exit status 1
ERROR: (gcloud.alpha.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:55
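
Note: the GKE local SSD and node-pool failures are environment problems, exactly as the gcloud error says: no compute zone is set for the node-pool create call. A small sketch of how a harness can supply the zone explicitly, either as a flag or via CLOUDSDK_COMPUTE_ZONE, is below; the zone value, pool name, and cluster name are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	zone := "us-central1-f" // illustrative; use the job's real zone

	// Option 1: pass the zone explicitly on the command line.
	cmd := exec.Command("gcloud", "alpha", "container", "node-pools", "create",
		"np-ssd", "--cluster=bootstrap-e2e", "--zone="+zone)

	// Option 2: export it for every gcloud call made by this process.
	cmd.Env = append(os.Environ(), "CLOUDSDK_COMPUTE_ZONE="+zone)

	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("gcloud failed:", err)
	}
}
```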

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc421941f20>: {
        s: "gave up waiting for pod 'write-pod-4plbt' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-4plbt' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421324750>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421359500>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421abc030>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420b765b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4210bc000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4223b87f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] ReplicaSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4228181d0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4227f2390>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420c19780>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:69
Expected error:
    <*errors.errorString | 0xc422ee0ef0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.168+e0ba40b67c76b0 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.168+e0ba40b67c76b0 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:61

Failed: [k8s.io] Opaque resources [Feature:OpaqueResources] should account opaque integer resources in pods with multiple containers. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:62
Expected error:
    <*errors.errorString | 0xc4203acd00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:256

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203acd00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:605
Expected error:
    <*errors.errorString | 0xc420c3e120>: {
        s: "gave up waiting for pod 'write-pod-0b886' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-0b886' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4223e2090>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422f55c30>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc42209d960>: {
        s: "Only 418 pods started out of 475",
    }
    Only 418 pods started out of 475
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:199

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc4210f9c80>: {
        s: "gave up waiting for pod 'write-pod-x6gh6' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-x6gh6' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420225f90>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc4222b2030>: {
        s: "gave up waiting for pod 'write-pod-3s5cb' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-3s5cb' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostIPC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:104
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2544/
Multiple broken tests:

Failed: [k8s.io] Federation events [Feature:Federation] Event objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421a9a050>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #30644 #30831

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc420ba61e0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34102

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421a3cc40>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc4215d2720>: {
        s: "too high pod startup latency 90th percentile: 5.218155334s",
    }
    too high pod startup latency 90th percentile: 5.218155334s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:627

Failed: [k8s.io] etcd uUpgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Expected error:
    <*errors.errorString | 0xc4216e7ef0>: {
        s: "EtcdUpgrade() is not implemented for provider gke",
    }
    EtcdUpgrade() is not implemented for provider gke
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:127

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422064da0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422fe3a10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421e39730>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #32732 #35392 #36074

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc422fe26c0>: {
        s: "gave up waiting for pod 'write-pod-8hld6' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-8hld6' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:100
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422caec20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc422017400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"nfs-server\" not found",
            Reason: "NotFound",
            Details: {Name: "nfs-server", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "nfs-server" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:61

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:136
Expected error:
    <*errors.errorString | 0xc422f90de0>: {
        s: "gave up waiting for pod 'pvc-volume-tester-pngkf' to be 'success or failure' after 15m0s",
    }
    gave up waiting for pod 'pvc-volume-tester-pngkf' to be 'success or failure' after 15m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:232

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421284db0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc42272e460>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33891 #43360

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Apr 22 15:50:24.511: Could not reach HTTP service through 35.188.13.17:80 after 2s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2530

Issues about this test specifically: #26744 #26929 #38552
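
Note: the Restart failure gives the restored service only 2s to answer before declaring it unreachable, which can be tight right after a full node restart. A rough sketch of the kind of HTTP reachability poll involved is below; the address comes from the log, the intervals and timeouts are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForHTTP polls an endpoint until it answers with any HTTP status,
// or the overall timeout expires.
func waitForHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 1 * time.Second}
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		resp, err := client.Get(url)
		if err != nil {
			return false, nil // not reachable yet; keep polling
		}
		resp.Body.Close()
		return true, nil
	})
}

func main() {
	// 2s mirrors the failing check; a longer timeout after a node restart
	// would be more forgiving.
	if err := waitForHTTP("http://35.188.13.17:80", 2*time.Second); err != nil {
		fmt.Println("Could not reach HTTP service:", err)
	}
}
```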

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc42203a550>: {
        s: "gave up waiting for pod 'write-pod-8s76q' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-8s76q' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42076d170>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:547
Expected error:
    <*errors.errorString | 0xc422625680>: {
        s: "gave up waiting for pod 'write-pod-zxnpz' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-zxnpz' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420f9f860>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 22 15:48:40.634: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420b19120>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4213de000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34764

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:69
Expected error:
    <*errors.errorString | 0xc4216e7be0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.170+c2892866476422 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.170+c2892866476422 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:61

Failed: [k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421ac0140>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27738

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 160046957961
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282
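
Note: the raw numbers in this Density failure are nanoseconds: the measured value, 160046957961, is roughly 160s of pod startup latency against a 120s (120000000000 ns) ceiling. A tiny Gomega sketch of the same comparison is below, run standalone with an assumed fail handler so it prints instead of panicking:

```go
package main

import (
	"fmt"
	"time"

	"github.com/onsi/gomega"
)

func main() {
	// Route assertion failures to stdout so the sketch runs outside a test.
	gomega.RegisterFailHandler(func(message string, _ ...int) { fmt.Println(message) })

	measured := 160046957961 * time.Nanosecond // ~160s, the value in the log
	limit := 2 * time.Minute                   // 120000000000 ns

	// Produces the same "not to be >" comparison shown above.
	gomega.Expect(measured).NotTo(gomega.BeNumerically(">", limit))
}
```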

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4228b09e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421fb9dd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31317 #31457

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc422292b40>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.170+c2892866476422 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.170+c2892866476422 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:93
Apr 22 16:14:10.387: Test failed; at least one node failed to reboot in the time given.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:169

Issues about this test specifically: #33874

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4216de000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420b19ea0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422fe2750>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:108
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4213fee70>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:605
Expected error:
    <*errors.errorString | 0xc421f4aaa0>: {
        s: "gave up waiting for pod 'write-pod-mtmwv' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-mtmwv' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4222366a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4216e62e0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc421b21110>: {
        s: "gave up waiting for pod 'write-pod-gztgt' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-gztgt' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4215949a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
Apr 22 15:34:07.455: Failed to create node pool np-ssd: Err: exit status 1
ERROR: (gcloud.alpha.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:55

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4226f4d50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Expected error:
    <*errors.errorString | 0xc4203d0de0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc4216d35d0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.170+c2892866476422 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.170+c2892866476422 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2545/
Multiple broken tests:

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected
    <int>: 5
to equal
    <int>: 4
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:80

Issues about this test specifically: #34211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227f0510>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc4222b8490>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.172+28b47b5ebdf67d --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.172+28b47b5ebdf67d --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] [Feature:Federation] Federated Services DNS should be able to discover a federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421b09930>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4210181b0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f349e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 160042146186
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282
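
For reference, the durations in this assertion are nanoseconds: 160042146186 ns is roughly 160 s against a 120000000000 ns (120 s) budget, so pod startup overshot the limit by about 40 seconds; the 270 s and 260 s values in the later density failures read the same way.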

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4216bc2c0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33891 #43360

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42168ac00>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_perf.go:153
Apr 23 02:09:33.422: Failed parsing value bandwidth port from the string '3878957725
' as an integer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util_iperf.go:102
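
The string being parsed here ends with a newline ('3878957725\n'), which is enough to make an integer conversion fail; trimming whitespace in the iperf result parser (e.g. strings.TrimSpace before the conversion in util_iperf.go) would presumably avoid it, though that is an assumption about the harness code rather than something shown in this log.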

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420268200>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422b88b50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:77
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421cdba20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4222b8160>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4229ca9b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Apr 22 22:57:03.348: Pods on node gke-bootstrap-e2e-default-pool-f6aaf0d5-n928 are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 270066876105
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc423573300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"nfs-server\" not found",
            Reason: "NotFound",
            Details: {Name: "nfs-server", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "nfs-server" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:61

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 23 02:44:21.501: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420eefde0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc4209ec970>: {
        s: "gave up waiting for pod 'write-pod-ngw4s' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-ngw4s' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc4214fdb50>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.172+28b47b5ebdf67d --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.172+28b47b5ebdf67d --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4214046c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42170aa50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422d3cc90>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420dad540>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.StatusError | 0xc421833400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.72.1.4:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'\") has prevented the request from succeeding (post services test-deployment-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "test-deployment-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'EOF'\nTrying to reach: 'http://10.72.1.4:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.72.1.4:8080/ConsumeCPU?durationSec=30&millicores=10&requestSizeMillicores=20'") has prevented the request from succeeding (post services test-deployment-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:212

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4216c6fc0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203d2230>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4214d7230>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421732000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33793 #35108 #35744

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422b317c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421698000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2546/
Multiple broken tests:

Failed: [k8s.io] etcd uUpgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Expected error:
    <*errors.errorString | 0xc421994090>: {
        s: "EtcdUpgrade() is not implemented for provider gke",
    }
    EtcdUpgrade() is not implemented for provider gke
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:127

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421539240>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420d39800>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc4224f04a0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.172+28b47b5ebdf67d --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.172+28b47b5ebdf67d --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc420f36040>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 260060753706
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4215204a0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4234d27c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422164830>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
Apr 23 08:15:00.876: Failed to create node pool np-ssd: Err: exit status 1
ERROR: (gcloud.alpha.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:55

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc423534000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421c7b3a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc4223a5480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"nfs-server\" not found",
            Reason: "NotFound",
            Details: {Name: "nfs-server", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "nfs-server" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:61

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4218233c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:596
Expected error:
    <*errors.errorString | 0xc423534c60>: {
        s: "gave up waiting for pod 'write-pod-4gnr4' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-4gnr4' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420a36fb0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.errorString | 0xc4226e9e80>: {
        s: "couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.009516333s elapsed)",
    }
    couldn't find 3 nodes within 20s; last error: expected to find 3 nodes but found only 4 (20.009516333s elapsed)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:397

Issues about this test specifically: #37373

Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:385
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.42.48 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-2jscq exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e \"CREATE DATABASE IF NOT EXISTS foo;\"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n [] <nil> 0xc421ca4240 exit status 1 <nil> <nil> true [0xc421638020 0xc421638038 0xc421638050] [0xc421638020 0xc421638038 0xc421638050] [0xc421638030 0xc421638048] [0x9747f0 0x9747f0] 0xc421b6ae40 <nil>}:\nCommand stdout:\n\nstderr:\nError: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.42.48 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-2jscq exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
     [] <nil> 0xc421ca4240 exit status 1 <nil> <nil> true [0xc421638020 0xc421638038 0xc421638050] [0xc421638020 0xc421638038 0xc421638050] [0xc421638030 0xc421638048] [0x9747f0 0x9747f0] 0xc421b6ae40 <nil>}:
    Command stdout:
    
    stderr:
    Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077
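
The CockroachDB failure is the cockroach CLI refusing to open a SQL connection because it cannot find TLS certificates. For a test cluster that is intentionally run without certs, the equivalent manual invocation would pass --insecure (a sketch reusing the pod and namespace from the log above):

  $ kubectl --namespace=e2e-tests-statefulset-2jscq exec cockroachdb-0 -- \
      /cockroach/cockroach sql --insecure --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"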

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4225c82e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_perf.go:153
Apr 23 10:30:18.436: Failed parsing value bandwidth port from the string '1962682812
' as an integer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util_iperf.go:102

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42194ad50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc422306050>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33891 #43360

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc423533000>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc4229da1b0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.177+1235365aa69faf --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.177+1235365aa69faf --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f3e6e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420f0afc0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4216f43b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42194da20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422513820>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] CephFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:657
Expected error:
    <*errors.errorString | 0xc4203aad80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc422b10f80>: {
        s: "gave up waiting for pod 'write-pod-bw706' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-bw706' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422b11b10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422300580>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31317 #31457

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421cc2830>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 23 09:02:04.046: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42111a400>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203aad80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc42194c000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421c36ed0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420f37d10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:69
Expected error:
    <*errors.errorString | 0xc4219cead0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.177+1235365aa69faf --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.177+1235365aa69faf --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:61

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc4228483e0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.177+1235365aa69faf --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.177+1235365aa69faf --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75
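
Note: both node-upgrade failures are the same gcloud rejection; the empty parentheses in "bad desired node version ()" suggest the requested --cluster-version (a development build string) never resolved to a node version the staging endpoint knows about. One way to check which versions the zone will actually accept (a diagnostic sketch, not part of the test itself):

  $ gcloud container get-server-config --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f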

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2547/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc4203d2e90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4211cbde0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203d2e90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c11af0>: {
        s: "4 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-wzpk9                                 gke-bootstrap-e2e-default-pool-12a9ab76-zz52 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:15:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:15:20 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:15:00 -0700 PDT  }]\nkube-dns-autoscaler-395097547-21mfg                                gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:46 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:46 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-12a9ab76-g0g1            gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  }]\n",
    }
    4 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-wzpk9                                 gke-bootstrap-e2e-default-pool-12a9ab76-zz52 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:15:00 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:15:20 -0700 PDT ContainersNotReady containers with unready status: [heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:15:00 -0700 PDT  }]
    kube-dns-autoscaler-395097547-21mfg                                gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:46 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:52 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:14:46 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-12a9ab76-g0g1            gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:59 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-23 12:12:57 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:385
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-078tf exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e \"CREATE DATABASE IF NOT EXISTS foo;\"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n [] <nil> 0xc42123c750 exit status 1 <nil> <nil> true [0xc4200360a8 0xc4200360c0 0xc4200361a0] [0xc4200360a8 0xc4200360c0 0xc4200361a0] [0xc4200360b8 0xc420036198] [0x9747f0 0x9747f0] 0xc421223d40 <nil>}:\nCommand stdout:\n\nstderr:\nError: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.40 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-078tf exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
     [] <nil> 0xc42123c750 exit status 1 <nil> <nil> true [0xc4200360a8 0xc4200360c0 0xc4200361a0] [0xc4200360a8 0xc4200360c0 0xc4200361a0] [0xc4200360b8 0xc420036198] [0x9747f0 0x9747f0] 0xc421223d40 <nil>}:
    Command stdout:
    
    stderr:
    Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077
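
Note: the cockroach sql client aborts when it finds neither a CA certificate nor the --insecure flag, and the error's own hint names the missing flag. A sketch of the same in-pod command with the flag added (the --server/--kubeconfig/--namespace flags from the log are omitted for brevity):

  $ kubectl exec cockroachdb-0 -- /bin/sh -c \
      '/cockroach/cockroach sql --insecure --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"'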

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 18:25:27.538: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dc0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:59:14.359: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f7e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Expected error:
    <*errors.errorString | 0xc420f8c820>: {
        s: "failed running \"fallocate -l 18446744073557655552 test.img\": <nil> (exit code 1)",
    }
    failed running "fallocate -l 18446744073557655552 test.img": <nil> (exit code 1)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:248

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:22:22.493: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421676278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203d2e90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e56850>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 13:21:10.423: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420df5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 14:43:39.917: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c9c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37774

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:08:47.946: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b70278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:12:36.504: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216e4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] [Feature:Example] [k8s.io] Hazelcast should create and scale hazelcast {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:55:26.884: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42020b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27850 #30672 #33271

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d52bb0>: {
        s: "Namespace e2e-tests-events-cxxj3 is active",
    }
    Namespace e2e-tests-events-cxxj3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:15:47.607: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201cb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:46:51.184: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211bec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421e42a90>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4202244b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31317 #31457

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:36:17.140: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211f1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:36:38.159: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216d8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:108
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:32:54.044: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f5cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:22:16.104: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421274c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:15:22.065: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201ab678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:19:05.293: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421796c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 12:34:56.971: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420be2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 14:10:55.640: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420faac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4204c3a10>: {
        s: "Namespace e2e-tests-events-cxxj3 is active",
    }
    Namespace e2e-tests-events-cxxj3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Downward API volume should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:31:12.826: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f33678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 12:45:05.018: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b2b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 14:20:40.690: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214fe278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:24:15.041: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dac278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [Feature:Example] [k8s.io] CassandraStatefulSet should create statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 18:06:52.757: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421713678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36323 #36469 #38222

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 20:06:33.390: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216f5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #42724

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:15:09.307: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a3c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] ESIPP [Slow][Feature:ExternalTrafficLocalOnly] should work for type=LoadBalancer [Slow][Feature:ExternalTrafficLocalOnly] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 14:07:42.149: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bfd678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36389

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:55:59.394: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201cb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:29:25.778: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216ec278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 13:24:47.418: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421224278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:44:14.360: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421579678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26544 #26938 #27595 #30146 #30469 #31374 #31427 #31433 #31589 #31981 #32257 #33711 #33839 #36547 #37111 #37470

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42011ad40>: {
        s: "Namespace e2e-tests-events-cxxj3 is active",
    }
    Namespace e2e-tests-events-cxxj3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4212ba930>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] volume on tmpfs should have the correct mode using FSGroup {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:27:59.564: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421538278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 13:04:43.319: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a6cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:28:45.430: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f81678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ESIPP [Slow][Feature:ExternalTrafficLocalOnly] should work for type=NodePort [Slow][Feature:ExternalTrafficLocalOnly] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 12:48:18.575: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212dcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37511

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4206e68a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected
    <int>: 2
to equal
    <int>: 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:80

Issues about this test specifically: #33891 #43360

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420941ae0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:27:37.185: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c20c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Opaque resources [Feature:OpaqueResources] should schedule pods that do consume opaque integer resources. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:32:04.713: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d68c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:18:35.428: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a0d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:31:03.568: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222ed678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] CephFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:657
Expected error:
    <*errors.errorString | 0xc4203d2e90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:50:19.669: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421412278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:38:52.310: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216a7678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 18:22:08.163: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c88c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:59:06.776: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f7c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:51:21.520: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223a3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc4216e3ee0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.181+35159f9c45cb33 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.181+35159f9c45cb33 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4206e7570>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 13:28:10.857: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213b0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 14:14:12.361: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213b4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 20:03:18.961: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42096cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 12:28:30.342: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f22c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4214115f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 12:38:09.584: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208ea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:47:41.172: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216f4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28346

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc42163e910>: {
        s: "gave up waiting for pod 'write-pod-jsmbh' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-jsmbh' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 13:31:37.913: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421520278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc420c04560>: {
        s: "Only 153 pods started out of 200",
    }
    Only 153 pods started out of 200
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:199

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected
    <int>: 2
to equal
    <int>: 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:80

Issues about this test specifically: #33793 #35108 #35744

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:18:22.649: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421388c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216e2bf0>: {
        s: "Namespace e2e-tests-events-cxxj3 is active",
    }
    Namespace e2e-tests-events-cxxj3 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:24:45.509: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421761678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:19:01.207: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421236c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 23 14:47:28.924: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Opaque resources [Feature:OpaqueResources] should account opaque integer resources in pods with multiple containers. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:15:27.420: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e0c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 17:34:24.511: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42200f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:44:51.642: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421db1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 15:12:02.088: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f3ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 12:31:43.668: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ffe278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 14:17:27.354: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213ae278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 16:35:32.987: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210d2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 18:11:13.763: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421944c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc420e56f00>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
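
The restart test waits for each node's boot ID to change after the reboot and timed out on the node named above. A hedged way to check the boot ID by hand (assumes credentials for the bootstrap-e2e cluster are loaded; the value should differ before and after a successful reboot):

  $ kubectl get node gke-bootstrap-e2e-default-pool-12a9ab76-g0g1 \
      -o jsonpath='{.status.nodeInfo.bootID}'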

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421e42f30>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211
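
All of the [Feature:Federation] failures below reduce to the same setup problem: the kubeconfig used by the suite has no usable federation-cluster context (no cluster entry with a server). A sketch of the entries the framework expects, with a placeholder federation API server address and credentials since the real endpoint is not in this log:

  $ kubectl config set-cluster federation-cluster --server=https://<federation-apiserver>   # placeholder address
  $ kubectl config set-credentials federation-cluster --token=<token>                        # placeholder credentials
  $ kubectl config set-context federation-cluster --cluster=federation-cluster --user=federation-cluster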

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 23 19:48:08.190: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42162e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected
    <int>: 2
to equal
    <int>: 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:80

Issues about this test specifically: #34211

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2548/
Multiple broken tests:

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421c5eba0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc421906100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server rejected our request for an unknown reason (post services ir-0-ctrl)",
            Reason: "BadRequest",
            Details: {
                Name: "ir-0-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "incorrect function argument",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 400,
        },
    }
    the server rejected our request for an unknown reason (post services ir-0-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:117
Expected error:
    <*errors.errorString | 0xc42144db40>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.183+6b496bcb2e0f76 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.183+6b496bcb2e0f76 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:107
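
This upgrade failure means GKE rejected the requested --cluster-version because the master can only move to the latest allowed version. A hedged way to see what the endpoint will accept and retry with one of those versions (project and zone copied from the error above; the version placeholder is illustrative):

  $ gcloud container get-server-config --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f
  # retry with a version listed under validMasterVersions, e.g.:
  $ gcloud container clusters upgrade bootstrap-e2e --master --cluster-version=<allowed-version> \
      --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f --quiet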

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4216339f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e37200>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421278940>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e03080>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421329d60>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:547
Expected error:
    <*errors.errorString | 0xc4211ebf50>: {
        s: "gave up waiting for pod 'write-pod-grctv' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-grctv' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371
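
The [Flaky] PersistentVolumes cases in this run all time out waiting for a write pod to reach success or failure, which usually means the pod never attached or mounted its PV. A minimal triage sketch (the e2e test namespace is not recorded in this log, so it is a placeholder):

  $ kubectl get pod write-pod-grctv -n <e2e-test-namespace> -o wide     # check phase and node assignment
  $ kubectl describe pod write-pod-grctv -n <e2e-test-namespace>        # look for attach/mount events
  $ kubectl get events -n <e2e-test-namespace> --sort-by=.lastTimestamp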

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostIPC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:104
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:641
Expected error:
    <*errors.errorString | 0xc4211eb2f0>: {
        s: "Only 238 pods started out of 300",
    }
    Only 238 pods started out of 300
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:199

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2549/
Multiple broken tests:

Failed: [k8s.io] Federation replicasets [Feature:Federation] ReplicaSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4222d8ae0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421ec6360>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33891 #43360
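
Each "autoscaler not enabled" failure in this run is the suite detecting that the node pool has no autoscaling configured, so the [Feature:ClusterSizeAutoscaling*] tests bail out before exercising anything. A hedged sketch of enabling it on the test cluster (min/max counts are illustrative, and the flag set may have required the beta release track at the time):

  $ gcloud container clusters update bootstrap-e2e --zone=us-central1-f \
      --node-pool=default-pool --enable-autoscaling --min-nodes=3 --max-nodes=6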

Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:121
Apr 24 10:02:45.794: Test failed; at least one node failed to reboot in the time given.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:169

Issues about this test specifically: #33405

Failed: [k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4213a19b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27738

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4216da650>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420785160>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4210fbe80>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4220b0700>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4221ca3e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420b57ed0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Initial Resources [Feature:InitialResources] [Flaky] should set initial resources based on historical data {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/initial_resources.go:51
Expected error:
    <*errors.StatusError | 0xc42224ca00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server rejected our request for an unknown reason (post services ir-0-ctrl)",
            Reason: "BadRequest",
            Details: {
                Name: "ir-0-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "incorrect function argument",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 400,
        },
    }
    the server rejected our request for an unknown reason (post services ir-0-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203ab330>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4221cb380>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:69
Expected error:
    <*errors.errorString | 0xc4212e1540>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.189+ac90c0e45c8766 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.189+ac90c0e45c8766 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:61

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc422128000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34764

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:614
Expected error:
    <*errors.errorString | 0xc421da4c30>: {
        s: "gave up waiting for pod 'write-pod-2jhxw' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-2jhxw' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421b0e040>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4221557a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4211e4db0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421f0c000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33793 #35108 #35744

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421303260>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:596
Expected error:
    <*errors.errorString | 0xc42312e030>: {
        s: "gave up waiting for pod 'write-pod-sf0d3' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-sf0d3' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc421f02430>: {
        s: "gave up waiting for pod 'write-pod-qjdgm' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-qjdgm' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
Apr 24 06:11:53.062: Failed to create node pool np-ssd: Err: exit status 1
ERROR: (gcloud.alpha.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:55
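
The node-pool creation here failed only because no compute zone was set for the gcloud invocation. Following the hint in the error itself, a hedged fix before re-running the test (zone copied from the other gcloud calls in this run):

  $ gcloud config set compute/zone us-central1-f
  # or: export CLOUDSDK_COMPUTE_ZONE=us-central1-f before invoking the suite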

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:563
Expected error:
    <*errors.errorString | 0xc4221281c0>: {
        s: "gave up waiting for pod 'write-pod-tnvlt' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-tnvlt' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421e34460>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420d75220>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421071d90>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc420cf8880>: {
        s: "gave up waiting for pod 'write-pod-tlfqc' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-tlfqc' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc42140a140>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.193+7a09f8605f7555 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.193+7a09f8605f7555 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:385
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.50.234 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-rs8jw exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e \"CREATE DATABASE IF NOT EXISTS foo;\"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n [] <nil> 0xc422862f30 exit status 1 <nil> <nil> true [0xc4200aa358 0xc4200aa378 0xc4200aa398] [0xc4200aa358 0xc4200aa378 0xc4200aa398] [0xc4200aa368 0xc4200aa390] [0x9747f0 0x9747f0] 0xc422870ea0 <nil>}:\nCommand stdout:\n\nstderr:\nError: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.50.234 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-rs8jw exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
     [] <nil> 0xc422862f30 exit status 1 <nil> <nil> true [0xc4200aa358 0xc4200aa378 0xc4200aa398] [0xc4200aa358 0xc4200aa378 0xc4200aa398] [0xc4200aa368 0xc4200aa390] [0x9747f0 0x9747f0] 0xc422870ea0 <nil>}:
    Command stdout:
    
    stderr:
    Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077
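
The CockroachDB StatefulSet test fails because the cockroach sql client defaults to secure mode and finds no CA certificate inside the pod. Assuming the test cluster is actually started in insecure mode, which the error message hints at, the same statement would be expected to work with --insecure added, e.g.:

  $ kubectl --namespace=e2e-tests-statefulset-rs8jw exec cockroachdb-0 -- /bin/sh -c \
      '/cockroach/cockroach sql --insecure --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"'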

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e3c090>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 160044601736
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f8cb70>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421456000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422095330>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4217c2870>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 24 06:34:03.344: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420d22730>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421eda7c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2550/
Multiple broken tests:

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422210010>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 160048124521
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:516
Expected error:
    <*errors.errorString | 0xc4203882f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227f95a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31317 #31457

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc4210f44d0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.219+6236dfb594563e --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.219+6236dfb594563e --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421cb6000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420987f20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] ReplicaSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42156ee80>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4207425f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4221ff470>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] CephFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:657
Expected error:
    <*errors.errorString | 0xc4203882f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:385
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.58.72 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-w827v exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e \"CREATE DATABASE IF NOT EXISTS foo;\"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n [] <nil> 0xc4226bdcb0 exit status 1 <nil> <nil> true [0xc4202ba2b0 0xc4202ba2c8 0xc4202ba2f0] [0xc4202ba2b0 0xc4202ba2c8 0xc4202ba2f0] [0xc4202ba2c0 0xc4202ba2e8] [0x9747f0 0x9747f0] 0xc4212e1980 <nil>}:\nCommand stdout:\n\nstderr:\nError: problem using security settings, did you mean to use --insecure?: no CA certificate found\nFailed running \"sql\"\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.58.72 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-statefulset-w827v exec cockroachdb-0 -- /bin/sh -c /cockroach/cockroach sql --host cockroachdb-0.cockroachdb -e "CREATE DATABASE IF NOT EXISTS foo;"] []  <nil>  Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
     [] <nil> 0xc4226bdcb0 exit status 1 <nil> <nil> true [0xc4202ba2b0 0xc4202ba2c8 0xc4202ba2f0] [0xc4202ba2b0 0xc4202ba2c8 0xc4202ba2f0] [0xc4202ba2c0 0xc4202ba2e8] [0x9747f0 0x9747f0] 0xc4212e1980 <nil>}:
    Command stdout:
    
    stderr:
    Error: problem using security settings, did you mean to use --insecure?: no CA certificate found
    Failed running "sql"
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422bcdb20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:108
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4215a87c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #31825 #36088

Failed: [k8s.io] Initial Resources [Feature:InitialResources] [Flaky] should set initial resources based on historical data {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/initial_resources.go:51
Expected error:
    <*errors.StatusError | 0xc422c87700>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server rejected our request for an unknown reason (post services ir-0-ctrl)",
            Reason: "BadRequest",
            Details: {
                Name: "ir-0-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "incorrect function argument",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 400,
        },
    }
    the server rejected our request for an unknown reason (post services ir-0-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:243

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421eb0060>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422d34880>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has no authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4236e3490>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc420d52020>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421e933a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:501
Apr 24 17:32:02.516: Pod test-pod did not start running: pods "" not found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:456

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422d20d50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4210fc190>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33891 #43360

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420d64340>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4218bb1f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e83030>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420e0ed80>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27738

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422908ca0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420b51ec0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS should be able to discover a federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42171e2a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420185dd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_perf.go:153
Apr 24 18:01:22.635: Failed parsing value bandwidth port from the string '18650467598
' as an integer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util_iperf.go:102
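
The bandwidth figure itself looks valid; a likely cause is that the parsed CSV field still carries a trailing newline (visible in the quoted string above), which Go's strconv will not accept. A small standard-library sketch of that behavior, using the value copied from the log (the test's own parsing helper in util_iperf.go is not reproduced here):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Raw field as reported in the failure, including the trailing newline.
	raw := "18650467598\n"

	if _, err := strconv.Atoi(raw); err != nil {
		// Reproduces the symptom: Atoi rejects strings with surrounding whitespace.
		fmt.Println("parse failed:", err)
	}

	// Trimming first parses cleanly (the value needs a 64-bit int, which Atoi
	// provides on the linux/amd64 builds this job runs).
	n, err := strconv.Atoi(strings.TrimSpace(raw))
	if err != nil {
		fmt.Println("unexpected:", err)
		return
	}
	fmt.Println("bandwidth:", n)
}
```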

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420b82820>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227f8e50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: [k8s.io] Opaque resources [Feature:OpaqueResources] should not schedule pods that exceed the available amount of opaque integer resource. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:62
Expected error:
    <*errors.errorString | 0xc4203882f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/opaque_resource.go:256

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4227f8ca0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc421fba180>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.221+ed539fb76f431c --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.221+ed539fb76f431c --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc4228e7430>: {
        s: "gave up waiting for pod 'write-pod-p9hpc' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-p9hpc' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.errorString | 0xc4219020f0>: {
        s: "gave up waiting for pod 'write-pod-r0q3l' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-r0q3l' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42154f230>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #32732 #35392 #36074

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc422038030>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2551/
Multiple broken tests:

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203aab60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:143

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:177
Expected error:
    <*errors.errorString | 0xc4203aab60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:143

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:52:04.150: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210c8a00), (*api.Node)(0xc4210c8c78), (*api.Node)(0xc4210c8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203aab60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4206fab10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420f8ceb0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 00:32:12.844: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214c2a00), (*api.Node)(0xc4214c2c78), (*api.Node)(0xc4214c2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27957

Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:99
Apr 24 20:42:00.132: Test failed; at least one node failed to reboot in the time given.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:169

Issues about this test specifically: #33882 #35316

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:189
Expected error:
    <*errors.errorString | 0xc4203aab60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:176

Failed: [k8s.io] Garbage collector [Feature:GarbageCollector] should orphan pods created by rc if delete options say so {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 00:44:21.721: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42110b400), (*api.Node)(0xc42110b678), (*api.Node)(0xc42110b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35771

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc4214562b0>: {
        s: "expected pod \"pod-secrets-3108a1c3-2972-11e7-a3cf-0242ac110009\" success: gave up waiting for pod 'pod-secrets-3108a1c3-2972-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-3108a1c3-2972-11e7-a3cf-0242ac110009" success: gave up waiting for pod 'pod-secrets-3108a1c3-2972-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:116
Did not get a good sample size: []
Less than two runs succeeded; aborting.
Not all RC/pod/service trials succeeded: Only 0 pods started out of 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_latency.go:87

Issues about this test specifically: #30632

Failed: [k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:52
Expected error:
    <*errors.errorString | 0xc42164b7f0>: {
        s: "expected pod \"pod-8e1fb428-2982-11e7-a3cf-0242ac110009\" success: gave up waiting for pod 'pod-8e1fb428-2982-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-8e1fb428-2982-11e7-a3cf-0242ac110009" success: gave up waiting for pod 'pod-8e1fb428-2982-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420fa3000>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:47
Expected error:
    <*errors.errorString | 0xc42011d120>: {
        s: "expected pod \"pod-configmaps-3b590b6d-298c-11e7-a3cf-0242ac110009\" success: gave up waiting for pod 'pod-configmaps-3b590b6d-298c-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-3b590b6d-298c-11e7-a3cf-0242ac110009" success: gave up waiting for pod 'pod-configmaps-3b590b6d-298c-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 0
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 24 21:01:24.287: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f8a00), (*api.Node)(0xc4200f8c78), (*api.Node)(0xc4200f8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc42161e8f0>: {
        s: "expected pod \"pod-3be5fd38-299c-11e7-a3cf-0242ac110009\" success: gave up waiting for pod 'pod-3be5fd38-299c-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-3be5fd38-299c-11e7-a3cf-0242ac110009" success: gave up waiting for pod 'pod-3be5fd38-299c-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #30851

Failed: [k8s.io] etcd uUpgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Apr 25 03:20:50.324: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc4209e57e0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:216
Expected error:
    <*errors.errorString | 0xc4203aab60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:109

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc42164bcc0>: {
        s: "expected pod \"pod-35405453-2999-11e7-a3cf-0242ac110009\" success: gave up waiting for pod 'pod-35405453-2999-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-35405453-2999-11e7-a3cf-0242ac110009" success: gave up waiting for pod 'pod-35405453-2999-11e7-a3cf-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc4216d0010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4204be170>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 24 21:04:39.548: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42065aa00), (*api.Node)(0xc42065ac78), (*api.Node)(0xc42065aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1280
Expected error:
    <*errors.errorString | 0xc4216d0140>: {
        s: "timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.126.50 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-wjc06 run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4210875c0 Waiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is 
Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod 
ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: 
false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\nWaiting 
for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\n[the same \"Waiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\" line repeats for the rest of the wait]\n  [] <nil> 0xc4216fd5c0 signal: killed <nil> <nil> true [0xc420454f18 0xc420454f40 0xc420454f50] [0xc420454f18 0xc420454f40 0xc420454f50] [0xc420454f20 0xc420454f38 0xc420454f48] [0x9746f0 0x9747f0 0x9747f0] 0xc421af9ce0 <nil>}:\nCommand stdout:\nWaiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false\n[the same line repeats]\n\nstderr:\n\n",
    }
    timed out waiting for command &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.126.50 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-wjc06 run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4210875c0 Waiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false
    [the same "Waiting for pod e2e-tests-kubectl-wjc06/e2e-test-rm-busybox-job-57vxc to be running, status is Pending, pod ready: false" line repeats for the remainder of the wait; output truncated in the original report]
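
For anyone triaging a pod stuck in Pending like the one above, a minimal sketch of the usual follow-up; the namespace and pod name are copied from the log, and the commands themselves are illustrative rather than part of the test run:

    # Show the scheduling / image-pull events recorded for the stuck pod.
    kubectl --namespace=e2e-tests-kubectl-wjc06 describe pod e2e-test-rm-busybox-job-57vxc
    # Recent events in the namespace, sorted by last timestamp.
    kubectl --namespace=e2e-tests-kubectl-wjc06 get events --sort-by=.lastTimestamp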

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2552/
Multiple broken tests:

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc42321a150>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33793 #35108 #35744
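
The "autoscaler not enabled" failures in this run all indicate that the node pool in the test cluster was brought up without autoscaling. A minimal sketch of enabling it by hand on GKE, assuming the project/zone/cluster from this job; the pool name and node bounds are placeholders:

    # Illustrative only; pool name and min/max bounds are placeholders.
    gcloud container clusters update bootstrap-e2e \
        --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f \
        --node-pool=default-pool --enable-autoscaling --min-nodes=1 --max-nodes=5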

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421275810>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211
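
Every federation failure in this run shares the same root cause: the kubeconfig the suite loads has no usable federation-cluster context (no server defined). A quick way to confirm, assuming the /workspace/.kube/config path used elsewhere in these runs; the commands are illustrative:

    # List contexts and cluster names known to the kubeconfig; federation-cluster should appear with a server set.
    kubectl --kubeconfig=/workspace/.kube/config config get-contexts
    kubectl --kubeconfig=/workspace/.kube/config config view -o jsonpath='{.clusters[*].name}'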

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422726040>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:117
Expected error:
    <*errors.errorString | 0xc421f9e370>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.263+51fe7d2ba159e4 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.263+51fe7d2ba159e4 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:107
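
The master-upgrade failures are GKE rejecting the requested 1.7.0-alpha build: the API only upgrades the master to versions it currently advertises. A hedged sketch for listing what the zone will accept, using the project and zone from the error message:

    # List the master and node versions GKE currently offers in this zone.
    gcloud container get-server-config --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f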

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc4210dc620>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.257+3e05c8f87ffa2f --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.257+3e05c8f87ffa2f --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75
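
The node-upgrade variant fails with "bad desired node version ()", suggesting GKE did not receive a node version it recognizes. For comparison, a manually issued node-pool upgrade pinned to a version the zone actually advertises would look roughly like this; the pool name and version string are placeholders:

    # Illustrative only; use a version reported by get-server-config above.
    gcloud container clusters upgrade bootstrap-e2e \
        --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f \
        --node-pool=default-pool --cluster-version=1.6.2 --quiet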

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422426a50>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421da5130>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421b3c000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34764

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc421935160>: {
        s: "failed to wait for pods running: [pods \"\" not found]",
    }
    failed to wait for pods running: [pods "" not found]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:371

Issues about this test specifically: #27232

Failed: [k8s.io] etcd Upgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Expected error:
    <*errors.errorString | 0xc4213c21c0>: {
        s: "EtcdUpgrade() is not implemented for provider gke",
    }
    EtcdUpgrade() is not implemented for provider gke
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:127

Failed: [k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_perf.go:153
Apr 25 11:42:26.474: Failed parsing value bandwidth port from the string '2108324958
' as an integer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util_iperf.go:102

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 25 05:11:07.487: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420706540>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42186a280>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42186a750>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4223d30a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc420c84020>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34581 #43099

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:100
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:555
Expected error:
    <*errors.errorString | 0xc420f1a000>: {
        s: "gave up waiting for pod 'write-pod-f9zh1' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-f9zh1' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371
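
The PersistentVolumes failures in this run are all the same symptom: the write pod never completes within the 5m0s wait. A minimal triage sketch; the pod name is copied from the error above, and the namespace is a placeholder since the report does not show it:

    # Placeholders: substitute the real e2e test namespace.
    kubectl --namespace=<pv-test-namespace> get pv,pvc
    kubectl --namespace=<pv-test-namespace> describe pod write-pod-f9zh1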

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc42195f530>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.263+51fe7d2ba159e4 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.263+51fe7d2ba159e4 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203c4d40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421066190>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422414a20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421f386b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421ba19f0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422632bb0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27738

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420dde140>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc4231f6e60>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.270+cd380b580b62a9 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.270+cd380b580b62a9 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420bf5a00>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 160046596146
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc42323e000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc422490ad0>: {
        s: "gave up waiting for pod 'write-pod-dlgvj' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-dlgvj' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:563
Expected error:
    <*errors.errorString | 0xc4224907f0>: {
        s: "gave up waiting for pod 'write-pod-txbw4' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-txbw4' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421b54200>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] ReplicaSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4212ee490>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422432d40>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422433750>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:69
Expected error:
    <*errors.errorString | 0xc4221f6ed0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.270+cd380b580b62a9 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.270+cd380b580b62a9 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:61

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421b38020>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458
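
The autoscaling specs never get to exercise scaling: they bail out in setup with "autoscaler not enabled", i.e. the cluster under test has no autoscaled node pool. For reference only (cluster, pool, and bounds here are illustrative, not this job's actual configuration), enabling autoscaling on a GKE node pool looks roughly like:

  $ gcloud container clusters update bootstrap-e2e --project=k8s-jkns-e2e-gci-gke-staging \
      --zone=us-central1-f --node-pool=default-pool \
      --enable-autoscaling --min-nodes=1 --max-nodes=5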

Failed: [k8s.io] [Feature:Federation] Federated Services DNS should be able to discover a federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42156b580>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation events [Feature:Federation] Event objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420c84580>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #30644 #30831

Failed: [k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:596
Expected error:
    <*errors.errorString | 0xc4231f7970>: {
        s: "gave up waiting for pod 'write-pod-x6cqd' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-x6cqd' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371
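
The PersistentVolumes failure is a five-minute timeout waiting for the writer pod, which typically means the volume never attached or mounted rather than that the write itself failed. If it reproduces, a first triage pass (the pod name is from this run; the namespace placeholder has to be filled in from the test output) would be:

  $ kubectl get pv,pvc --all-namespaces
  # Check Events for attach/mount errors; TEST_NAMESPACE is the e2e-tests-* namespace from the run.
  $ kubectl describe pod write-pod-x6cqd -n TEST_NAMESPACE
  $ kubectl get events -n TEST_NAMESPACE --sort-by=.lastTimestamp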

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42076f040>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42156bc20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
Apr 25 07:35:33.905: Failed to create node pool np-ssd: Err: exit status 1
ERROR: (gcloud.alpha.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:55
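
Both GKE node-pool specs (local SSD above, multiple node pools below) fail the same way: the gcloud call issued by the test has no compute/zone configured, so the pool is never created. The error text already names the remedies; concretely, with the zone this job uses for its other gcloud calls, any of the following would satisfy gcloud (shown for illustration, not as the harness's actual fix):

  # Persist the zone in the active gcloud configuration...
  $ gcloud config set compute/zone us-central1-f
  # ...or export it just for the test process...
  $ export CLOUDSDK_COMPUTE_ZONE=us-central1-f
  # ...or pass it per command.
  $ gcloud container node-pools create test-pool --cluster=bootstrap-e2e --zone=us-central1-f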

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 260063851934
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc4203c4d40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36288 #36913

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
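
The Test {e2e.go} entry is not an independent failure; it only reflects the non-zero exit status of the ginkgo run that produced the failures above. To chase a single spec locally, the same wrapper should accept extra ginkgo flags, for example (the focus regex is illustrative):

  $ ./hack/ginkgo-e2e.sh --ginkgo.focus="GKE node pools" --ginkgo.skip=NFSv3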

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422584650>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 25 08:41:19.198: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2553/
Multiple broken tests:

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422077990>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421191d20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] [Feature:Federation] Federated Services DNS should be able to discover a federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421047450>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:39
Apr 25 16:45:52.121: Failed to create node pool "test-pool". Err: exit status 1
ERROR: (gcloud.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:53

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421dea280>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
Apr 25 17:48:40.084: Failed to create node pool np-ssd: Err: exit status 1
ERROR: (gcloud.alpha.container.node-pools.create) The required property [zone] is not currently set.
It can be set on a per-command basis by re-running your command with the [--zone] flag.

You may set it for your current workspace by running:

  $ gcloud config set compute/zone VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_COMPUTE_ZONE]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:55

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc420dd1250>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #28041

Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected
    <int>: 3
to equal
    <int>: 4
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:80

Issues about this test specifically: #34764

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:539
Expected error:
    <*errors.errorString | 0xc421509530>: {
        s: "gave up waiting for pod 'write-pod-lbxqd' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-lbxqd' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Expected error:
    <*errors.errorString | 0xc42256bfa0>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.308+21f30db4c68eae --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.308+21f30db4c68eae --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:45

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42299f120>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42198e190>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421508000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33897 #37458

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422537530>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc42028c120>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.290+fb72285a78bbc5 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --master --cluster-version=1.7.0-alpha.2.290+fb72285a78bbc5 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=Cluster master can only be upgraded to the latest allowed version. This does not upgrade the nodes..\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91

Issues about this test specifically: #38172

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4215096c0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation secrets [Feature:Federation] Secret objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421098cd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422077400>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc421906000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33891 #43360

Failed: [k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42124bcd0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] etcd uUpgrade [Feature:EtcdUpgrade] [k8s.io] etcd upgrade should maintain a functioning cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:135
Expected error:
    <*errors.errorString | 0xc4210189a0>: {
        s: "EtcdUpgrade() is not implemented for provider gke",
    }
    EtcdUpgrade() is not implemented for provider gke
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:127

Failed: [k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:591
Expected error:
    <*errors.errorString | 0xc4203d1470>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:254

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42128a9d0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:547
Expected error:
    <*errors.errorString | 0xc421019000>: {
        s: "gave up waiting for pod 'write-pod-t0k5z' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'write-pod-t0k5z' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:371

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:83
Expected error:
    <*errors.errorString | 0xc421098250>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.298+708d30a8d10d36 --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.298+708d30a8d10d36 --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75

Failed: [k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostIPC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:104
Expected
    <string>: hello
    
not to contain substring
    <string>: hello
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/security_context.go:233

Failed: [k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421509c10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #27739

Failed: [k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421854260>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421e093b0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:205
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Federation namespace [Feature:Federation] Namespace objects should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4210461e0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:218
Expected error:
    <*errors.StatusError | 0xc4226a0b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"nfs-server\" not found",
            Reason: "NotFound",
            Details: {Name: "nfs-server", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "nfs-server" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:61

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4211964a0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33804

Failed: [k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:78
Apr 25 20:26:34.937: Failed to GC pods within 2m0s, 1000 pods remaining, error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pod_gc.go:76

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421d865a0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:86
Node gke-bootstrap-e2e-default-pool-8471c34d-ts56 did not run out of disk within 5m0s
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/nodeoutofdisk.go:251

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:69
Expected error:
    <*errors.errorString | 0xc420e34150>: {
        s: "error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.308+21f30db4c68eae --quiet]; got error exit status 1, stdout \"\", stderr \"ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\\n\"",
    }
    error running gcloud [container clusters --project=k8s-jkns-e2e-gci-gke-staging --zone=us-central1-f upgrade bootstrap-e2e --cluster-version=1.7.0-alpha.2.308+21f30db4c68eae --quiet]; got error exit status 1, stdout "", stderr "ERROR: (gcloud.container.clusters.upgrade) ResponseError: code=400, message=bad desired node version ().\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:61

Failed: [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422a16a20>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #32732 #35392 #36074

Failed: [k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:302
Expected
    <time.Duration>: 160054471440
not to be >
    <time.Duration>: 120000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/density.go:282

Failed: [k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421debee0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation events [Feature:Federation] Event objects should be created and deleted successfully {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421114410>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Issues about this test specifically: #30644 #30831

Failed: [k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42156c460>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc4217f4000>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #33754

Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:90
Expected error:
    <*errors.errorString | 0xc420e342f0>: {
        s: "autoscaler not enabled",
    }
    autoscaler not enabled
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:87

Issues about this test specifically: #34102

Failed: [k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc421866c10>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

Failed: [k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc422036cb0>: {
        s: "error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]",
    }
    error creating federation client config: invalid configuration: [context was not found for specified context: federation-cluster, cluster has no server defined]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:211

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2567/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4229d2e00>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
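
Every SchedulerPredicates failure in this run is the same precondition error: a namespace leaked by an earlier spec (e2e-tests-services-fj04w) was still active, so the [Serial] suite refused to start. If it recurs, checking what is pinning the namespace is usually enough (names below are from this run):

  # Inspect status.phase and any finalizers holding the namespace open.
  $ kubectl get namespace e2e-tests-services-fj04w -o yaml
  # List leftover objects that may be blocking deletion.
  $ kubectl get all -n e2e-tests-services-fj04w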

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42351d800>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d08710>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42299dd00>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc422317d10>: {
        Op: "read",
        Net: "tcp",
        Source: {IP: [172, 17, 0, 2], Port: 40615, Zone: ""},
        Addr: {IP: "#\xbcQ\xe9", Port: 443, Zone: ""},
        Err: {Syscall: "read", Err: 0x68},
    }
    read tcp 172.17.0.2:40615->35.188.81.233:443: read: connection reset by peer
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1780

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422912f10>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42263d0c0>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42220ab40>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.81.233 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-43hb4] []  0xc420f70de0  warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nError from server (InternalError): error when stopping \"STDIN\": an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-43hb4/deployments/frontend\\\"\") has prevented the request from succeeding (get deployments.extensions frontend)\n [] <nil> 0xc42129a720 exit status 1 <nil> <nil> true [0xc42068c318 0xc42068c350 0xc42068c368] [0xc42068c318 0xc42068c350 0xc42068c368] [0xc42068c320 0xc42068c340 0xc42068c358] [0x9746f0 0x9747f0 0x9747f0] 0xc421e75ec0 <nil>}:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nError from server (InternalError): error when stopping \"STDIN\": an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-43hb4/deployments/frontend\\\"\") has prevented the request from succeeding (get deployments.extensions frontend)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.81.233 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-43hb4] []  0xc420f70de0  warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    Error from server (InternalError): error when stopping "STDIN": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-43hb4/deployments/frontend\"") has prevented the request from succeeding (get deployments.extensions frontend)
     [] <nil> 0xc42129a720 exit status 1 <nil> <nil> true [0xc42068c318 0xc42068c350 0xc42068c368] [0xc42068c318 0xc42068c350 0xc42068c368] [0xc42068c320 0xc42068c340 0xc42068c358] [0x9746f0 0x9747f0 0x9747f0] 0xc421e75ec0 <nil>}:
    Command stdout:
    
    stderr:
    warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    Error from server (InternalError): error when stopping "STDIN": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-43hb4/deployments/frontend\"") has prevented the request from succeeding (get deployments.extensions frontend)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2077

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422921250>: {
        s: "Namespace e2e-tests-services-fj04w is active",
    }
    Namespace e2e-tests-services-fj04w is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2568/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34250

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33285

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Apr 28 04:49:45.700: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101
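
The DNS and Granular Checks failures in this run all time out in the shared networking utilities, which points at cluster DNS or pod networking rather than at the individual specs. A hedged first check, assuming the usual kube-dns deployment and container names in kube-system:

  $ kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
  # Substitute a pod name from the previous command.
  $ kubectl logs KUBE_DNS_POD -n kube-system -c kubedns --tail=100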

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Apr 28 01:38:37.423: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Apr 28 05:48:47.395: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Apr 28 00:19:14.890: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Apr 28 01:09:08.041: Pods on node gke-bootstrap-e2e-default-pool-51243d3d-0ssg are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #34104

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Apr 28 02:00:28.275: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:69
Expected error:
    <*errors.errorString | 0xc42163e1c0>: {
        s: "Error while waiting for Deployment kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels \"k8s-app=kubernetes-dashboard\" to be running",
    }
    Error while waiting for Deployment kubernetes-dashboard pods to be running: Timeout while waiting for pods with labels "k8s-app=kubernetes-dashboard" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr 28 01:06:22.752: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4211b31c0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-2595h             gke-bootstrap-e2e-default-pool-51243d3d-0ssg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\nkube-dns-autoscaler-395097547-2w7h8   gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:21 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\nkubernetes-dashboard-3543765157-rbbcq gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:16 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-2595h             gke-bootstrap-e2e-default-pool-51243d3d-0ssg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    kube-dns-autoscaler-395097547-2w7h8   gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:21 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    kubernetes-dashboard-3543765157-rbbcq gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:16 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr 28 06:00:44.205: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1582

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b70fd0>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-2595h             gke-bootstrap-e2e-default-pool-51243d3d-0ssg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\nkube-dns-autoscaler-395097547-2w7h8   gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:21 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\nkubernetes-dashboard-3543765157-rbbcq gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:16 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-2595h             gke-bootstrap-e2e-default-pool-51243d3d-0ssg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    kube-dns-autoscaler-395097547-2w7h8   gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:21 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    kubernetes-dashboard-3543765157-rbbcq gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:16 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019
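
The SchedulerPredicates and Resize failures in this run share the same root symptom: the framework refuses to start while kube-system pods are unready, and heapster, kube-dns and the dashboard sat Pending for hours. A first-pass triage sketch (pod name copied from the dump above; the commands are generic, not what the test itself runs):

    # See which add-on pods are stuck and where they were scheduled.
    kubectl get pods -n kube-system -o wide
    # Events on one of the Pending pods named above usually explain the hold-up
    # (image pull failures, failed mounts, crash loops, ...).
    kubectl describe pod -n kube-system heapster-v1.2.0.1-1382115970-p833p
    # Container logs for the unready container; add --previous if it restarted.
    kubectl logs -n kube-system heapster-v1.2.0.1-1382115970-p833p -c heapster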

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422164ec0>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]\nkubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]
    kubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Apr 28 00:01:06.748: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Apr 28 04:32:02.234: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421aec320>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]\nkubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]
    kubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4219c3270>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]\nkubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]
    kubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33887

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a1b5c0>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]\nkubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]
    kubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #36178

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d1b010>: {
        s: "4 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-2595h             gke-bootstrap-e2e-default-pool-51243d3d-0ssg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\nkube-dns-autoscaler-395097547-2w7h8   gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:21 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\nkubernetes-dashboard-3543765157-rbbcq gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:16 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]\n",
    }
    4 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-2595h             gke-bootstrap-e2e-default-pool-51243d3d-0ssg Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:17 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    kube-dns-autoscaler-395097547-2w7h8   gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:21 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    kubernetes-dashboard-3543765157-rbbcq gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:16 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:07 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #36271

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42254cdf0>: {
        s: "3 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]\nkube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]\nkubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]\n",
    }
    3 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-p833p    gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:43 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-27 22:24:22 -0700 PDT  }]
    kube-dns-2185667875-8x994             gke-bootstrap-e2e-default-pool-51243d3d-lgk4 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT ContainersNotReady containers with unready status: [dnsmasq-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 03:14:53 -0700 PDT  }]
    kubernetes-dashboard-3543765157-g8ppf gke-bootstrap-e2e-default-pool-51243d3d-mc8d Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 02:58:24 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc420415280>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450 #43094
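
The DNS specs all fail the same way: the probe pod's lookups never succeed before the timeout, which is consistent with the kube-dns pod being unready in the dumps above. A hedged manual equivalent of what the probe does (image and pod name are the usual debugging choices, not necessarily what the test uses):

    # Resolve a cluster service from inside the cluster, the way the DNS probe does.
    kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- \
      nslookup kubernetes.default.svc.cluster.local
    # If the lookup times out, check that kube-dns actually has ready endpoints.
    kubectl get endpoints kube-dns -n kube-system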

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2571/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c7c340>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-35245bb5-gm1w gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:46:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]\nkube-dns-2185667875-bnn8f                                          gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-35245bb5-gm1w            gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:55 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]\nl7-default-backend-2234341178-n8tm5                                gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-35245bb5-gm1w gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:46:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]
    kube-dns-2185667875-bnn8f                                          gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-35245bb5-gm1w            gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:55 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]
    l7-default-backend-2234341178-n8tm5                                gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:12:08.153: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b7f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438
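
From here on, most of the failures in this run are not the specs themselves but the framework's post-test assertion that every node is Ready, so the interesting signal is the node that dropped out. A triage sketch (the node name is the one the pod dump above points at, included only as an example):

    # Which node is NotReady, and since when.
    kubectl get nodes
    # Conditions and recent events on that node usually narrow it down to
    # kubelet restarts, disk/memory pressure, or a network partition.
    kubectl describe node gke-bootstrap-e2e-default-pool-35245bb5-gm1w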

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:55:18.504: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e76278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:56:52.801: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e1d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:11:27.080: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a0ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:53:35.778: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42178cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:06:47.050: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42142ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:42:39.507: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214bd678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27957

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fdc720>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-35245bb5-gm1w gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:46:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]\nkube-dns-2185667875-bnn8f                                          gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-35245bb5-gm1w            gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:55 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]\nl7-default-backend-2234341178-n8tm5                                gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-35245bb5-gm1w gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:46:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]
    kube-dns-2185667875-bnn8f                                          gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-35245bb5-gm1w            gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:55 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]
    l7-default-backend-2234341178-n8tm5                                gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:35:49.504: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c14278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:04:32.299: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42101f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:19:00.914: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:11:18.712: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421733678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:24:36.657: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc423398278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36554

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:36:39.051: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d22c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:37:11.358: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fd9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36564

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:30:26.635: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421caf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:59:12.867: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b24c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:17:53.401: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42178d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 03:53:26.439: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421885678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:03:16.389: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fc5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:43:09.880: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:15:17.061: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:55:48.611: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c2ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:42:25.425: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42158e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:49:56.630: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a0ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:04:55.879: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210fec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:30:20.275: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208ec278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:50:39.184: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42176c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:20:28.272: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b94c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 03:59:49.139: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223b6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:31:13.728: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e78278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:37:40.455: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42146cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4203acda0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:03:33.054: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fd2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 03:50:11.066: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421609678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:49:19.070: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214d9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:23:48.597: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42169a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:27:31.561: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200f6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
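
For reference, the failing step here is the ginkgo e2e wrapper itself exiting non-zero. A minimal sketch of re-running the same suite against an existing cluster is shown below, assuming a built kubernetes tree and a KUBECONFIG that points at the cluster under test; the build step and environment variable are assumptions, only the skip regex is copied verbatim from the failure line above.

# Sketch only (assumptions noted inline); not taken from this run's logs.
export KUBECONFIG=$HOME/.kube/config   # assumption: points at the cluster under test
make WHAT=test/e2e/e2e.test            # assumption: e2e test binary not yet built
# Single quotes keep the shell from interpreting the pipes and brackets in the regex.
./hack/ginkgo-e2e.sh '--ginkgo.skip=\[Flaky\]|\[Feature:.+\]|NFSv3'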

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:46:07.730: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42226ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 03:11:01.436: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42178d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:00:35.008: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422c12278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:35:21.446: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d42c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-35245bb5-gm1w\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-35245bb5-gm1w" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:01:08.566: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420210c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:00:11.815: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42169a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:38:35.914: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a14c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:52:03.635: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b88278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 03:43:59.423: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421626278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:54:21.634: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c57678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:39:50.250: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42188ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:08:34.443: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216b1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:53:26.441: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a5f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:46:26.836: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:17:14.233: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420740c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:03:53.465: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a0f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:21:08.345: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:26:35.564: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210ae278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:33:27.489: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42169a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:08:07.770: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421826278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:29:54.458: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e20c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:56:39.821: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a4ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:23:21.809: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b1d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b88f20>: {
        s: "4 / 13 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-35245bb5-gm1w gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:46:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]\nkube-dns-2185667875-bnn8f                                          gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-35245bb5-gm1w            gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:55 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]\nl7-default-backend-2234341178-n8tm5                                gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]\n",
    }
    4 / 13 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-35245bb5-gm1w gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:46:41 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]
    kube-dns-2185667875-bnn8f                                          gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:45 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-35245bb5-gm1w            gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:55 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 21:44:53 -0700 PDT  }]
    l7-default-backend-2234341178-n8tm5                                gke-bootstrap-e2e-default-pool-35245bb5-gm1w Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:42 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 23:00:35 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:46:10.027: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a5ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 01:27:02.163: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221c1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 02:58:34.964: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #42724

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 05:11:51.282: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b40278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:57:55.018: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42195b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 04:30:10.013: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214d8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421deeb90>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-35245bb5-gm1w boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-35245bb5-gm1w boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-staging/2584/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:24:22.289: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b4b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:33:32.919: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42128a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:49:36.804: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214f5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:56:01.216: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420641678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:43:41.277: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208bf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:27:36.018: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420866c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc42038ad40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:58:57.666: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210ed678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:20:24.959: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217b6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:08:57.364: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b6b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:17:26.608: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220fd678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:05:44.053: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a7c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc421b30280>: {
        Op: "read",
        Net: "tcp",
        Source: {IP: [172, 17, 0, 5], Port: 50705, Zone: ""},
        Addr: {IP: "#\xb8\xc8n", Port: 443, Zone: ""},
        Err: {Syscall: "read", Err: 0x68},
    }
    read tcp 172.17.0.5:50705->35.184.200.110:443: read: connection reset by peer
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1780

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:25:30.243: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421236c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:59:11.684: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42162d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:30:25.617: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210a3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35793

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Expected error:
    <*errors.errorString | 0xc42038ad40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36271

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:40:12.543: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420feac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:48:13.858: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b39678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc421860fc0>: {
        s: "failed to get logs from client-containers-cb2f3a76-2fe5-11e7-995a-0242ac110005 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods client-containers-cb2f3a76-2fe5-11e7-995a-0242ac110005)",
    }
    failed to get logs from client-containers-cb2f3a76-2fe5-11e7-995a-0242ac110005 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods client-containers-cb2f3a76-2fe5-11e7-995a-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #34520

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:09:41.019: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421517678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:30:19.077: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210c4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42143a280>: {
        s: "Namespace e2e-tests-services-j043k is active",
    }
    Namespace e2e-tests-services-j043k is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:19:52.365: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42128a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:06:08.842: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a9d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:16:38.299: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210ecc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc42038ad40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:20:52.801: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421910c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:17:11.348: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c0c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:07:35.860: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206ea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216e47b0>: {
        s: "Namespace e2e-tests-services-j043k is active",
    }
    Namespace e2e-tests-services-j043k is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 02:59:16.769: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214f4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:55:56.128: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421be8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:11:02.306: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b2d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:09:42.671: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42121d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:45:22.898: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b0cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:35:16.431: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421178c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42194eff0>: {
        s: "Namespace e2e-tests-services-j043k is active",
    }
    Namespace e2e-tests-services-j043k is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc4220bf0a0>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:30:12.825: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a7d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:02:29.472: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a6ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:37:17.809: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216d8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:13:24.617: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210c5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:33:59.877: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42121d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:23:59.371: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421236c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:41:42.913: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218cc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:19:44.936: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421236c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:52:47.767: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219f0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:26:56.123: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420705678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:36:20.803: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208bf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:27:12.809: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42122a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:27:22.070: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42189cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:16:11.041: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b0ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:18:49.602: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216b4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:45:00.370: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421adcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:12:57.610: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e22278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:32:48.184: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42192ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26134 #43340

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:52:16.432: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42170e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 04:22:59.950: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218dac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Flaky\]|\[Feature:.+\]|NFSv3: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
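
For context, the e2e.go entry above only records the wrapper script exiting non-zero; a minimal sketch of the equivalent manual invocation (the shell quoting of the skip regex is an assumption here, the log prints it unquoted) is:

    ./hack/ginkgo-e2e.sh --ginkgo.skip='\[Flaky\]|\[Feature:.+\]|NFSv3'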

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:22:16.067: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42128a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:42:11.054: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421aa5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 06:31:43.152: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42121c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 05:40:31.246: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421664c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 07:32:57.939: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b17678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:35:47.702: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202f3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:49:09.081: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b6a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 03:52:27.741: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420feb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
May  3 08:23:42.441: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420645678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756
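
Every failure above ends in the same framework teardown check, "All nodes should be ready after test". As a rough triage sketch (standard kubectl commands, not taken from this run), node readiness on the affected cluster could be inspected with:

    kubectl get nodes
    kubectl describe node <node-name>   # <node-name> is a placeholder; check the Conditions section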

@k8s-github-robot
Author

This Issue hasn't been active in 51 days. It will be closed in 38 days (Jun 12, 2017).

cc @k8s-merge-robot @spxtr

You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days

@spiffxp
Member

spiffxp commented May 31, 2017

/sig testing
/assign

I'm going to close this given how inactive it's been

@k8s-ci-robot added the sig/testing label on May 31, 2017
@spiffxp
Member

spiffxp commented May 31, 2017

/close
