
ci-kubernetes-e2e-gci-gke-multizone: broken test run #38024

Closed
k8s-github-robot opened this issue Dec 3, 2016 · 693 comments
Labels: area/test-infra, kind/flake, priority/backlog, sig/cli, sig/network

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/978/

Run so broken it didn't make JUnit output!

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 3, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1145/

Multiple broken tests:

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974
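
"Terminate testing after 15m after 2h30m0s timeout" means the run exhausted its overall 2h30m budget and e2e.go then allowed the in-flight step (dumping cluster logs here) a 15-minute grace period before killing it. A minimal sketch of that two-stage timeout, with tiny durations standing in for the real budget and grace period (the helper and step are illustrative, not e2e.go's actual code):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runWithGrace cancels step when the overall budget expires, then waits up to
// grace for it to wind down before reporting a hard termination.
func runWithGrace(budget, grace time.Duration, step func(ctx context.Context) error) error {
	ctx, cancel := context.WithTimeout(context.Background(), budget)
	defer cancel()

	done := make(chan error, 1)
	go func() { done <- step(ctx) }()

	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		// Budget exhausted: give the step a grace period to finish on its own.
		select {
		case err := <-done:
			return err
		case <-time.After(grace):
			return fmt.Errorf("terminate testing after %v after %v timeout", grace, budget)
		}
	}
}

func main() {
	// Illustrative step that ignores cancellation and never finishes.
	err := runWithGrace(50*time.Millisecond, 20*time.Millisecond, func(ctx context.Context) error {
		<-ctx.Done()
		time.Sleep(time.Hour)
		return nil
	})
	fmt.Println(err)
}
```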

Failed: TearDown {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058

Failed: DiffResources {e2e.go}

Error: 28 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-0c7dce54  n1-standard-2               2016-12-05T21:38:06.205-08:00
+gke-bootstrap-e2e-default-pool-657dc85f  n1-standard-2               2016-12-05T21:38:06.113-08:00
+gke-bootstrap-e2e-default-pool-9c23825b  n1-standard-2               2016-12-05T21:38:06.173-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-0c7dce54-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-0c7dce54-13lr  us-central1-f  n1-standard-2               10.240.0.4   104.198.227.107  RUNNING
+gke-bootstrap-e2e-default-pool-0c7dce54-jlbx  us-central1-f  n1-standard-2               10.240.0.2   104.154.149.175  RUNNING
+gke-bootstrap-e2e-default-pool-0c7dce54-v99e  us-central1-f  n1-standard-2               10.240.0.3   104.198.234.132  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-0c7dce54-13lr  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0c7dce54-jlbx  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-0c7dce54-v99e  us-central1-f  100      pd-standard  READY
+default-route-73acc0a0f92ce9a4                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+default-route-7ef949067be29a75                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+gke-bootstrap-e2e-6abfff86-3317baa3-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-0c7dce54-v99e  1000
+gke-bootstrap-e2e-6abfff86-33832b06-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.3.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-9c23825b-m3bl  1000
+gke-bootstrap-e2e-6abfff86-33d540d3-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.4.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-9c23825b-l4o3  1000
+gke-bootstrap-e2e-6abfff86-34087100-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.5.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-657dc85f-cj5j  1000
+gke-bootstrap-e2e-6abfff86-34095ae7-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.6.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-9c23825b-g9vn  1000
+gke-bootstrap-e2e-6abfff86-34340836-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.7.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-0c7dce54-jlbx  1000
+gke-bootstrap-e2e-6abfff86-34b251bb-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.8.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-0c7dce54-13lr  1000
+gke-bootstrap-e2e-6abfff86-34e751ea-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-657dc85f-310m  1000
+gke-bootstrap-e2e-6abfff86-34fb4fcb-bb77-11e6-81da-42010af00012  bootstrap-e2e  10.72.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-657dc85f-4ehr  1000
+gke-bootstrap-e2e-6abfff86-all  bootstrap-e2e  10.72.0.0/14       udp,icmp,esp,ah,sctp,tcp
+gke-bootstrap-e2e-6abfff86-ssh  bootstrap-e2e  130.211.160.57/32  tcp:22                                  gke-bootstrap-e2e-6abfff86-node
+gke-bootstrap-e2e-6abfff86-vms  bootstrap-e2e  10.240.0.0/16      tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-6abfff86-node

Issues about this test specifically: #33373 #33416 #34060
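
The "+" rows above are the DiffResources report: GCE resources present after the run that were not present before it started, meaning teardown never cleaned them up. A minimal sketch of that set difference over two plain-text gcloud listings (the listings here are fabricated examples, not real project state):

```go
package main

import (
	"fmt"
	"strings"
)

// leaked returns the lines present in after but not in before, prefixed with
// "+" the way the DiffResources output marks leaked resources.
func leaked(before, after string) []string {
	seen := map[string]bool{}
	for _, line := range strings.Split(before, "\n") {
		seen[strings.TrimSpace(line)] = true
	}
	var out []string
	for _, line := range strings.Split(after, "\n") {
		line = strings.TrimSpace(line)
		if line != "" && !seen[line] {
			out = append(out, "+"+line)
		}
	}
	return out
}

func main() {
	before := "default-route-example  bootstrap-e2e  0.0.0.0/0"
	after := "default-route-example  bootstrap-e2e  0.0.0.0/0\n" +
		"gke-bootstrap-e2e-default-pool-0c7dce54  n1-standard-2"
	for _, line := range leaked(before, after) {
		fmt.Println(line)
	}
}
```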

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during teardown

Issues about this test specifically: #35658

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1175/

Multiple broken tests:

Failed: TearDown {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 28 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-152a8f74  n1-standard-2               2016-12-06T12:32:19.873-08:00
+gke-bootstrap-e2e-default-pool-6f5c76ec  n1-standard-2               2016-12-06T12:32:19.696-08:00
+gke-bootstrap-e2e-default-pool-bbb0573e  n1-standard-2               2016-12-06T12:32:19.772-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-6f5c76ec-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-6f5c76ec-8gwp  us-central1-f  n1-standard-2               10.240.0.2   104.154.145.85   RUNNING
+gke-bootstrap-e2e-default-pool-6f5c76ec-9xri  us-central1-f  n1-standard-2               10.240.0.4   104.198.128.246  RUNNING
+gke-bootstrap-e2e-default-pool-6f5c76ec-tkhb  us-central1-f  n1-standard-2               10.240.0.3   104.198.140.186  RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-6f5c76ec-8gwp  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-6f5c76ec-9xri  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-6f5c76ec-tkhb  us-central1-f  100      pd-standard  READY
+default-route-18eccb7942b995e4                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+default-route-9b702fabd55a432e                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+gke-bootstrap-e2e-c0013bea-5784b0f6-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.6.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-bbb0573e-uefe  1000
+gke-bootstrap-e2e-c0013bea-57c6dca3-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.2.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-bbb0573e-6iau  1000
+gke-bootstrap-e2e-c0013bea-5833a7f5-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.7.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-152a8f74-hr00  1000
+gke-bootstrap-e2e-c0013bea-584895da-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.3.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-6f5c76ec-tkhb  1000
+gke-bootstrap-e2e-c0013bea-584af563-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.4.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-6f5c76ec-9xri  1000
+gke-bootstrap-e2e-c0013bea-58658016-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.8.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-bbb0573e-xdg5  1000
+gke-bootstrap-e2e-c0013bea-588a72b6-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.5.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-6f5c76ec-8gwp  1000
+gke-bootstrap-e2e-c0013bea-59006df9-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.0.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-152a8f74-zj6z  1000
+gke-bootstrap-e2e-c0013bea-59a0bb5d-bbf3-11e6-a959-42010af00044  bootstrap-e2e  10.72.1.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-152a8f74-0199  1000
+gke-bootstrap-e2e-c0013bea-all  bootstrap-e2e  10.72.0.0/14        tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-c0013bea-ssh  bootstrap-e2e  104.198.147.135/32  tcp:22                                  gke-bootstrap-e2e-c0013bea-node
+gke-bootstrap-e2e-c0013bea-vms  bootstrap-e2e  10.240.0.0/16       icmp,tcp:1-65535,udp:1-65535            gke-bootstrap-e2e-c0013bea-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1238/

Multiple broken tests:

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 28 leaked resources
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-6cc2475d  n1-standard-2               2016-12-07T16:24:59.702-08:00
+gke-bootstrap-e2e-default-pool-978452a9  n1-standard-2               2016-12-07T16:24:59.791-08:00
+gke-bootstrap-e2e-default-pool-c2a754cb  n1-standard-2               2016-12-07T16:24:59.746-08:00
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-c2a754cb-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
+gke-bootstrap-e2e-default-pool-c2a754cb-1ttv  us-central1-f  n1-standard-2               10.240.0.4   35.184.59.127  RUNNING
+gke-bootstrap-e2e-default-pool-c2a754cb-f2n3  us-central1-f  n1-standard-2               10.240.0.2   35.184.65.115  RUNNING
+gke-bootstrap-e2e-default-pool-c2a754cb-zkqp  us-central1-f  n1-standard-2               10.240.0.3   35.184.72.3    RUNNING
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-c2a754cb-1ttv  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-c2a754cb-f2n3  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-c2a754cb-zkqp  us-central1-f  100      pd-standard  READY
+default-route-e7e8297d36893191                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+default-route-fa3a183dce73a46c                                   bootstrap-e2e  10.240.0.0/16                                                                        1000
+gke-bootstrap-e2e-2da47681-1e48dda4-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.7.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-978452a9-gz1g  1000
+gke-bootstrap-e2e-2da47681-1ebafed8-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.8.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-6cc2475d-tn5o  1000
+gke-bootstrap-e2e-2da47681-1f5c58a3-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.1.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-c2a754cb-1ttv  1000
+gke-bootstrap-e2e-2da47681-1f76f9cd-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.2.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-6cc2475d-s4tt  1000
+gke-bootstrap-e2e-2da47681-1f7e7b31-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.3.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-978452a9-5eli  1000
+gke-bootstrap-e2e-2da47681-1f8eb840-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-c2a754cb-f2n3  1000
+gke-bootstrap-e2e-2da47681-20089fbe-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.4.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-6cc2475d-i4fl  1000
+gke-bootstrap-e2e-2da47681-20531563-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.5.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-978452a9-lw47  1000
+gke-bootstrap-e2e-2da47681-20781f20-bcdd-11e6-8cbb-42010af0004c  bootstrap-e2e  10.72.6.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-c2a754cb-zkqp  1000
+gke-bootstrap-e2e-2da47681-all  bootstrap-e2e  10.72.0.0/14       tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-2da47681-ssh  bootstrap-e2e  104.198.149.77/32  tcp:22                                  gke-bootstrap-e2e-2da47681-node
+gke-bootstrap-e2e-2da47681-vms  bootstrap-e2e  10.240.0.0/16      icmp,tcp:1-65535,udp:1-65535            gke-bootstrap-e2e-2da47681-node

Issues about this test specifically: #33373 #33416 #34060

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 2h30m0s timeout during teardown

Issues about this test specifically: #35658

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1433/

Multiple broken tests:

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
wait for pod "pod-secrets-466461f2-bfaa-11e6-8dd9-0242ac110005" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203d3380>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122
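
"timed out waiting for the condition" is the fixed error string returned by the polling helpers the e2e framework relies on when a condition never becomes true within its deadline; here the condition is that the pod-secrets pod has disappeared. A minimal sketch of that polling pattern (a simplification, not the framework's actual wait package):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the sentinel error seen throughout these failures.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil re-evaluates cond every interval until it returns true or the
// timeout elapses, in which case it reports errWaitTimeout.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// Illustrative condition that never succeeds, e.g. a pod that never disappears.
	err := pollUntil(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```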

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc420ebc030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1075

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:326
Dec 11 06:07:09.209: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1974

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc42037c380>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 11 06:17:43.399: Couldn't delete ns: "e2e-tests-kubectl-fn2bf": namespace e2e-tests-kubectl-fn2bf was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-kubectl-fn2bf was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:401
Expected error:
    <*errors.errorString | 0xc4203fd580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:237

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4204134b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1555/

Multiple broken tests:

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 13 13:43:24.363: Couldn't delete ns: "e2e-tests-disruption-62xzw": namespace e2e-tests-disruption-62xzw was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-62xzw was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
wait for pod "pod-secrets-09590b42-c17c-11e6-a07c-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420413950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Issues about this test specifically: #29221

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc4203bf570>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
wait for pod "pod-secrets-4d82bf26-c17c-11e6-8a2f-0242ac110007" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420415990>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122

Issues about this test specifically: #37529

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1727/

Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420a06220>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1002

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 17 08:36:50.464: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.2.27:8080/hostName
retrieved map[]
expected map[netserver-6:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1106
Dec 17 08:39:55.114: expected un-ready endpoint for Service webserver within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1104

Issues about this test specifically: #26172

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 17 08:44:53.853: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.1.47:8080/dial?request=hostName&protocol=http&host=10.72.2.31&port=8080&tries=1'
retrieved map[]
expected map[netserver-6:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375
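
These Granular Checks failures come from probing the test webserver's /hostName or /dial endpoint with curl and collecting which netserver pods answer; the case passes only when the retrieved set covers every expected pod, so an empty "retrieved map[]" means no reply ever arrived. A minimal sketch of that comparison (inputs are hypothetical):

```go
package main

import "fmt"

// coversExpected reports whether every expected endpoint name appears in the
// collected responses, mirroring the retrieved/expected maps in the logs above.
func coversExpected(responses, expected []string) (bool, map[string]struct{}) {
	retrieved := map[string]struct{}{}
	for _, r := range responses {
		if r != "" {
			retrieved[r] = struct{}{}
		}
	}
	for _, want := range expected {
		if _, ok := retrieved[want]; !ok {
			return false, retrieved
		}
	}
	return true, retrieved
}

func main() {
	// Hypothetical run in which no pod answered the curl probes.
	ok, got := coversExpected(nil, []string{"netserver-6"})
	fmt.Printf("ok=%v retrieved=%v expected=map[netserver-6:{}]\n", ok, got)
}
```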

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420a98d30>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:31 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.2.32 StartTime:2016-12-17 08:32:00 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-17 08:32:00 -0800 PST,FinishedAt:2016-12-17 08:32:30 -0800 PST,ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe}] QOSClass:}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:31 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.2.32 StartTime:2016-12-17 08:32:00 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-17 08:32:00 -0800 PST,FinishedAt:2016-12-17 08:32:30 -0800 PST,ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe}] QOSClass:}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc420415c50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #37056

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Dec 17 09:02:09.752: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.0.65:8080/dial?request=hostName&protocol=udp&host=10.72.2.49&port=8081&tries=1'
retrieved map[]
expected map[netserver-8:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc4203aaa20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #36970

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
    <*errors.errorString | 0xc420640140>: {
        s: "service verification failed for: 10.75.252.253\nexpected [service2-9zdj1 service2-r59nr service2-wmf46]\nreceived [wget: download timed out]",
    }
    service verification failed for: 10.75.252.253
    expected [service2-9zdj1 service2-r59nr service2-wmf46]
    received [wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:335

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1784/

Multiple broken tests:

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:190
Expected error:
    <*errors.errorString | 0xc420754e50>: {
        s: "err waiting for DNS replicas to satisfy 9, got 5: timed out waiting for the condition",
    }
    err waiting for DNS replicas to satisfy 9, got 5: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:189

Issues about this test specifically: #36569 #38446

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
    <*errors.errorString | 0xc420acc130>: {
        s: "service verification failed for: 10.75.253.22\nexpected [service1-dm43k service1-frn9h service1-pn8mz]\nreceived [wget: download timed out]",
    }
    service verification failed for: 10.75.253.22
    expected [service1-dm43k service1-frn9h service1-pn8mz]
    received [wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:332

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc42043b540>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #37056

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:348
Expected error:
    <*errors.errorString | 0xc4203fb780>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:308
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.25.6 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-xtfxz execpod-sourceip-gke-bootstrap-e2e-default-pool-6e4213d3-i535q8 -- /bin/sh -c wget -T 30 -qO- 10.75.240.148:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc4210f1f20 exit status 1 <nil> <nil> true [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce698 0xc4203ce720] [0xbdb8f0 0xbdb8f0] 0xc420f81260 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.25.6 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-xtfxz execpod-sourceip-gke-bootstrap-e2e-default-pool-6e4213d3-i535q8 -- /bin/sh -c wget -T 30 -qO- 10.75.240.148:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc4210f1f20 exit status 1 <nil> <nil> true [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce698 0xc4203ce720] [0xbdb8f0 0xbdb8f0] 0xc420f81260 <nil>}:
    Command stdout:
    
    stderr:
    wget: download timed out
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2912

Issues about this test specifically: #31085 #34207 #37097
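
The source-IP failure above wraps the exit status of a kubectl exec that runs wget against the service's cluster IP from inside a helper pod and greps the echoed client_address; the wget timeout surfaces as "exit status 1". A minimal sketch of issuing such a command from Go with os/exec (namespace, pod name, and address are placeholders, and this assumes kubectl is on PATH with a working kubeconfig):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholders; the real test fills these in from the framework's config.
	const (
		namespace = "e2e-tests-services-xxxxx"
		pod       = "execpod-sourceip-example"
		serviceIP = "10.75.240.148:8080"
	)
	cmd := exec.Command("kubectl",
		"exec", "--namespace="+namespace, pod, "--",
		"/bin/sh", "-c", "wget -T 30 -qO- "+serviceIP+" | grep client_address")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A wget timeout inside the pod surfaces here as "exit status 1",
		// which is what the report above records.
		fmt.Printf("command failed: %v\noutput: %s\n", err, out)
		return
	}
	fmt.Printf("client address seen by the backend: %s\n", out)
}
```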

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 18 10:03:28.876: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.0.47:8080/hostName
retrieved map[]
expected map[netserver-5:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc420eeaac0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:478

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 18 10:41:40.651: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.6.95:8080/dial?request=hostName&protocol=http&host=10.72.0.59&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1876/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 20 08:45:33.310: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.5.19:8080/hostName
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Dec 20 08:43:51.808: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #32023

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:368
Dec 20 08:49:20.651: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1587

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc420905f20>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1002

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc4203c2270>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:336
Dec 20 08:46:17.425: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1962

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Dec 20 08:45:38.581: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.72.5.27 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1966/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc4202b6680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-scz2q--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-scz2q--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-scz2q--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
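
The 403s in this run are RBAC privilege-escalation denials: the framework tries to create a cluster-admin ClusterRoleBinding for the test namespace, but such a binding can only be created by a caller whose own rules already cover everything being granted, and the kubekins account only holds selfsubjectaccessreview creation plus read-only discovery (the ownerrules in the message). A minimal sketch of that containment check (types and wildcard handling are simplified relative to the real RBAC "covers" logic):

```go
package main

import "fmt"

// rule is a pared-down RBAC rule: verbs over resources.
type rule struct {
	verbs     []string
	resources []string
}

// matches reports whether want is covered by the have list, treating "*" as a wildcard.
func matches(have []string, want string) bool {
	for _, h := range have {
		if h == "*" || h == want {
			return true
		}
	}
	return false
}

// covered reports whether every requested rule is covered by some owned rule.
func covered(owned, requested []rule) bool {
	ruleCovered := func(o, r rule) bool {
		for _, v := range r.verbs {
			if !matches(o.verbs, v) {
				return false
			}
		}
		for _, res := range r.resources {
			if !matches(o.resources, res) {
				return false
			}
		}
		return true
	}
	for _, r := range requested {
		ok := false
		for _, o := range owned {
			if ruleCovered(o, r) {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	owner := []rule{{verbs: []string{"create"}, resources: []string{"selfsubjectaccessreviews"}}}
	clusterAdmin := []rule{{verbs: []string{"*"}, resources: []string{"*"}}}
	fmt.Println(covered(owner, clusterAdmin)) // false -> the binding request is forbidden (403)
}
```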

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4206a0d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-d3f1q--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-d3f1q--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-d3f1q--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4209a2900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-qq0kx--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-qq0kx--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-qq0kx--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1967/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4209fce80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-gw535--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-gw535--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-gw535--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420affd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-hfdt1--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-hfdt1--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-hfdt1--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420efee00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-kclth--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-kclth--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-kclth--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1968/
Multiple broken tests:

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Dec 22 07:27:42.080: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #32023

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4203eba00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-k8gt3--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-k8gt3--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-k8gt3--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:336
Dec 22 07:28:21.383: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1962

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
    <*errors.errorString | 0xc420e44ed0>: {
        s: "service verification failed for: 10.75.254.238\nexpected [service1-ncsjp service1-r66w3 service1-sgjsl]\nreceived [service1-r66w3 service1-sgjsl]",
    }
    service verification failed for: 10.75.254.238
    expected [service1-ncsjp service1-r66w3 service1-sgjsl]
    received [service1-r66w3 service1-sgjsl]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:332

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc421361080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-6z2p0--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-6z2p0--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-6z2p0--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Dec 22 07:35:43.791: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.5.52:8080/dial?request=hostName&protocol=udp&host=10.72.3.68&port=8081&tries=1'
retrieved map[]
expected map[netserver-8:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830
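
This failure is the pod-to-pod connectivity check: a host-exec pod asks one netserver (10.72.5.52) to relay a "hostName" request over UDP to another (10.72.3.68:8081) and report which endpoints answered, and "retrieved map[]" means no UDP reply ever came back. Setting aside the exact netserver wire format, the probe reduces to a UDP request/response with a deadline; the following is an illustrative sketch only (addresses are placeholders, and "hostName" is simply the command string these tests conventionally send).

package main

import (
    "fmt"
    "net"
    "time"
)

// udpProbe sends one request datagram and waits briefly for a reply,
// mirroring the "dial ... protocol=udp&tries=1" check in the test.
func udpProbe(addr, request string, timeout time.Duration) (string, error) {
    conn, err := net.Dial("udp", addr)
    if err != nil {
        return "", err
    }
    defer conn.Close()

    if _, err := conn.Write([]byte(request)); err != nil {
        return "", err
    }
    conn.SetReadDeadline(time.Now().Add(timeout))

    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        return "", err // a read timeout here is the "retrieved map[]" case above
    }
    return string(buf[:n]), nil
}

func main() {
    reply, err := udpProbe("10.72.3.68:8081", "hostName", 3*time.Second)
    if err != nil {
        fmt.Println("no endpoint answered:", err)
        return
    }
    fmt.Println("answered by:", reply)
}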

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
    <*errors.errorString | 0xc4203c34f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68

Issues about this test specifically: #37056

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420387a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-8ml0l--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-8ml0l--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-8ml0l--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1969/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420d3ae80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-sv0tn--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-sv0tn--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-sv0tn--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4202b5380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-1xdqf--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-1xdqf--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-1xdqf--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4213fd380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-hmfrm--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-hmfrm--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-hmfrm--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1970/
Multiple broken tests:

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc42039e380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-lr8mg--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-lr8mg--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-lr8mg--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc42047d400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-w9cks--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-w9cks--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-w9cks--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420e4a480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-xsbbp--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-xsbbp--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-xsbbp--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1971/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc420e18500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-4whzq--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-ingress-4whzq--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-4whzq--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97

Issues about this test specifically: #38556

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420372300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-cxp2z--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-node-problem-detector-cxp2z--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-cxp2z--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4202bff80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-q9zhk--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "e2e-tests-prestop-q9zhk--cluster-admin",
                Group: "rbac.authorization.k8s.io",
                Kind: "clusterrolebindings",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-q9zhk--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com  [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1972/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc42039e100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4201cf500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420dcfe00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1973/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc421261000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc421014680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc4210b7180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1974/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc421262200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc421079680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4211b5400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1975/
Multiple broken tests:

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
    <*errors.StatusError | 0xc420fa9b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
    <*errors.StatusError | 0xc420237000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
    <*errors.StatusError | 0xc4201ee380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102

Issues about this test specifically: #38556

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7627/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4207b3660>: {
        s: "pod \"pvc-tester-jfds0\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-jfds0" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420ae7ea0>: {
        s: "pod \"pvc-tester-351vd\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-351vd" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4213218f0>: {
        s: "pod \"pvc-tester-zvr0r\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-zvr0r" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
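
These GCE PD runs all fail at the same early step: the pvc-tester client pod never reaches Running inside the pod-start timeout, so the PV/PVC deletion-ordering scenarios never get exercised. The wait itself is just a poll of the pod's phase against the apiserver until it either runs or the familiar "timed out waiting for the condition" error is returned; a minimal sketch of that pattern with a current client-go is below (namespace and pod name are placeholders standing in for the pvc-tester-* pods).

package main

import (
    "context"
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls until the pod reports phase Running, failing fast
// if it lands in a terminal phase; on timeout, wait returns the
// "timed out waiting for the condition" error seen in the logs above.
func waitForPodRunning(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        switch pod.Status.Phase {
        case v1.PodRunning:
            return true, nil
        case v1.PodFailed, v1.PodSucceeded:
            return false, fmt.Errorf("pod %q ended with phase %s", name, pod.Status.Phase)
        }
        return false, nil
    })
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Placeholder namespace/pod name for illustration.
    if err := waitForPodRunning(cs, "default", "pvc-tester-example", 5*time.Minute); err != nil {
        fmt.Println(err)
    }
}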

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7628/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4213e1c10>: {
        s: "pod \"pvc-tester-b1cwp\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-b1cwp" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc42120e150>: {
        s: "pod \"pvc-tester-9jb8b\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-9jb8b" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4210b8700>: {
        s: "pod \"pvc-tester-nlmn5\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-nlmn5" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7629/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420fb0010>: {
        s: "pod \"pvc-tester-8c3ng\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-8c3ng" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4207cf4d0>: {
        s: "pod \"pvc-tester-fwprm\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-fwprm" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4208291f0>: {
        s: "pod \"pvc-tester-p8rhm\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-p8rhm" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7630/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4218255d0>: {
        s: "pod \"pvc-tester-v39gp\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-v39gp" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc421904dc0>: {
        s: "pod \"pvc-tester-llwwj\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-llwwj" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420b8d2b0>: {
        s: "pod \"pvc-tester-v5367\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-v5367" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7642/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4201baf60>: {
        s: "pod \"pvc-tester-k02tm\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-k02tm" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    ... (the same two lines repeat for every poll over the 120-second window) ...
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124

Issues about this test specifically: #43335
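
This one is the downward API volume refresh check: the pod projects its own labels into /etc/labels, the test then adds key3="value3" to the pod's labels and waits for the kubelet to rewrite the file, but for the whole 120-second window the file kept showing only the original two keys. The volume wiring that test relies on looks roughly like the sketch below, which just builds such a pod object and prints it; pod name, image, and command are placeholders, and it is written against a current k8s.io/api.

package main

import (
    "encoding/json"
    "fmt"
    "os"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A pod that projects its own labels into /etc/labels via a downward API
    // volume; the e2e test then mutates the labels and expects the file to
    // pick up the new key.
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labels-volume-example", // hypothetical name
            Labels: map[string]string{"key1": "value1", "key2": "value2"},
        },
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:    "client",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/labels; sleep 5; done"},
                VolumeMounts: []v1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc",
                }},
            }},
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    DownwardAPI: &v1.DownwardAPIVolumeSource{
                        Items: []v1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }

    out, err := json.MarshalIndent(pod, "", "  ")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println(string(out))
}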

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc42176b570>: {
        s: "pod \"pvc-tester-ffctg\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-ffctg" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7644/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4217c3cd0>: {
        s: "pod \"pvc-tester-7dhlf\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-7dhlf" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc421178960>: {
        s: "pod \"pvc-tester-mrkf7\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-mrkf7" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc42120a730>: {
        s: "pod \"pvc-tester-6rrsw\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-6rrsw" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7648/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:641
Apr 19 06:08:06.079: Unexpected kubectl exec output: %!(EXTRA string=I0419 13:08:05.973268      64 merged_client_builder.go:122] Using in-cluster configuration
I0419 13:08:05.974353      64 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/servergroups.json
I0419 13:08:05.974614      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json
I0419 13:08:05.974789      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/autoscaling/v1/serverresources.json
I0419 13:08:05.974959      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/batch/v1/serverresources.json
I0419 13:08:05.975068      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/storage.k8s.io/v1/serverresources.json
I0419 13:08:05.975236      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/storage.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.975375      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/authorization.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.975518      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/certificates.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.975841      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/extensions/v1beta1/serverresources.json
I0419 13:08:05.975959      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/policy/v1beta1/serverresources.json
I0419 13:08:05.976135      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.976313      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/apps/v1beta1/serverresources.json
I0419 13:08:05.976828      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/v1/serverresources.json
I0419 13:08:05.977141      64 merged_client_builder.go:122] Using in-cluster configuration
... (the same set of cached-discovery lines repeats for each subsequent lookup) ...
I0419 13:08:05.982862      64 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/servergroups.json
I0419 13:08:05.983026      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json
I0419 13:08:05.983133      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/autoscaling/v1/serverresources.json
I0419 13:08:05.983264      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/batch/v1/serverresources.json
I0419 13:08:05.983359      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/storage.k8s.io/v1/serverresources.json
I0419 13:08:05.983469      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/storage.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.983642      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/authorization.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.983827      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/certificates.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.984100      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/extensions/v1beta1/serverresources.json
I0419 13:08:05.984254      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/policy/v1beta1/serverresources.json
I0419 13:08:05.984392      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json
I0419 13:08:05.984554      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/apps/v1beta1/serverresources.json
I0419 13:08:05.985072      64 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.75.240.1_443/v1/serverresources.json
I0419 13:08:05.986036      64 merged_client_builder.go:122] Using in-cluster configuration
I0419 13:08:05.986594      64 merged_client_builder.go:122] Using in-cluster configuration
I0419 13:08:06.068694      64 round_trippers.go:417] GET https://10.75.240.1:443/api/v1/namespaces/invalid/pods 200 OK in 81 milliseconds
No resources found.
)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:639
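
The verbose trace above is kubectl running inside a pod: it loads the in-cluster service-account configuration ("Using in-cluster configuration"), answers discovery from the on-disk cache under /root/.kube/cache/discovery/, and then issues the real GET for pods in the "invalid" namespace, which returns 200 with an empty list, hence "No resources found." A rough sketch of that in-cluster client path follows; it uses current client-go import paths and the context-taking List signature, which are not necessarily the vintage under test here, so treat it as an illustration rather than the test's actual code.

```go
// Rough sketch of the in-cluster client path exercised above, using current
// client-go (the context-taking List signature is from recent releases, not
// necessarily the client vintage in this run). It must run inside a pod with
// a service account mounted, as kubectl does in this e2e test.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // "Using in-cluster configuration"
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("invalid").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if len(pods.Items) == 0 {
		fmt.Println("No resources found.") // what kubectl printed above
	}
}
```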

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
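
The --ginkgo.skip value in that command is an ordinary regular expression matched against the full spec description, so any spec whose name carries one of the bracketed tags ([Slow], [Serial], [Disruptive], [Flaky], [Feature:...]) is excluded and only the untagged, parallel-safe specs run in this job. A quick stdlib-only check of what the pattern excludes (the sample spec names are illustrative):

```go
// Quick check of how the --ginkgo.skip regex above selects specs: Ginkgo
// matches the pattern against the full spec description, so any name that
// carries one of the bracketed tags is skipped. Standard library only; the
// sample spec names are illustrative.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	skip := regexp.MustCompile(`\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]`)
	specs := []string{
		"[k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC ...",
		"[k8s.io] Restart [Disruptive] should restart all nodes ...",
		"[k8s.io] Cluster size autoscaling [Slow] should scale up ...",
	}
	for _, s := range specs {
		fmt.Printf("skip=%v  %s\n", skip.MatchString(s), s)
	}
}
```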

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420a34bb0>: {
        s: "pod \"pvc-tester-v32bw\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-v32bw" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420fa4610>: {
        s: "pod \"pvc-tester-5t7zp\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-5t7zp" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51
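
Both GCE PD failures above share the same symptom: the helper at persistent_volumes-gce.go:51 creates the pvc-tester pod and then polls until it reports Running, and the poll expires first, producing the generic "timed out waiting for the condition". A minimal sketch of that polling pattern, using k8s.io/apimachinery's wait package; podIsRunning is a hypothetical stand-in for the real client-go lookup, not the framework's helper:

```go
// Minimal sketch of the polling pattern behind "timed out waiting for the
// condition": poll until the pod is Running or the timeout elapses, in which
// case wait.PollImmediate returns wait.ErrWaitTimeout, whose message is
// exactly the string seen in the failures above. podIsRunning is hypothetical.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// podIsRunning is a hypothetical placeholder; the real check asks the API
// server for the pod and compares Status.Phase against v1.PodRunning.
func podIsRunning(name string) (bool, error) {
	return false, nil // pretend the pod never reaches Running
}

func main() {
	podName := "pvc-tester-v32bw"
	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
		return podIsRunning(podName)
	})
	if err != nil {
		// Prints: pod "pvc-tester-v32bw" is not Running: timed out waiting for the condition
		fmt.Printf("pod %q is not Running: %v\n", podName, err)
	}
}
```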

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7651/
Multiple broken tests:

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc421256760>: {
        s: "pod \"pvc-tester-n9vf9\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-n9vf9" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4218ba2c0>: {
        s: "pod \"pvc-tester-djwtz\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-djwtz" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7657/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4213728c0>: {
        s: "pod \"pvc-tester-q2zj0\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-q2zj0" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420afd460>: {
        s: "pod \"pvc-tester-vzl97\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-vzl97" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420c14800>: {
        s: "pod \"pvc-tester-gmb60\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-gmb60" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7661/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:40:05.953: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360
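
Most of the failures in this run are not the test bodies themselves but the framework's post-test sanity check at framework.go:360: after every spec it lists the nodes and requires each one to report the NodeReady condition as True, and gke-bootstrap-e2e-default-pool-63881bea-jcwn stayed NotReady, so every spec that finished while that node was down is flagged. Roughly what that readiness assertion boils down to is sketched below; nodeIsReady is a hypothetical helper name and the import path is the current k8s.io/api location, not necessarily what the framework of this era used.

```go
// Roughly what "All nodes should be ready after test" asserts: every node's
// NodeReady condition must be True. nodeIsReady is a hypothetical helper name;
// the import path is the current k8s.io/api location.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func nodeIsReady(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A node that never reports Ready, like gke-...-jcwn in this run.
	node := &v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{
		{Type: v1.NodeReady, Status: v1.ConditionFalse},
	}}}
	fmt.Println("ready:", nodeIsReady(node)) // ready: false
}
```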

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:42:38.324: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:49:40.258: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #37526

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:20.858: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:05.124: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:41:49.057: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] HostPath should support r/w [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:49:59.318: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:52.788: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #30632

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:06.258: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:34:15.089: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28084

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:45.690: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] ReplicationController should release no longer matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:26:47.074: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:47:57.905: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #37914

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:40:56.726: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31936

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:33.192: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #29994

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:50:39.313: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:51:10.611: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:52.992: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Secrets should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:26:43.517: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:48:00.874: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #35473

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:47:28.509: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203fa1b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #33631 #33995 #34970
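
This Granular Checks failure, and the intra-pod udp, node-pod udp, and kube-proxy ones further down, all have the same shape: the test stands up webserver pods across the nodes and polls them from a test pod until every endpoint answers, and on the cluster with the NotReady node the poll never converges, ending in the usual "timed out waiting for the condition" from networking_utils.go. Stripped of the framework, the reachability loop is just an HTTP poll with a deadline; the sketch below is a stdlib illustration with a placeholder pod-IP:port, not the framework's actual dial logic.

```go
// Stripped-down version of the connectivity poll behind the Granular Checks
// failures: keep hitting an endpoint until it answers or the deadline passes.
// The URL is a placeholder for the pod-IP:port the real test dials.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitReachable(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := waitReachable("http://10.180.0.12:8080/hostname", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```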

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:39:50.065: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:33:13.790: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:27:04.839: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27195

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203e5880>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32830

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:45:05.818: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] PrivilegedPod should enable privileged commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:26:40.286: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:49:51.119: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32936

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:22.963: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:33.308: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #37435

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:482
Apr 19 11:23:45.059: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189
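
The two port-forward failures in this run are the harness failing to read kubectl's banner line: portforward.go spawns `kubectl port-forward`, expects the "Forwarding from 127.0.0.1:<port> ..." line on stdout, and instead hits EOF because the child exits early. The spawning-and-reading side looks roughly like the stdlib sketch below; the pod name and ports are illustrative, not the test's actual values.

```go
// Rough shape of what portforward.go does before reporting "Failed to read
// from kubectl port-forward stdout: EOF": start `kubectl port-forward`, then
// read its first stdout line, which normally announces the bound local port.
// If the kubectl child dies early, the read returns EOF instead.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "port-forward", "pfpod", ":80")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		log.Fatalf("Failed to read from kubectl port-forward stdout: %v", err) // e.g. EOF
	}
	fmt.Print(line) // e.g. "Forwarding from 127.0.0.1:43117 -> 80"
	_ = cmd.Process.Kill()
}
```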

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:51:11.130: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #36554

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:488
Apr 19 11:23:23.234: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189

Failed: [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:37:09.479: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:38:04.922: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32646

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:198
Expected
    <*errors.errorString | 0xc4203c1290>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:176

Issues about this test specifically: #31873

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-0s5p2" to be ready
Expected error:
    <*errors.errorString | 0xc4203cf9b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:22.481: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:44:09.134: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:43.678: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:33:34.727: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:56.861: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:48:54.197: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:33:42.338: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:28.136: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:26:39.881: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:37:38.808: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:38:05.314: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:48:20.993: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:10.213: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:44:35.070: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:34:48.433: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:40:35.563: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #36109

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:48:42.536: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32639

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:41:33.008: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:37.153: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #37274

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:40.258: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] ReplicaSet should release no longer matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:45:40.263: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:51:08.647: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:45:27.204: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #36948

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:50:01.825: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:27:05.145: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:39:15.362: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:33:31.867: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:48.539: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:26:51.824: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:44:01.018: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:39:53.298: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:39:51.804: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:32:56.977: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:48:03.968: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:04.149: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:18.282: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:48.650: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #29831

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:42:44.721: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:34:51.713: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:15.721: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #35297

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:37:04.991: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:31:23.523: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31408

Failed: [k8s.io] Projected should provide container's memory limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:50:30.921: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:27:05.129: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:53:29.279: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:33:28.928: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:29:56.432: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:40:22.616: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #35422

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:39:48.237: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:51.245: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:09.583: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:44:29.591: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #38511

Failed: [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:30.350: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:04.211: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:58.089: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:22.491: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #36649

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203e5880>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:527

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:85
Expected error:
    <*errors.errorString | 0xc4204132c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:46.441: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:37:12.915: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32584

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:30:03.274: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32023

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:31:07.862: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:53:07.612: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:33:40.551: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241 #43425

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:44:17.507: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] Projected should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:48:53.373: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:34.060: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #38556

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:33:47.335: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:46:23.779: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:06.902: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:42:28.420: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:49:30.428: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:28:03.566: Couldn't delete ns: "e2e-tests-events-1bdq5": namespace e2e-tests-events-1bdq5 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-events-1bdq5 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:277

Issues about this test specifically: #28346

Failed: [k8s.io] Projected should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:24.464: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:40:48.141: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:39:56.617: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42043ce80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32375

Failed: [k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:26:35.941: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:41:54.079: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:52:46.372: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:37:43.257: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:40:03.971: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:29:56.799: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:41:18.216: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:43:29.868: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #29050

Failed: [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:39:12.720: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:26:51.102: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:249
Apr 19 11:32:43.639: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:50:12.881: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 19 11:36:53.483: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-63881bea-jcwn"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #38308

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7662/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420fc2820>: {
        s: "pod \"pvc-tester-tc8jg\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-tc8jg" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420e9dc90>: {
        s: "pod \"pvc-tester-80qvk\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-80qvk" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420f85a00>: {
        s: "pod \"pvc-tester-sx3rb\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-sx3rb" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51
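
All three GCEPD failures are the pvc-tester pod never reaching Running. A small triage sketch, usable only while the run's cluster is still up, that polls the pod and then dumps the namespace events naming it once the wait times out; the namespace and pod name below are placeholders for whichever pvc-tester pod is stuck:

    // podwait.go: wait for a pod to reach Running, then print the events
    // that mention it (usually the scheduling or attach/detach failure).
    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ns, name := "e2e-tests-pv", "pvc-tester-xxxxx" // placeholders for the stuck pod

        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && p.Status.Phase == v1.PodRunning {
                fmt.Println("pod is Running")
                return
            }
            time.Sleep(5 * time.Second)
        }
        // Timed out: show the events involving this pod.
        evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=" + name,
        })
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("%s %s %s\n", e.LastTimestamp.Time.Format(time.RFC3339), e.Reason, e.Message)
        }
    }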

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7663/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc421720f30>: {
        s: "pod \"pvc-tester-h7fxk\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-h7fxk" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc42185cb80>: {
        s: "pod \"pvc-tester-wfn2x\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-wfn2x" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc420e7cc10>: {
        s: "pod \"pvc-tester-3jhk9\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-3jhk9" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7664/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc421362760>: {
        s: "pod \"pvc-tester-wnq2d\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-wnq2d" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:105
Expected error:
    <*errors.errorString | 0xc4215f9a80>: {
        s: "pod \"pvc-tester-8gz19\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-8gz19" is not Running: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:51

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.errorString | 0xc42154caf0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:422

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7765/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc42101b4c0>: {
        s: "failed to get logs from pod-e40d907b-2693-11e7-9ba8-0242ac110009 for test-container: unknown (get pods pod-e40d907b-2693-11e7-9ba8-0242ac110009)",
    }
    failed to get logs from pod-e40d907b-2693-11e7-9ba8-0242ac110009 for test-container: unknown (get pods pod-e40d907b-2693-11e7-9ba8-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220
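
The "failed to get logs from ... unknown (get pods ...)" errors all come from the framework's log fetch, which hides the real apiserver response. The same request can be replayed directly to surface it; a client-go sketch, where the namespace is a placeholder (the pod and container names are taken from the failure above):

    // podlogs.go: fetch a container's logs the same way the framework does,
    // to see the status the test only reports as "unknown".
    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        raw, err := cs.CoreV1().
            Pods("e2e-tests-emptydir-xxxxx"). // placeholder namespace
            GetLogs("pod-e40d907b-2693-11e7-9ba8-0242ac110009",
                &v1.PodLogOptions{Container: "test-container"}).
            Do(context.TODO()).Raw()
        if err != nil {
            panic(err) // the full apiserver error (e.g. Forbidden) shows up here
        }
        fmt.Println(string(raw))
    }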

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Apr 21 06:13:12.945: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.3.38:8080/dial?request=hostName&protocol=http&host=10.72.3.24&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:217

Issues about this test specifically: #32375
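
The probe above is just an HTTP GET against the webserver pod's /dial endpoint, so it can be replayed by hand from anywhere with pod-network reachability (for example a debug pod on one of the nodes). A sketch using the two pod IPs from this run, which are only placeholders for any rerun:

    // dialcheck.go: replay the netserver /dial probe from the failure above.
    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "time"
    )

    func main() {
        proxyPod := "10.72.3.38" // webserver pod IP (placeholder)
        target := "10.72.3.24"   // netserver pod IP (placeholder)
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
            proxyPod, target)

        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        // A healthy path returns a JSON response naming netserver-0.
        fmt.Printf("%s -> %s\n", url, string(body))
    }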

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:418
Expected error:
    <*errors.errorString | 0xc420ba7030>: {
        s: "failed to get logs from pod-projected-configmaps-7c73715e-2693-11e7-8b57-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-7c73715e-2693-11e7-8b57-0242ac110009)",
    }
    failed to get logs from pod-projected-configmaps-7c73715e-2693-11e7-8b57-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-7c73715e-2693-11e7-8b57-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Apr 21 06:13:09.238: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.72.4.21 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:181
Expected error:
    <*errors.errorString | 0xc420406310>: {
        s: "failed to get logs from downwardapi-volume-f8d77ba2-2693-11e7-9ba8-0242ac110009 for client-container: unknown (get pods downwardapi-volume-f8d77ba2-2693-11e7-9ba8-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-f8d77ba2-2693-11e7-9ba8-0242ac110009 for client-container: unknown (get pods downwardapi-volume-f8d77ba2-2693-11e7-9ba8-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:172
Expected error:
    <*errors.errorString | 0xc4211c7080>: {
        s: "failed to get logs from downwardapi-volume-ad28da83-2693-11e7-8621-0242ac110009 for client-container: unknown (get pods downwardapi-volume-ad28da83-2693-11e7-8621-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-ad28da83-2693-11e7-8621-0242ac110009 for client-container: unknown (get pods downwardapi-volume-ad28da83-2693-11e7-8621-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc4213d01f0>: {
        s: "failed to get logs from pod-063ce90d-2694-11e7-8b57-0242ac110009 for test-container: unknown (get pods pod-063ce90d-2694-11e7-8b57-0242ac110009)",
    }
    failed to get logs from pod-063ce90d-2694-11e7-8b57-0242ac110009 for test-container: unknown (get pods pod-063ce90d-2694-11e7-8b57-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1401
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.144 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-qlxk6 run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4216a5c00  If you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\nError from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-4380f)\n [] <nil> 0xc4213d56b0 exit status 1 <nil> <nil> true [0xc4206e6a20 0xc4206e6a48 0xc4206e6a58] [0xc4206e6a20 0xc4206e6a48 0xc4206e6a58] [0xc4206e6a28 0xc4206e6a40 0xc4206e6a50] [0x127b510 0x127b610 0x127b610] 0xc421211080 <nil>}:\nCommand stdout:\n\nstderr:\nIf you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\nError from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-4380f)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.144 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-qlxk6 run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc4216a5c00  If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-4380f)
     [] <nil> 0xc4213d56b0 exit status 1 <nil> <nil> true [0xc4206e6a20 0xc4206e6a48 0xc4206e6a58] [0xc4206e6a20 0xc4206e6a48 0xc4206e6a58] [0xc4206e6a28 0xc4206e6a40 0xc4206e6a50] [0x127b510 0x127b610 0x127b610] 0xc421211080 <nil>}:
    Command stdout:
    
    stderr:
    If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-4380f)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2120

Issues about this test specifically: #26728 #28266 #30340 #32405
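
The Forbidden (user=kube-apiserver, verb=create/get, resource=nodes, subresource=proxy) lines mean the apiserver's kubelet-client identity is being rejected by kubelet authorization. If this cluster has kubelet webhook authorization enabled (an assumption, not something confirmed from these logs), the usual remediation is to bind that user to the built-in system:kubelet-api-admin ClusterRole; a client-go sketch of that binding:

    // bindkubeletadmin.go: grant the user "kube-apiserver" the built-in
    // system:kubelet-api-admin ClusterRole. Sketch only, under the assumption
    // that kubelet authz is webhook-based and that the apiserver authenticates
    // to kubelets as the user "kube-apiserver".
    package main

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "apiserver-kubelet-api-admin"},
            Subjects: []rbacv1.Subject{{
                Kind:     rbacv1.UserKind,
                APIGroup: rbacv1.GroupName,
                Name:     "kube-apiserver",
            }},
            RoleRef: rbacv1.RoleRef{
                APIGroup: rbacv1.GroupName,
                Kind:     "ClusterRole",
                Name:     "system:kubelet-api-admin",
            },
        }
        if _, err := cs.RbacV1().ClusterRoleBindings().Create(
            context.TODO(), crb, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }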

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Projected should be consumable in multiple volumes in a pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:171
Expected error:
    <*errors.errorString | 0xc42112ce80>: {
        s: "failed to get logs from pod-projected-secrets-ec8f6303-2693-11e7-ae7c-0242ac110009 for secret-volume-test: unknown (get pods pod-projected-secrets-ec8f6303-2693-11e7-ae7c-0242ac110009)",
    }
    failed to get logs from pod-projected-secrets-ec8f6303-2693-11e7-ae7c-0242ac110009 for secret-volume-test: unknown (get pods pod-projected-secrets-ec8f6303-2693-11e7-ae7c-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"unknown (get pods pod-configmaps-7cf6d9a6-2693-11e7-94ac-0242ac110009)", Reason:"Forbidden", Details:(*v1.StatusDetails)(0xc420b48d20), Code:403}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:306

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:163
Expected error:
    <*errors.errorString | 0xc420806b90>: {
        s: "failed to get logs from downwardapi-volume-f6693005-2693-11e7-ae7c-0242ac110009 for client-container: unknown (get pods downwardapi-volume-f6693005-2693-11e7-ae7c-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-f6693005-2693-11e7-ae7c-0242ac110009 for client-container: unknown (get pods downwardapi-volume-f6693005-2693-11e7-ae7c-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:284
Apr 21 06:17:49.911: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:710
Timed out after 300.001s.
Expected
    <string>: Error reading file /etc/projected-configmap-volumes/create/data-1: open /etc/projected-configmap-volumes/create/data-1: no such file or directory, retrying
    (the same "Error reading file ... data-1: no such file or directory, retrying" line repeats 59 more times while the test polls, up to the 300 s timeout)
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:707

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:197
Expected error:
    <*errors.errorString | 0xc420c44e00>: {
        s: "failed to get logs from downwardapi-volume-7bf3c662-2693-11e7-b921-0242ac110009 for client-container: unknown (get pods downwardapi-volume-7bf3c662-2693-11e7-b921-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-7bf3c662-2693-11e7-b921-0242ac110009 for client-container: unknown (get pods downwardapi-volume-7bf3c662-2693-11e7-b921-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:493
Apr 21 06:08:10.179: Failed to open websocket to wss://35.188.119.144:443/api/v1/namespaces/e2e-tests-port-forwarding-kw14p/pods/pfpod/portforward?ports=80: websocket.Dial wss://35.188.119.144:443/api/v1/namespaces/e2e-tests-port-forwarding-kw14p/pods/pfpod/portforward?ports=80: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:407

Issues about this test specifically: #40977

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:401
Expected error:
    <*errors.errorString | 0xc420ae25a0>: {
        s: "failed to get logs from pod-projected-configmaps-7bf78840-2693-11e7-b938-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-7bf78840-2693-11e7-b938-0242ac110009)",
    }
    failed to get logs from pod-projected-configmaps-7bf78840-2693-11e7-b938-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-7bf78840-2693-11e7-b938-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:509
Timed out after 300.001s.
Expected
    <string>: content of file "/etc/projected-configmap-volume/data-1": value-1
    (identical line repeated 59 more times; the file still contained value-1 when the 300 s poll timed out)
    
to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:508

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:190
Expected error:
    <*errors.errorString | 0xc42113c250>: {
        s: "failed to get logs from downwardapi-volume-7be30644-2693-11e7-aad6-0242ac110009 for client-container: unknown (get pods downwardapi-volume-7be30644-2693-11e7-aad6-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-7be30644-2693-11e7-aad6-0242ac110009 for client-container: unknown (get pods downwardapi-volume-7be30644-2693-11e7-aad6-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:795
Expected error:
    <*errors.errorString | 0xc4207b6e80>: {
        s: "failed to get logs from pod-projected-configmaps-f3e45369-2693-11e7-9ba8-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-f3e45369-2693-11e7-9ba8-0242ac110009)",
    }
    failed to get logs from pod-projected-configmaps-f3e45369-2693-11e7-9ba8-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-f3e45369-2693-11e7-9ba8-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:130
Apr 21 06:17:34.187: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Failed: [k8s.io] PrivilegedPod should enable privileged commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:56
cmd [ip link add dummy1 type dummy], stdout "", stderr ""
Expected error:
    <*errors.errorString | 0xc420e89df0>: {
        s: "unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)",
    }
    unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:68

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:67
Expected error:
    <*errors.StatusError | 0xc42084e400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35422
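
Most of the distinct failures in this run reduce to one symptom: requests made as user=kube-apiserver against the nodes/log, nodes/proxy and nodes/stats subresources come back 403 Forbidden, so anything that needs the apiserver-to-kubelet path (logs, exec, attach, port-forward, proxy) fails. A minimal way to check that authorization directly is a SubjectAccessReview; the sketch below assumes the job's kubeconfig at /workspace/.kube/config and a recent client-go (older releases call Create without a context), with the user and subresource taken from the error messages above.

```go
// Sketch: ask the authorizer whether the kube-apiserver identity may read
// the nodes/log subresource, mirroring the Forbidden errors in this run.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "kube-apiserver", // identity named in the Forbidden messages
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:        "get",
				Resource:    "nodes",
				Subresource: "log",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```

If this comes back allowed, the denial is more likely coming from the kubelet's own webhook authorizer than from the apiserver-side RBAC rules.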

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:69
Expected error:
    <*errors.errorString | 0xc420b8fef0>: {
        s: "failed to get logs from downwardapi-volume-eef925f6-2693-11e7-985f-0242ac110009 for client-container: unknown (get pods downwardapi-volume-eef925f6-2693-11e7-985f-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-eef925f6-2693-11e7-985f-0242ac110009 for client-container: unknown (get pods downwardapi-volume-eef925f6-2693-11e7-985f-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:409
Expected error:
    <*errors.errorString | 0xc420df84a0>: {
        s: "failed to get logs from pod-projected-configmaps-8c12480f-2693-11e7-aad6-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-8c12480f-2693-11e7-aad6-0242ac110009)",
    }
    failed to get logs from pod-projected-configmaps-8c12480f-2693-11e7-aad6-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-8c12480f-2693-11e7-aad6-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:55
Expected error:
    <*errors.errorString | 0xc420d01f90>: {
        s: "failed to get logs from client-containers-7c593bcc-2693-11e7-9910-0242ac110009 for test-container: unknown (get pods client-containers-7c593bcc-2693-11e7-9910-0242ac110009)",
    }
    failed to get logs from client-containers-7c593bcc-2693-11e7-9910-0242ac110009 for test-container: unknown (get pods client-containers-7c593bcc-2693-11e7-9910-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Issues about this test specifically: #29994
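
The many "failed to get logs from <pod> for <container>: unknown (get pods <pod>)" errors are all raised by the framework's log fetch, which goes through the apiserver's pods/log subresource and from there to the kubelet. A sketch of that request path, assuming a configured clientset from a recent client-go; the pod, container and namespace names are placeholders, not values from this run:

```go
// Sketch: fetch container logs the way the e2e framework does, via pods/log.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func podLogs(cs kubernetes.Interface, ns, pod, container string) (string, error) {
	// GET /api/v1/namespaces/{ns}/pods/{pod}/log?container={container}
	raw, err := cs.CoreV1().
		Pods(ns).
		GetLogs(pod, &corev1.PodLogOptions{Container: container}).
		Do(context.TODO()).
		Raw()
	if err != nil {
		// In this run, this call is what surfaced the "failed to get logs ... unknown" errors above.
		return "", fmt.Errorf("failed to get logs from %s for %s: %v", pod, container, err)
	}
	return string(raw), nil
}
```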

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:61
Expected error:
    <*errors.errorString | 0xc42180d360>: {
        s: "failed to get logs from pod-projected-secrets-dedbe409-2693-11e7-9ba8-0242ac110009 for projected-secret-volume-test: unknown (get pods pod-projected-secrets-dedbe409-2693-11e7-9ba8-0242ac110009)",
    }
    failed to get logs from pod-projected-secrets-dedbe409-2693-11e7-9ba8-0242ac110009 for projected-secret-volume-test: unknown (get pods pod-projected-secrets-dedbe409-2693-11e7-9ba8-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Apr 21 06:10:12.659: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats) Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=stats)]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371
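
The cadvisor precondition fails the same way on every node: the kubelet rejects user=kube-apiserver for the nodes/stats subresource, which is how the kubelet classifies its /stats endpoints when it authorizes requests. A rough equivalent of that request through the apiserver's node proxy is sketched below (recent client-go assumed; the node name is a placeholder and the summary path is one common stats endpoint, not necessarily the exact one the test hits):

```go
// Sketch: read a node's stats through the API server's node proxy. When the
// kubelet's authorizer denies the apiserver identity, the proxied response is
// the "Forbidden ... subresource=stats" text quoted above.
package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func nodeStats(cs kubernetes.Interface, node string) (string, error) {
	// GET /api/v1/nodes/{node}/proxy/stats/summary
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(node).
		SubResource("proxy").
		Suffix("stats", "summary").
		DoRaw(context.TODO())
	if err != nil {
		return "", fmt.Errorf("stats for node %s: %v", node, err)
	}
	return string(raw), nil
}
```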

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:515
Apr 21 06:08:49.493: Failed to open websocket to wss://35.188.119.144:443/api/v1/namespaces/e2e-tests-port-forwarding-pp646/pods/pfpod/portforward?ports=80: websocket.Dial wss://35.188.119.144:443/api/v1/namespaces/e2e-tests-port-forwarding-pp646/pods/pfpod/portforward?ports=80: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:407

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:204
Expected error:
    <*errors.errorString | 0xc4218cec70>: {
        s: "failed to get logs from downwardapi-volume-ec0c003c-2693-11e7-8b57-0242ac110009 for client-container: unknown (get pods downwardapi-volume-ec0c003c-2693-11e7-8b57-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-ec0c003c-2693-11e7-8b57-0242ac110009 for client-container: unknown (get pods downwardapi-volume-ec0c003c-2693-11e7-8b57-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:101
Expected error:
    <*errors.errorString | 0xc421975830>: {
        s: "failed to get logs from var-expansion-d98a49b2-2693-11e7-af23-0242ac110009 for dapi-container: unknown (get pods var-expansion-d98a49b2-2693-11e7-af23-0242ac110009)",
    }
    failed to get logs from var-expansion-d98a49b2-2693-11e7-af23-0242ac110009 for dapi-container: unknown (get pods var-expansion-d98a49b2-2693-11e7-af23-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
Expected error:
    <*errors.errorString | 0xc42132c1f0>: {
        s: "failed to get logs from downwardapi-volume-888fe1b3-2693-11e7-af23-0242ac110009 for client-container: unknown (get pods downwardapi-volume-888fe1b3-2693-11e7-af23-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-888fe1b3-2693-11e7-af23-0242ac110009 for client-container: unknown (get pods downwardapi-volume-888fe1b3-2693-11e7-af23-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Projected should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:888
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    ... (the two lines above repeat for every poll until the 120s timeout; 60 repetitions in total)
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:887
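
This failure is a poll timeout rather than an error: the test updates the pod's labels and then keeps re-reading the downward-API file /etc/labels until key3="value3" appears, and after 120s it was still serving only the original two keys. The shape of that wait is sketched below; readLabelsFile is a hypothetical helper standing in for however the file contents are retrieved (in the real test they come from the container's output), and wait.Poll is from k8s.io/apimachinery.

```go
// Sketch: poll a labels file until the expected substring shows up or the
// deadline passes. readLabelsFile is a hypothetical helper.
package sketch

import (
	"strings"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForLabel(readLabelsFile func() (string, error), want string, timeout time.Duration) error {
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		out, err := readLabelsFile()
		if err != nil {
			return false, nil // transient read error: keep retrying, as the output above does
		}
		return strings.Contains(out, want), nil
	})
}
```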

Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:40
Expected error:
    <*errors.errorString | 0xc421082e10>: {
        s: "failed to get logs from pod-projected-secrets-e11264f3-2693-11e7-af23-0242ac110009 for projected-secret-volume-test: unknown (get pods pod-projected-secrets-e11264f3-2693-11e7-af23-0242ac110009)",
    }
    failed to get logs from pod-projected-secrets-e11264f3-2693-11e7-af23-0242ac110009 for projected-secret-volume-test: unknown (get pods pod-projected-secrets-e11264f3-2693-11e7-af23-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:581
Apr 21 06:08:36.882: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc420d67510>: {
        s: "failed to get logs from pod-a6ad11bf-2693-11e7-89a8-0242ac110009 for test-container: unknown (get pods pod-a6ad11bf-2693-11e7-89a8-0242ac110009)",
    }
    failed to get logs from pod-a6ad11bf-2693-11e7-89a8-0242ac110009 for test-container: unknown (get pods pod-a6ad11bf-2693-11e7-89a8-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:409
Expected error:
    <*errors.errorString | 0xc420450660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:387

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] ConfigMap should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:422
Expected error:
    <*errors.errorString | 0xc4201a21c0>: {
        s: "failed to get logs from pod-configmaps-8125600c-2693-11e7-aad6-0242ac110009 for env-test: unknown (get pods pod-configmaps-8125600c-2693-11e7-aad6-0242ac110009)",
    }
    failed to get logs from pod-configmaps-8125600c-2693-11e7-aad6-0242ac110009 for env-test: unknown (get pods pod-configmaps-8125600c-2693-11e7-aad6-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:564
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.144 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-4xl2s run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420e3fa20  If you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\nError from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log run-test-sq3kj)\n [] <nil> 0xc42085e360 exit status 1 <nil> <nil> true [0xc421276878 0xc4212768a0 0xc4212768b0] [0xc421276878 0xc4212768a0 0xc4212768b0] [0xc421276880 0xc421276898 0xc4212768a8] [0x127b510 0x127b610 0x127b610] 0xc42075ed80 <nil>}:\nCommand stdout:\n\nstderr:\nIf you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\nError from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log run-test-sq3kj)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.144 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-4xl2s run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420e3fa20  If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log run-test-sq3kj)
     [] <nil> 0xc42085e360 exit status 1 <nil> <nil> true [0xc421276878 0xc4212768a0 0xc4212768b0] [0xc421276878 0xc4212768a0 0xc4212768b0] [0xc421276880 0xc421276898 0xc4212768a8] [0x127b510 0x127b610 0x127b610] 0xc42075ed80 <nil>}:
    Command stdout:
    
    stderr:
    If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log run-test-sq3kj)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2120

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:270
Expected error:
    <*errors.errorString | 0xc420217a80>: {
        s: "internal: claims map is missing pvc \"e2e-tests-pv-6dw0b/datadir-ss-0\"",
    }
    internal: claims map is missing pvc "e2e-tests-pv-6dw0b/datadir-ss-0"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:268

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:53
Expected error:
    <*errors.errorString | 0xc4201dea30>: {
        s: "failed to get logs from pod-configmaps-e613ffa8-2693-11e7-af23-0242ac110009 for configmap-volume-test: unknown (get pods pod-configmaps-e613ffa8-2693-11e7-af23-0242ac110009)",
    }
    failed to get logs from pod-configmaps-e613ffa8-2693-11e7-af23-0242ac110009 for configmap-volume-test: unknown (get pods pod-configmaps-e613ffa8-2693-11e7-af23-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:423
Apr 21 06:20:31.223: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:190
Expected error:
    <*errors.errorString | 0xc421263620>: {
        s: "PVC \"pvc-zzvpz\" did not become Bound: PersistentVolumeClaim pvc-zzvpz not in phase Bound within 5m0s",
    }
    PVC "pvc-zzvpz" did not become Bound: PersistentVolumeClaim pvc-zzvpz not in phase Bound within 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:39
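
Of the two NFS persistent-volume failures, one is an internal claims-map bookkeeping error and the other is a plain bind-wait timeout: PVC pvc-zzvpz never reached phase Bound within 5m. The wait that gave up looks roughly like the sketch below (claim name and namespace are placeholders; assumes a configured clientset from a recent client-go):

```go
// Sketch: poll a PersistentVolumeClaim until it reports phase Bound, the
// condition the test gave up on after 5 minutes.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(5*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}
```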

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc420ba2ba0>: {
        s: "failed to get logs from pod-secrets-9170784e-2693-11e7-b997-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-9170784e-2693-11e7-b997-0242ac110009)",
    }
    failed to get logs from pod-secrets-9170784e-2693-11e7-b997-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-9170784e-2693-11e7-b997-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Downward API should provide pod and host IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:94
Expected error:
    <*errors.errorString | 0xc421130120>: {
        s: "failed to get logs from downward-api-96e197b7-2693-11e7-b306-0242ac110009 for dapi-container: unknown (get pods downward-api-96e197b7-2693-11e7-b306-0242ac110009)",
    }
    failed to get logs from downward-api-96e197b7-2693-11e7-b306-0242ac110009 for dapi-container: unknown (get pods downward-api-96e197b7-2693-11e7-b306-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc4202a5070>: {
        s: "failed to get logs from pod-0b26cff1-2694-11e7-8b57-0242ac110009 for test-container: unknown (get pods pod-0b26cff1-2694-11e7-8b57-0242ac110009)",
    }
    failed to get logs from pod-0b26cff1-2694-11e7-8b57-0242ac110009 for test-container: unknown (get pods pod-0b26cff1-2694-11e7-8b57-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Secrets should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:425
Expected error:
    <*errors.errorString | 0xc4202a4fa0>: {
        s: "failed to get logs from pod-configmaps-db50a611-2693-11e7-b997-0242ac110009 for env-test: unknown (get pods pod-configmaps-db50a611-2693-11e7-b997-0242ac110009)",
    }
    failed to get logs from pod-configmaps-db50a611-2693-11e7-b997-0242ac110009 for env-test: unknown (get pods pod-configmaps-db50a611-2693-11e7-b997-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:243
Expected error:
    <*errors.errorString | 0xc42032f120>: {
        s: "failed to get logs from pod-service-account-9cd2847b-2693-11e7-8b57-0242ac110009-85l7q for token-test: unknown (get pods pod-service-account-9cd2847b-2693-11e7-8b57-0242ac110009-85l7q)",
    }
    failed to get logs from pod-service-account-9cd2847b-2693-11e7-8b57-0242ac110009-85l7q for token-test: unknown (get pods pod-service-account-9cd2847b-2693-11e7-8b57-0242ac110009-85l7q)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Issues about this test specifically: #37526

Failed: [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:392
Expected error:
    <*errors.errorString | 0xc4212ee810>: {
        s: "failed to get logs from pod-projected-configmaps-8a2a8aff-2693-11e7-b306-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-8a2a8aff-2693-11e7-b306-0242ac110009)",
    }
    failed to get logs from pod-projected-configmaps-8a2a8aff-2693-11e7-b306-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-8a2a8aff-2693-11e7-b306-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:479
Apr 21 06:12:28.675: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:154
Expected error:
    <*errors.errorString | 0xc4203fb760>: {
        s: "failed to get logs from pod-secrets-37eb5239-2694-11e7-94ac-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-37eb5239-2694-11e7-94ac-0242ac110009)",
    }
    failed to get logs from pod-secrets-37eb5239-2694-11e7-94ac-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-37eb5239-2694-11e7-94ac-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:134
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.144 --kubeconfig=/workspace/.kube/config exec nfs-client --namespace=e2e-tests-volume-2grrk -- cat /opt/0/index.html] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n [] <nil> 0xc420cfa2d0 exit status 1 <nil> <nil> true [0xc420449410 0xc420449448 0xc420449488] [0xc420449410 0xc420449448 0xc420449488] [0xc420449430 0xc420449478] [0x127b610 0x127b610] 0xc421a31200 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.119.144 --kubeconfig=/workspace/.kube/config exec nfs-client --namespace=e2e-tests-volume-2grrk -- cat /opt/0/index.html] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
     [] <nil> 0xc420cfa2d0 exit status 1 <nil> <nil> true [0xc420449410 0xc420449448 0xc420449488] [0xc420449410 0xc420449448 0xc420449488] [0xc420449430 0xc420449478] [0x127b610 0x127b610] 0xc421a31200 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2120

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:66
Expected error:
    <*errors.errorString | 0xc420f294b0>: {
        s: "failed to get logs from pod-configmaps-c2b3c79a-2693-11e7-b68a-0242ac110009 for configmap-volume-test: unknown (get pods pod-configmaps-c2b3c79a-2693-11e7-b68a-0242ac110009)",
    }
    failed to get logs from pod-configmaps-c2b3c79a-2693-11e7-b68a-0242ac110009 for configmap-volume-test: unknown (get pods pod-configmaps-c2b3c79a-2693-11e7-b68a-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Projected should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:832
Expected error:
    <*errors.errorString | 0xc420d1e630>: {
        s: "failed to get logs from downwardapi-volume-cb2cb1f7-2693-11e7-af23-0242ac110009 for client-container: unknown (get pods downwardapi-volume-cb2cb1f7-2693-11e7-af23-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-cb2cb1f7-2693-11e7-af23-0242ac110009 for client-container: unknown (get pods downwardapi-volume-cb2cb1f7-2693-11e7-af23-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] HostPath should support r/w [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc420ae7860>: {
        s: "failed to get logs from pod-host-path-test for test-container-2: unknown (get pods pod-host-path-test)",
    }
    failed to get logs from pod-host-path-test for test-container-2: unknown (get pods pod-host-path-test)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:926
Expected error:
    <*errors.errorString | 0xc420d0d7d0>: {
        s: "failed to get logs from downwardapi-volume-964e907f-2693-11e7-b997-0242ac110009 for client-container: unknown (get pods downwardapi-volume-964e907f-2693-11e7-b997-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-964e907f-2693-11e7-b997-0242ac110009 for client-container: unknown (get pods downwardapi-volume-964e907f-2693-11e7-b997-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2220

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:214
Apr 21 06:20:23.685: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7780/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:204
Expected error:
    <*errors.errorString | 0xc42156dc80>: {
        s: "failed to get logs from downwardapi-volume-d9f4012a-26c9-11e7-932b-0242ac110005 for client-container: unknown (get pods downwardapi-volume-d9f4012a-26c9-11e7-932b-0242ac110005)",
    }
    failed to get logs from downwardapi-volume-d9f4012a-26c9-11e7-932b-0242ac110005 for client-container: unknown (get pods downwardapi-volume-d9f4012a-26c9-11e7-932b-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster-logging/sd.go:63
Fluentd deployed incorrectly
Expected error:
    <*errors.errorString | 0xc420e92120>: {
        s: "node gke-bootstrap-e2e-default-pool-1509ddcf-4mnx doesn't have fluentd instance",
    }
    node gke-bootstrap-e2e-default-pool-1509ddcf-4mnx doesn't have fluentd instance
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster-logging/sd.go:46

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241 #43425
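
This one fails before any log ingestion is checked: the test's precondition is that every node runs a fluentd pod, and gke-bootstrap-e2e-default-pool-1509ddcf-4mnx had none. A per-node coverage check along those lines is sketched below; the k8s-app=fluentd-gcp label selector is an assumption about how the logging DaemonSet labels its pods on GKE.

```go
// Sketch: report nodes that have no fluentd pod scheduled on them.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodesMissingFluentd(cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=fluentd-gcp"}) // assumed label
	if err != nil {
		return nil, err
	}
	covered := map[string]bool{}
	for _, p := range pods.Items {
		covered[p.Spec.NodeName] = true
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var missing []string
	for _, n := range nodes.Items {
		if !covered[n.Name] {
			missing = append(missing, fmt.Sprintf("node %s doesn't have fluentd instance", n.Name))
		}
	}
	return missing, nil
}
```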

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:156
Timed out after 300.001s.
Expected
    <string>: content of file "/etc/configmap-volume/data-1": value-1
    ... (same line repeated for every poll until the 300s timeout; 60 occurrences in total)
    
to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:155
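
Here the update was made but the kubelet apparently never re-synced the mounted ConfigMap: the test rewrites data-1 from value-1 to value-2 and then polls the projected file for five minutes, and only value-1 ever comes back. The update half of that flow looks roughly like the sketch below (names are placeholders; assumes a configured clientset from a recent client-go):

```go
// Sketch: change the ConfigMap key that the volume projects, then rely on the
// kubelet's periodic sync to rewrite /etc/configmap-volume/data-1.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func bumpConfigMap(cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // the substring the test then waits for
	_, err = cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}
```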

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:55
Expected error:
    <*errors.errorString | 0xc42142d4b0>: {
        s: "failed to get logs from client-containers-e7da9b9e-26c9-11e7-bf5b-0242ac110005 for test-container: unknown (get pods client-containers-e7da9b9e-26c9-11e7-bf5b-0242ac110005)",
    }
    failed to get logs from client-containers-e7da9b9e-26c9-11e7-bf5b-0242ac110005 for test-container: unknown (get pods client-containers-e7da9b9e-26c9-11e7-bf5b-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #29994

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc420fcfa10>: {
        s: "failed to get logs from pod-d506c495-26c9-11e7-932b-0242ac110005 for test-container: unknown (get pods pod-d506c495-26c9-11e7-932b-0242ac110005)",
    }
    failed to get logs from pod-d506c495-26c9-11e7-932b-0242ac110005 for test-container: unknown (get pods pod-d506c495-26c9-11e7-932b-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1035
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.99.157 --kubeconfig=/workspace/.kube/config logs redis-master-v41ln redis-master --namespace=e2e-tests-kubectl-c67cc] []  <nil>  Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log redis-master-v41ln)\n [] <nil> 0xc421034840 exit status 1 <nil> <nil> true [0xc4207103c8 0xc4207103e0 0xc4207103f8] [0xc4207103c8 0xc4207103e0 0xc4207103f8] [0xc4207103d8 0xc4207103f0] [0x127b570 0x127b570] 0xc420cea780 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log redis-master-v41ln)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.99.157 --kubeconfig=/workspace/.kube/config logs redis-master-v41ln redis-master --namespace=e2e-tests-kubectl-c67cc] []  <nil>  Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log redis-master-v41ln)
     [] <nil> 0xc421034840 exit status 1 <nil> <nil> true [0xc4207103c8 0xc4207103e0 0xc4207103f8] [0xc4207103c8 0xc4207103e0 0xc4207103f8] [0xc4207103d8 0xc4207103f0] [0x127b570 0x127b570] 0xc420cea780 <nil>}:
    Command stdout:
    
    stderr:
    Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log redis-master-v41ln)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2119

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected error:
    <*errors.StatusError | 0xc4213edc00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #36242

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc4207852e0>: {
        s: "failed to get logs from pod-c5b7baef-26c9-11e7-94f6-0242ac110005 for test-container: unknown (get pods pod-c5b7baef-26c9-11e7-94f6-0242ac110005)",
    }
    failed to get logs from pod-c5b7baef-26c9-11e7-94f6-0242ac110005 for test-container: unknown (get pods pod-c5b7baef-26c9-11e7-94f6-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:512
Apr 21 12:37:27.836: Failed to open websocket to wss://35.188.99.157:443/api/v1/namespaces/e2e-tests-pods-b1zgt/pods/pod-exec-websocket-f2321e8e-26c9-11e7-bf5b-0242ac110005/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: websocket.Dial wss://35.188.99.157:443/api/v1/namespaces/e2e-tests-pods-b1zgt/pods/pod-exec-websocket-f2321e8e-26c9-11e7-bf5b-0242ac110005/exec?command=cat&command=%2Fetc%2Fresolv.conf&container=main&stderr=1&stdout=1: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:482

Issues about this test specifically: #38308
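
The websocket-based exec and log tests report "bad status", which is the dial error returned when the HTTP upgrade handshake is answered with something other than 101, consistent with the 403s elsewhere in this run. A sketch of how such a dial is made with golang.org/x/net/websocket follows; the URL is passed in (the failure message above shows the real one) and the bearer-token handling is an assumption for illustration.

```go
// Sketch: open a websocket to a pod exec/log URL with a bearer token. A non-101
// handshake response surfaces as the "bad status" dial error quoted above.
package sketch

import (
	"crypto/tls"
	"fmt"

	"golang.org/x/net/websocket"
)

func dialExec(wsURL, token string) error {
	cfg, err := websocket.NewConfig(wsURL, "http://localhost")
	if err != nil {
		return err
	}
	cfg.Header.Add("Authorization", "Bearer "+token)
	cfg.TlsConfig = &tls.Config{InsecureSkipVerify: true} // skip cert verification for this sketch only
	conn, err := websocket.DialConfig(cfg)
	if err != nil {
		return fmt.Errorf("websocket.Dial %s: %v", wsURL, err)
	}
	defer conn.Close()
	return nil
}
```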

Failed: [k8s.io] Secrets should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:425
Expected error:
    <*errors.errorString | 0xc420bba370>: {
        s: "failed to get logs from pod-configmaps-ec5e6017-26c9-11e7-aeff-0242ac110005 for env-test: unknown (get pods pod-configmaps-ec5e6017-26c9-11e7-aeff-0242ac110005)",
    }
    failed to get logs from pod-configmaps-ec5e6017-26c9-11e7-aeff-0242ac110005 for env-test: unknown (get pods pod-configmaps-ec5e6017-26c9-11e7-aeff-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:967
Expected error:
    <*errors.errorString | 0xc421053f50>: {
        s: "failed to get logs from downwardapi-volume-c5cbdff7-26c9-11e7-ba50-0242ac110005 for client-container: unknown (get pods downwardapi-volume-c5cbdff7-26c9-11e7-ba50-0242ac110005)",
    }
    failed to get logs from downwardapi-volume-c5cbdff7-26c9-11e7-ba50-0242ac110005 for client-container: unknown (get pods downwardapi-volume-c5cbdff7-26c9-11e7-ba50-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc420d4f7d0>: {
        s: "failed to get logs from pod-c608bb72-26c9-11e7-92c3-0242ac110005 for test-container: unknown (get pods pod-c608bb72-26c9-11e7-92c3-0242ac110005)",
    }
    failed to get logs from pod-c608bb72-26c9-11e7-92c3-0242ac110005 for test-container: unknown (get pods pod-c608bb72-26c9-11e7-92c3-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:569
Apr 21 12:36:27.225: Failed to open websocket to wss://35.188.99.157:443/api/v1/namespaces/e2e-tests-pods-n7wb4/pods/pod-logs-websocket-ce59bd4a-26c9-11e7-ba50-0242ac110005/log?container=main: websocket.Dial wss://35.188.99.157:443/api/v1/namespaces/e2e-tests-pods-n7wb4/pods/pod-logs-websocket-ce59bd4a-26c9-11e7-ba50-0242ac110005/log?container=main: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:549

Issues about this test specifically: #30263

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:53
Expected error:
    <*errors.errorString | 0xc4202839d0>: {
        s: "failed to get logs from pod-configmaps-df959ffc-26c9-11e7-bf5b-0242ac110005 for configmap-volume-test: unknown (get pods pod-configmaps-df959ffc-26c9-11e7-bf5b-0242ac110005)",
    }
    failed to get logs from pod-configmaps-df959ffc-26c9-11e7-bf5b-0242ac110005 for configmap-volume-test: unknown (get pods pod-configmaps-df959ffc-26c9-11e7-bf5b-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:62
Expected error:
    <*errors.StatusError | 0xc420267300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=log)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #32936

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:510
Apr 21 12:37:21.054: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189

Failed: [k8s.io] Projected should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:953
Expected error:
    <*errors.errorString | 0xc421175210>: {
        s: "failed to get logs from downwardapi-volume-c615d3a7-26c9-11e7-932b-0242ac110005 for client-container: unknown (get pods downwardapi-volume-c615d3a7-26c9-11e7-932b-0242ac110005)",
    }
    failed to get logs from downwardapi-volume-c615d3a7-26c9-11e7-932b-0242ac110005 for client-container: unknown (get pods downwardapi-volume-c615d3a7-26c9-11e7-932b-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:710
Timed out after 300.001s.
Expected
    <string>: Error reading file /etc/projected-configmap-volumes/create/data-1: open /etc/projected-configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/create/data-1: open /etc/projected-configmap-volumes/create/data-1: no such file or directory, retrying
    (the same "no such file or directory, retrying" line repeats for every poll attempt until the 300s timeout expires)
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:707
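
The retrying lines above come from the pod's test container, which simply re-reads the projected file until the expected content appears or the 300s budget runs out; here the ConfigMap data never showed up in the volume. Below is a minimal, self-contained Go sketch of that kind of poll loop. The path, expected value, and timeout are taken from the log; everything else (function name, interval) is illustrative and is not the actual e2e mount-test binary.

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// pollFileForSubstring re-reads path until its contents contain want,
// logging a "retrying" line on every failed attempt, and gives up after timeout.
func pollFileForSubstring(path, want string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Printf("Error reading file %s: %v, retrying\n", path, err)
		} else if strings.Contains(string(data), want) {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %s waiting for %q in %s", timeout, want, path)
}

func main() {
	// Values mirror the failing test: the projected ConfigMap volume should
	// eventually expose data-1 with content "value-1".
	path := "/etc/projected-configmap-volumes/create/data-1"
	if err := pollFileForSubstring(path, "value-1", 300*time.Second, 5*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("content of file %q: value-1\n", path)
}
```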

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:510
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.99.157 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-1wp4b exec nginx -- /bin/sh -c exit 0] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n [] <nil> 0xc421434de0 exit status 1 <nil> <nil> true [0xc4213166a8 0xc4213166c0 0xc4213166d8] [0xc4213166a8 0xc4213166c0 0xc4213166d8] [0xc4213166b8 0xc4213166d0] [0x127b570 0x127b570] 0xc421115aa0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.99.157 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-1wp4b exec nginx -- /bin/sh -c exit 0] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
     [] <nil> 0xc421434de0 exit status 1 <nil> <nil> true [0xc4213166a8 0xc4213166c0 0xc4213166d8] [0xc4213166a8 0xc4213166c0 0xc4213166d8] [0xc4213166b8 0xc4213166d0] [0x127b570 0x127b570] 0xc421115aa0 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:474

Issues about this test specifically: #31151 #35586
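
The Forbidden message means the exec stream never reached the kubelet: the request was rejected because user kube-apiserver was not allowed to create on the nodes/proxy subresource, so kubectl failed before the pod command could ever report its exit code. As a rough illustration (not the framework's real helper), the sketch below drives kubectl exec from Go with only the standard library and distinguishes "remote command exited non-zero" from "kubectl itself failed"; the namespace and pod name are placeholders copied from the log.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runInPod shells out to kubectl exec and returns the remote command's exit code.
// A non-zero local exit can also mean the exec connection itself failed, as in
// the "unable to upgrade connection: Forbidden" case above.
func runInPod(namespace, pod, shellCmd string) (int, string, error) {
	cmd := exec.Command("kubectl",
		"--namespace="+namespace,
		"exec", pod, "--", "/bin/sh", "-c", shellCmd)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return 0, string(out), nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode(), string(out), nil
	}
	return -1, string(out), err // kubectl could not be started at all
}

func main() {
	code, out, err := runInPod("e2e-tests-kubectl-1wp4b", "nginx", "exit 0")
	fmt.Printf("exit code=%d err=%v\noutput:\n%s", code, err, out)
}
```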

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1401
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.99.157 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-drgmf run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420a529e0  If you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\nError from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-6kcbx)\n [] <nil> 0xc420ceab70 exit status 1 <nil> <nil> true [0xc420037a70 0xc420037ab8 0xc420037ad8] [0xc420037a70 0xc420037ab8 0xc420037ad8] [0xc420037a90 0xc420037ab0 0xc420037ac8] [0x127b470 0x127b570 0x127b570] 0xc421284720 <nil>}:\nCommand stdout:\n\nstderr:\nIf you don't see a command prompt, try pressing enter.\nError attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\nError from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-6kcbx)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.99.157 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-drgmf run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc420a529e0  If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-6kcbx)
     [] <nil> 0xc420ceab70 exit status 1 <nil> <nil> true [0xc420037a70 0xc420037ab8 0xc420037ad8] [0xc420037a70 0xc420037ab8 0xc420037ad8] [0xc420037a90 0xc420037ab0 0xc420037ac8] [0x127b470 0x127b570 0x127b570] 0xc421284720 <nil>}:
    Command stdout:
    
    stderr:
    If you don't see a command prompt, try pressing enter.
    Error attaching, falling back to logs: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log e2e-test-rm-busybox-job-6kcbx)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2119

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:382
Expected error:
    <*errors.errorString | 0xc420e3ebf0>: {
        s: "failed to get logs from pod-configmaps-e6859ef3-26c9-11e7-8017-0242ac110005 for env-test: unknown (get pods pod-configmaps-e6859ef3-26c9-11e7-8017-0242ac110005)",
    }
    failed to get logs from pod-configmaps-e6859ef3-26c9-11e7-8017-0242ac110005 for env-test: unknown (get pods pod-configmaps-e6859ef3-26c9-11e7-8017-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #27079

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:35
Expected error:
    <*errors.errorString | 0xc420c6b700>: {
        s: "failed to get logs from client-containers-e17e61f8-26c9-11e7-9b44-0242ac110005 for test-container: unknown (get pods client-containers-e17e61f8-26c9-11e7-9b44-0242ac110005)",
    }
    failed to get logs from client-containers-e17e61f8-26c9-11e7-9b44-0242ac110005 for test-container: unknown (get pods client-containers-e17e61f8-26c9-11e7-9b44-0242ac110005)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Issues about this test specifically: #34520


@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7856/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Apr 23 00:31:25.901: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.8.52:8080/hostName
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #33631 #33995 #34970
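
This networking check curls each netserver pod's /hostName endpoint and expects to collect every pod's hostname; here the curl returned nothing, so the retrieved set stayed empty while netserver-0 was expected. A small Go sketch of that collect-until-complete pattern is below; the URL and expected hostname are taken from the log, while the polling interval, timeout, and function name are illustrative rather than the framework's networking utilities.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// collectHostnames polls url until every name in expected has been returned by
// the /hostName endpoint, or the deadline passes. This mirrors the
// "retrieved map[...] expected map[...]" comparison in the failure above.
func collectHostnames(url string, expected []string, timeout time.Duration) (map[string]struct{}, error) {
	seen := map[string]struct{}{}
	client := &http.Client{Timeout: time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if name := strings.TrimSpace(string(body)); name != "" {
				seen[name] = struct{}{}
			}
		}
		missing := 0
		for _, want := range expected {
			if _, ok := seen[want]; !ok {
				missing++
			}
		}
		if missing == 0 {
			return seen, nil
		}
		time.Sleep(2 * time.Second)
	}
	return seen, fmt.Errorf("retrieved %v, expected %v", seen, expected)
}

func main() {
	got, err := collectHostnames("http://10.72.8.52:8080/hostName", []string{"netserver-0"}, 30*time.Second)
	fmt.Println(got, err)
}
```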

Failed: [k8s.io] HostPath should support r/w [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
failed to execute command in pod test-host-network-pod, container busybox-2: error dialing backend: ssh: unexpected packet in response to channel open: <nil>
Expected error:
    <*errors.StatusError | 0xc420383a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "error dialing backend: ssh: unexpected packet in response to channel open: <nil>",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    error dialing backend: ssh: unexpected packet in response to channel open: <nil>
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:107

Issues about this test specifically: #37502

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
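
The --ginkgo.skip pattern above drops any spec whose name carries a [Slow], [Serial], [Disruptive], [Flaky], or [Feature:...] tag (or mentions GCEPD), so every failure in this run comes from the default parallel suite. As a quick illustration only (the spec names are examples, not taken from this run), the same pattern can be exercised directly with Go's regexp package:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern passed to --ginkgo.skip in the failing run.
	skip := regexp.MustCompile(`\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD`)

	for _, spec := range []string{
		"[k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume]",
		"[k8s.io] SchedulerPredicates [Serial] validates resource limits",
	} {
		fmt.Printf("skip=%v  %s\n", skip.MatchString(spec), spec)
	}
}
```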

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:423
Apr 23 00:34:04.214: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
Expected error:
    <*errors.errorString | 0xc420459ed0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume_util.go:178
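
"timed out waiting for the condition" is the generic message a condition poller emits when it gives up; in this test the NFS-backed PV/PVC pair never reached the expected bound state within the framework's timeout. A stripped-down sketch of that wait pattern, using only the standard library rather than the real framework helpers, looks like this:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errTimedOut matches the message seen in the failure above.
var errTimedOut = errors.New("timed out waiting for the condition")

// waitForCondition polls cond every interval and returns errTimedOut if it
// never reports done before timeout. The real framework helpers behave
// similarly but carry more context about what was being waited on.
func waitForCondition(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errTimedOut
}

func main() {
	// A condition that never becomes true reproduces the error text verbatim.
	err := waitForCondition(time.Second, 5*time.Second, func() (bool, error) { return false, nil })
	fmt.Println(err) // timed out waiting for the condition
}
```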

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/7946/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 24 12:27:55.249: Couldn't delete ns: "e2e-tests-pv-17z65": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pv-17z65\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (delete namespaces e2e-tests-pv-17z65) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pv-17z65\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (delete namespaces e2e-tests-pv-17z65)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc42142f680), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: DumpClusterLogs {e2e.go}

error during ./cluster/log-dump.sh /workspace/_artifacts: exit status 1

Issues about this test specifically: #33722 #37578 #38206 #40455 #42934 #43155 #44504

Failed: DiffResources {e2e.go}

Error: 35 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-24a749f9  n1-standard-2               2017-04-24T12:11:33.593-07:00
+gke-bootstrap-e2e-default-pool-a5c690bd  n1-standard-2               2017-04-24T12:11:33.951-07:00
+gke-bootstrap-e2e-default-pool-cd67b7bb  n1-standard-2               2017-04-24T12:11:35.572-07:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-a5c690bd-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-a5c690bd-8vvx  us-central1-f  n1-standard-2               10.128.0.8   35.188.57.104    RUNNING
+gke-bootstrap-e2e-default-pool-a5c690bd-mkb1  us-central1-f  n1-standard-2               10.128.0.10  130.211.141.137  RUNNING
+gke-bootstrap-e2e-default-pool-a5c690bd-njg8  us-central1-f  n1-standard-2               10.128.0.9   35.188.89.99     RUNNING
[ disks ]
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-a5c690bd-8vvx  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-a5c690bd-mkb1  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-a5c690bd-njg8  us-central1-f  100      pd-standard  READY
[ routes ]
+default-route-05140c3dbac4acd0                                   bootstrap-e2e  10.142.0.0/20                                                                        1000
[ routes ]
+default-route-18315be8e7b4506f                                   bootstrap-e2e  10.138.0.0/20                                                                        1000
+default-route-1a9d26886d61b2a3                                   bootstrap-e2e  10.140.0.0/20                                                                        1000
+default-route-2052d39d801a787c                                   bootstrap-e2e  10.146.0.0/20                                                                        1000
[ routes ]
+default-route-9e84ecd4ee81f58a                                   bootstrap-e2e  10.132.0.0/20                                                                        1000
+default-route-abadfe52ee04e7d7                                   bootstrap-e2e  10.148.0.0/20                                                                        1000
[ routes ]
+default-route-ea5bd0cd4f87e548                                   bootstrap-e2e  10.128.0.0/20                                                                        1000
+default-route-ee6753767eff9121                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+gke-bootstrap-e2e-f34adb23-1eb49602-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.5.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-cd67b7bb-snkx  1000
+gke-bootstrap-e2e-f34adb23-1f847105-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.8.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-24a749f9-f7fz  1000
+gke-bootstrap-e2e-f34adb23-1fbc7dde-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.6.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-a5c690bd-njg8  1000
+gke-bootstrap-e2e-f34adb23-1ff04f13-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.0.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-a5c690bd-8vvx  1000
+gke-bootstrap-e2e-f34adb23-209bdd5e-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.1.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-cd67b7bb-n87v  1000
+gke-bootstrap-e2e-f34adb23-2168ca5d-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.7.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-a5c690bd-mkb1  1000
+gke-bootstrap-e2e-f34adb23-2193a2fc-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-24a749f9-34b2  1000
+gke-bootstrap-e2e-f34adb23-2197f3ef-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.3.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-cd67b7bb-6tv7  1000
+gke-bootstrap-e2e-f34adb23-22128b12-2922-11e7-ab00-42010af0002b  bootstrap-e2e  10.72.4.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-24a749f9-20dl  1000
[ firewall-rules ]
+NAME                            NETWORK        SRC_RANGES        RULES                         SRC_TAGS  TARGET_TAGS
+gke-bootstrap-e2e-f34adb23-all  bootstrap-e2e  10.72.0.0/14      sctp,tcp,udp,icmp,esp,ah
+gke-bootstrap-e2e-f34adb23-ssh  bootstrap-e2e  35.184.200.15/32  tcp:22                                  gke-bootstrap-e2e-f34adb23-node
+gke-bootstrap-e2e-f34adb23-vms  bootstrap-e2e  10.128.0.0/9      udp:1-65535,icmp,tcp:1-65535            gke-bootstrap-e2e-f34adb23-node

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153
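
DiffResources compares listings of GCE resources (instance templates, groups, instances, disks, routes, firewall rules) taken before and after the run; the "+" lines are entries that exist only afterwards, i.e. resources teardown failed to delete. The short Go sketch below shows that before/after set difference in the abstract; the hard-coded slices stand in for real gcloud listing output and are illustrative only.

```go
package main

import "fmt"

// leaked returns lines present in after but not in before, analogous to the
// "+" entries in the DiffResources output above.
func leaked(before, after []string) []string {
	seen := make(map[string]bool, len(before))
	for _, line := range before {
		seen[line] = true
	}
	var extra []string
	for _, line := range after {
		if !seen[line] {
			extra = append(extra, "+"+line)
		}
	}
	return extra
}

func main() {
	before := []string{"default-route-ee6753767eff9121"}
	after := []string{
		"default-route-ee6753767eff9121",
		"gke-bootstrap-e2e-default-pool-a5c690bd-8vvx",
	}
	for _, line := range leaked(before, after) {
		fmt.Println(line)
	}
}
```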

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/8005/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
Expected error:
    <*errors.errorString | 0xc4203c51d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume_util.go:178

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
failed to execute command in pod test-pod, container busybox-2: error dialing backend: read tcp 10.240.0.11:36398->35.188.55.115:22: read: connection reset by peer
Expected error:
    <*errors.StatusError | 0xc42123db00>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "error dialing backend: read tcp 10.240.0.11:36398->35.188.55.115:22: read: connection reset by peer",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    error dialing backend: read tcp 10.240.0.11:36398->35.188.55.115:22: read: connection reset by peer
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:107

Issues about this test specifically: #37502

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-pgh11" to be ready
Expected error:
    <*errors.errorString | 0xc4203e4ca0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/8135/
Multiple broken tests:

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:43.059: Couldn't delete ns: "e2e-tests-disruption-x9p0x": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-disruption-x9p0x/horizontalpodautoscalers\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-disruption-x9p0x/horizontalpodautoscalers\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc42119acd0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Certificates API should support building a client with a CSR {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:20.047: Couldn't delete ns: "e2e-tests-certificates-r6z10": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-certificates-r6z10/ingresses\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get ingresses.extensions) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-certificates-r6z10/ingresses\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get ingresses.extensions)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc42086f680), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:29.918: Couldn't delete ns: "e2e-tests-containers-1681s": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-containers-1681s/replicationcontrollers\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get replicationcontrollers) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-containers-1681s/replicationcontrollers\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get replicationcontrollers)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4213ed4f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Issues about this test specifically: #29467

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc420cc8680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-proxy-6s92v/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-6s92v/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-6s92v/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #35422

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:36.450: Couldn't delete ns: "e2e-tests-pod-network-test-6s2c6": an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/e2e-tests-pod-network-test-6s2c6/roles\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get roles.rbac.authorization.k8s.io) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/e2e-tests-pod-network-test-6s2c6/roles\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get roles.rbac.authorization.k8s.io)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4214fe5f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Multi-AZ Clusters should spread the pods of a replication controller across zones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc421765e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-multi-az-nfnp2/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-multi-az-nfnp2/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-multi-az-nfnp2/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #34247

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc4215e4000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-init-container-m4q71/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-init-container-m4q71/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-init-container-m4q71/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31408

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:35.998: Couldn't delete ns: "e2e-tests-metrics-grabber-gmnqd": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-metrics-grabber-gmnqd/statefulsets\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-metrics-grabber-gmnqd/statefulsets\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4203316d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:28.380: Couldn't delete ns: "e2e-tests-kubectl-ngg7m": an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/e2e-tests-kubectl-ngg7m/rolebindings\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get rolebindings.rbac.authorization.k8s.io) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/e2e-tests-kubectl-ngg7m/rolebindings\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get rolebindings.rbac.authorization.k8s.io)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420d2c730), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:49:35.019: Couldn't delete ns: "e2e-tests-gcl-logging-0vks4": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-gcl-logging-0vks4\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (delete namespaces e2e-tests-gcl-logging-0vks4) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-gcl-logging-0vks4\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (delete namespaces e2e-tests-gcl-logging-0vks4)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4202d2690), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241 #43425

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc42160cb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-statefulset-97gz8/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-statefulset-97gz8/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-statefulset-97gz8/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:36.090: Couldn't delete ns: "e2e-tests-port-forwarding-6z74k": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-port-forwarding-6z74k/horizontalpodautoscalers\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-port-forwarding-6z74k/horizontalpodautoscalers\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc421008140), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.StatusError | 0xc42028cf80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get replicationcontrollers rc-light)",
            Reason: "Unauthorized",
            Details: {
                Name: "rc-light",
                Group: "",
                Kind: "replicationcontrollers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get replicationcontrollers rc-light)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:381

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc421758980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-fxtx5/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-fxtx5/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-fxtx5/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.StatusError | 0xc421563880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pod-network-test-9gszt/pods/netserver-1\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get pods netserver-1)",
            Reason: "InternalError",
            Details: {
                Name: "netserver-1",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-network-test-9gszt/pods/netserver-1\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-network-test-9gszt/pods/netserver-1\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get pods netserver-1)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32375

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:47.228: Couldn't delete ns: "e2e-tests-pods-bskdz": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-bskdz/limitranges\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get limitranges) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-bskdz/limitranges\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get limitranges)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420f0c8c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Issues about this test specifically: #35793

Failed: [k8s.io] PrivilegedPod should enable privileged commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc4217a1c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-e2e-privileged-pod-7mcfg/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-e2e-privileged-pod-7mcfg/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-e2e-privileged-pod-7mcfg/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:52.275: Couldn't delete ns: "e2e-tests-limitrange-kft69": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-limitrange-kft69/replicationcontrollers\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get replicationcontrollers) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-limitrange-kft69/replicationcontrollers\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get replicationcontrollers)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420e08910), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Issues about this test specifically: #27503

Failed: [k8s.io] Projected should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:28.952: Couldn't delete ns: "e2e-tests-projected-lb8x8": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-projected-lb8x8/persistentvolumeclaims\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get persistentvolumeclaims) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-projected-lb8x8/persistentvolumeclaims\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get persistentvolumeclaims)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420ec6be0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: DiffResources {e2e.go}

Error: 36 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-2bcdc843  n1-standard-2               2017-04-27T18:37:24.064-07:00
+gke-bootstrap-e2e-default-pool-677c3c4a  n1-standard-2               2017-04-27T18:37:23.943-07:00
+gke-bootstrap-e2e-default-pool-ad184339  n1-standard-2               2017-04-27T18:37:23.894-07:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-2bcdc843-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
+gke-bootstrap-e2e-default-pool-2bcdc843-h5b1  us-central1-f  n1-standard-2               10.128.0.10  104.197.158.107  RUNNING
+gke-bootstrap-e2e-default-pool-2bcdc843-q3tt  us-central1-f  n1-standard-2               10.128.0.9   35.184.120.16    RUNNING
+gke-bootstrap-e2e-default-pool-2bcdc843-zf0h  us-central1-f  n1-standard-2               10.128.0.8   104.154.223.218  RUNNING
[ disks ]
+NAME                                                             ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-76d0-pvc-a8955492-2bb4-11e7-9bfc-42010af00026  us-central1-f  1        pd-standard  READY
+gke-bootstrap-e2e-default-pool-2bcdc843-h5b1                     us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-2bcdc843-q3tt                     us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-2bcdc843-zf0h                     us-central1-f  100      pd-standard  READY
[ routes ]
+default-route-0658953acc9aae12                                   bootstrap-e2e  10.142.0.0/20                                                                        1000
[ routes ]
+default-route-10c09a0f03b8a728                                   bootstrap-e2e  10.138.0.0/20                                                                        1000
[ routes ]
+default-route-4e34752e9bf4c76a                                   bootstrap-e2e  10.140.0.0/20                                                                        1000
+default-route-5f4882f815c6db42                                   bootstrap-e2e  10.148.0.0/20                                                                        1000
[ routes ]
+default-route-6f959bc560657d91                                   bootstrap-e2e  10.146.0.0/20                                                                        1000
[ routes ]
+default-route-cf8f1ac7cfb7242a                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
+default-route-d55337b574cb2583                                   bootstrap-e2e  10.128.0.0/20                                                                        1000
+default-route-d7a7c44fd7bc166a                                   bootstrap-e2e  10.132.0.0/20                                                                        1000
[ routes ]
+gke-bootstrap-e2e-76d0f729-747e4819-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.6.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-ad184339-wzn5  1000
+gke-bootstrap-e2e-76d0f729-7538073e-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.0.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-ad184339-vjf4  1000
+gke-bootstrap-e2e-76d0f729-77fd6fc2-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.1.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-ad184339-xq2g  1000
+gke-bootstrap-e2e-76d0f729-782f1985-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.2.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-677c3c4a-v6lf  1000
+gke-bootstrap-e2e-76d0f729-78b892b8-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.7.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-2bcdc843-h5b1  1000
+gke-bootstrap-e2e-76d0f729-78cf3edd-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.8.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-2bcdc843-zf0h  1000
+gke-bootstrap-e2e-76d0f729-7a1687a1-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.3.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-677c3c4a-l2d2  1000
+gke-bootstrap-e2e-76d0f729-7c87ac82-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.4.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-2bcdc843-q3tt  1000
+gke-bootstrap-e2e-76d0f729-7ccb7f23-2bb3-11e7-9bfc-42010af00026  bootstrap-e2e  10.72.5.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-677c3c4a-5r67  1000
[ firewall-rules ]
+NAME                            NETWORK        SRC_RANGES        RULES                         SRC_TAGS  TARGET_TAGS
+gke-bootstrap-e2e-76d0f729-all  bootstrap-e2e  10.72.0.0/14      sctp,tcp,udp,icmp,esp,ah
+gke-bootstrap-e2e-76d0f729-ssh  bootstrap-e2e  35.188.35.214/32  tcp:22                                  gke-bootstrap-e2e-76d0f729-node
+gke-bootstrap-e2e-76d0f729-vms  bootstrap-e2e  10.128.0.0/9      tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-76d0f729-node

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153
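
For reference, a leak report like the one above can be approximated by hand by snapshotting the project's GCE resources before and after a run and diffing the two listings. A minimal sketch, assuming gcloud access to the test project (the project name below is a placeholder, not the one used by this job):

# Snapshot the resource types that the DiffResources report covers.
PROJECT=my-gke-test-project
snapshot() {
  gcloud compute instance-templates list --project "$PROJECT"
  gcloud compute instance-groups list --project "$PROJECT"
  gcloud compute instances list --project "$PROJECT"
  gcloud compute disks list --project "$PROJECT"
  gcloud compute routes list --project "$PROJECT"
  gcloud compute firewall-rules list --project "$PROJECT"
}
snapshot > /tmp/before.txt
# ... run the e2e job ...
snapshot > /tmp/after.txt
# Lines prefixed with '+' are resources the run left behind.
diff /tmp/before.txt /tmp/after.txt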

Failed: [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc421812c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-volume-tbn9v/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-volume-tbn9v/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-volume-tbn9v/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc420088a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-v6l84/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-v6l84/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-v6l84/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.StatusError | 0xc421596380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pod-network-test-r6bhf/pods/netserver-1\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get pods netserver-1)",
            Reason: "InternalError",
            Details: {
                Name: "netserver-1",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-network-test-r6bhf/pods/netserver-1\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-network-test-r6bhf/pods/netserver-1\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get pods netserver-1)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:92
Expected error:
    <*errors.errorString | 0xc421166690>: {
        s: "an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-statefulset-b42b8/statefulsets/ss\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (delete statefulsets.apps ss)\nTimeout waiting for pvc deletion.\nTimeout waiting for pv provisioner to delete pvs, this might mean the test leaked pvs.",
    }
    an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-statefulset-b42b8/statefulsets/ss\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (delete statefulsets.apps ss)
    Timeout waiting for pvc deletion.
    Timeout waiting for pv provisioner to delete pvs, this might mean the test leaked pvs.
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:466
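
Since the failure message warns that the test may have leaked PVs, a quick manual check is to list any claims left in the test namespace and any volumes still bound to them (a hedged sketch only; the namespace is copied from the log above and has likely been cleaned up since):

kubectl get pvc --namespace=e2e-tests-statefulset-b42b8
kubectl get pv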

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:23.415: Couldn't delete ns: "e2e-tests-statefulset-bgp3n": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-statefulset-bgp3n/resourcequotas\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get resourcequotas) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-statefulset-bgp3n/resourcequotas\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get resourcequotas)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4203291d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:21.491: Couldn't delete ns: "e2e-tests-pv-bqf9t": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pv-bqf9t\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (delete namespaces e2e-tests-pv-bqf9t) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pv-bqf9t\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (delete namespaces e2e-tests-pv-bqf9t)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc42163f400), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: DumpClusterLogs {e2e.go}

error during ./cluster/log-dump.sh /workspace/_artifacts: exit status 1

Issues about this test specifically: #33722 #37578 #38206 #40455 #42934 #43155 #44504
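
When this step fails, the same log collection can be retried by hand from a kubernetes checkout, assuming the cluster and kubeconfig from the run are still available; a sketch, with the artifacts directory swapped for a local placeholder:

./cluster/log-dump.sh /tmp/artifacts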

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.errorString | 0xc42031e500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:381

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc421006000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-projected-0glwz/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-projected-0glwz/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-projected-0glwz/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

Failed: [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:34.649: Couldn't delete ns: "e2e-tests-pods-6lvxv": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-6lvxv/secrets\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get secrets) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-6lvxv/secrets\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get secrets)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc42127c320), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Apr 27 18:49:22.279: Timed out waiting for service sourceip-test in namespace e2e-tests-services-vxp98 to expose endpoints map[echoserver-sourceip:[8080]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:49:36.362: Couldn't delete ns: "e2e-tests-secrets-597pt": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-secrets-597pt\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (delete namespaces e2e-tests-secrets-597pt) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-secrets-597pt\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (delete namespaces e2e-tests-secrets-597pt)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc42121e4b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:23.071: Couldn't delete ns: "e2e-tests-secrets-3g7kl": an error on the server ("Internal Server Error: \"/apis/policy/v1beta1/namespaces/e2e-tests-secrets-3g7kl/poddisruptionbudgets\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get poddisruptionbudgets.policy) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/policy/v1beta1/namespaces/e2e-tests-secrets-3g7kl/poddisruptionbudgets\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get poddisruptionbudgets.policy)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4211f04b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Apr 27 18:48:33.043: Couldn't delete ns: "e2e-tests-emptydir-94s9h": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-94s9h/resourcequotas\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get resourcequotas) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-94s9h/resourcequotas\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get resourcequotas)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420c98780), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:125
Expected error:
    <*errors.StatusError | 0xc420843000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-projected-kjkms/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-projected-kjkms/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-projected-kjkms/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:195

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/8217/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:423
Apr 28 18:06:15.204: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:382

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:344
Expected error:
    <*errors.errorString | 0xc42029a730>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:343

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
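
This entry means the Ginkgo wrapper itself exited non-zero. The same invocation can be reproduced from a kubernetes checkout by quoting the skip regex so the shell does not interpret the '|' alternation (a sketch only, assuming the cluster from the run is still reachable via the local kubeconfig):

./hack/ginkgo-e2e.sh '--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD'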

Failed: [k8s.io] Certificates API should support building a client with a CSR {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:120
Expected error:
    <*errors.errorString | 0xc4202eca10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:103

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/8351/
Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 35 leaked resources
[ instance-templates ]
+NAME                                     MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-27cee477  n1-standard-2               2017-05-01T00:34:42.924-07:00
+gke-bootstrap-e2e-default-pool-9b30900d  n1-standard-2               2017-05-01T00:34:42.901-07:00
+gke-bootstrap-e2e-default-pool-fb6dbc30  n1-standard-2               2017-05-01T00:34:42.887-07:00
[ instance-groups ]
+NAME                                         LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+gke-bootstrap-e2e-default-pool-9b30900d-grp  us-central1-f  zone   bootstrap-e2e  Yes      3
[ instances ]
+NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
+gke-bootstrap-e2e-default-pool-9b30900d-9mw3  us-central1-f  n1-standard-2               10.128.0.10  35.188.89.99   RUNNING
+gke-bootstrap-e2e-default-pool-9b30900d-cjg0  us-central1-f  n1-standard-2               10.128.0.8   35.184.224.54  RUNNING
+gke-bootstrap-e2e-default-pool-9b30900d-slxt  us-central1-f  n1-standard-2               10.128.0.9   35.188.12.130  RUNNING
[ disks ]
+NAME                                          ZONE           SIZE_GB  TYPE         STATUS
+gke-bootstrap-e2e-default-pool-9b30900d-9mw3  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-9b30900d-cjg0  us-central1-f  100      pd-standard  READY
+gke-bootstrap-e2e-default-pool-9b30900d-slxt  us-central1-f  100      pd-standard  READY
[ routes ]
+default-route-134f1237d28d433a                                   bootstrap-e2e  10.142.0.0/20                                                                        1000
+default-route-227b21d76d8222d9                                   bootstrap-e2e  10.140.0.0/20                                                                        1000
+default-route-23fd40a09769de73                                   bootstrap-e2e  10.148.0.0/20                                                                        1000
+default-route-2fe1533b1e0a41bb                                   bootstrap-e2e  10.132.0.0/20                                                                        1000
[ routes ]
+default-route-4439850fbd5d18df                                   bootstrap-e2e  10.138.0.0/20                                                                        1000
[ routes ]
+default-route-6a47fe841579fbd6                                   bootstrap-e2e  10.146.0.0/20                                                                        1000
+default-route-787dc8c45118e4c6                                   bootstrap-e2e  10.128.0.0/20                                                                        1000
+default-route-84d1edffd330146c                                   bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                              1000
[ routes ]
+gke-bootstrap-e2e-b275918a-e4aa045b-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.2.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-9b30900d-cjg0  1000
+gke-bootstrap-e2e-b275918a-e4b4b9cc-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.3.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-9b30900d-slxt  1000
+gke-bootstrap-e2e-b275918a-e50a3773-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.4.0/24   us-central1-f/instances/gke-bootstrap-e2e-default-pool-9b30900d-9mw3  1000
+gke-bootstrap-e2e-b275918a-e593cd6f-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.5.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fb6dbc30-rvxm  1000
+gke-bootstrap-e2e-b275918a-e6bfbc78-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.6.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fb6dbc30-jjbm  1000
+gke-bootstrap-e2e-b275918a-e7b572e4-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.7.0/24   us-central1-a/instances/gke-bootstrap-e2e-default-pool-fb6dbc30-hncz  1000
+gke-bootstrap-e2e-b275918a-ed5f911a-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.0.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-27cee477-shbh  1000
+gke-bootstrap-e2e-b275918a-edf80a81-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.1.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-27cee477-rg7b  1000
+gke-bootstrap-e2e-b275918a-f126596c-2e40-11e7-8639-42010af00024  bootstrap-e2e  10.72.8.0/24   us-central1-b/instances/gke-bootstrap-e2e-default-pool-27cee477-3pr5  1000
[ firewall-rules ]
+NAME                            NETWORK        SRC_RANGES         RULES                         SRC_TAGS  TARGET_TAGS
+gke-bootstrap-e2e-b275918a-all  bootstrap-e2e  10.72.0.0/14       tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-b275918a-ssh  bootstrap-e2e  104.198.233.20/32  tcp:22                                  gke-bootstrap-e2e-b275918a-node
+gke-bootstrap-e2e-b275918a-vms  bootstrap-e2e  10.128.0.0/9       tcp:1-65535,udp:1-65535,icmp            gke-bootstrap-e2e-b275918a-node

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

Failed: [k8s.io] Certificates API should support building a client with a CSR {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:120
Expected error:
    <*errors.errorString | 0xc420297ed0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:103

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
May  1 00:48:11.958: Couldn't delete ns: "e2e-tests-statefulset-kvshd": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-statefulset-kvshd/deployments\": an error on the server (\"unknown\") has prevented the request from succeeding") has prevented the request from succeeding (get deployments.apps) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-statefulset-kvshd/deployments\\\": an error on the server (\\\"unknown\\\") has prevented the request from succeeding\") has prevented the request from succeeding (get deployments.apps)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4212665f0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: DumpClusterLogs {e2e.go}

error during ./cluster/log-dump.sh /workspace/_artifacts: exit status 1

Issues about this test specifically: #33722 #37578 #38206 #40455 #42934 #43155 #44504

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/8353/
Multiple broken tests:

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:39
Expected error:
    <*errors.errorString | 0xc4201bbb40>: {
        s: "expected pod \"pod-secrets-1a597a0e-2e46-11e7-9bb5-0242ac110009\" success: gave up waiting for pod 'pod-secrets-1a597a0e-2e46-11e7-9bb5-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-1a597a0e-2e46-11e7-9bb5-0242ac110009" success: gave up waiting for pod 'pod-secrets-1a597a0e-2e46-11e7-9bb5-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2196

Failed: [k8s.io] HostPath should support r/w [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc420b9a910>: {
        s: "expected \"content of file \\\"/test-volume/test-file\\\": mount-tester new file\" in container output: Expected\n    <string>: failed to get container status {\"docker\" \"471ff4e586c90e4de44515b7395fc13f0f84d29731b46b0ee214243db3341421\"}: rpc error: code = 2 desc = Error: No such container: 471ff4e586c90e4de44515b7395fc13f0f84d29731b46b0ee214243db3341421\nto contain substring\n    <string>: content of file \"/test-volume/test-file\": mount-tester new file",
    }
    expected "content of file \"/test-volume/test-file\": mount-tester new file" in container output: Expected
        <string>: failed to get container status {"docker" "471ff4e586c90e4de44515b7395fc13f0f84d29731b46b0ee214243db3341421"}: rpc error: code = 2 desc = Error: No such container: 471ff4e586c90e4de44515b7395fc13f0f84d29731b46b0ee214243db3341421
    to contain substring
        <string>: content of file "/test-volume/test-file": mount-tester new file
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2196

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Expected error:
    <*errors.errorString | 0xc4202bb450>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:414
Expected error:
    <*errors.errorString | 0xc42133b810>: {
        s: "expected pod \"pod-projected-configmaps-44c94276-2e45-11e7-bc69-0242ac110009\" success: gave up waiting for pod 'pod-projected-configmaps-44c94276-2e45-11e7-bc69-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-projected-configmaps-44c94276-2e45-11e7-bc69-0242ac110009" success: gave up waiting for pod 'pod-projected-configmaps-44c94276-2e45-11e7-bc69-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2196

Failed: [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:822
Expected error:
    <*errors.errorString | 0xc420c2aef0>: {
        s: "expected pod \"downwardapi-volume-47f34167-2e45-11e7-8824-0242ac110009\" success: gave up waiting for pod 'downwardapi-volume-47f34167-2e45-11e7-8824-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-47f34167-2e45-11e7-8824-0242ac110009" success: gave up waiting for pod 'downwardapi-volume-47f34167-2e45-11e7-8824-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2196

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Certificates API should support building a client with a CSR {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:120
Expected error:
    <*errors.errorString | 0xc42025f740>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:103

Failed: [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:45
Expected error:
    <*errors.errorString | 0xc4202fa680>: {
        s: "expected pod \"pod-projected-secrets-5d23caea-2e45-11e7-b838-0242ac110009\" success: gave up waiting for pod 'pod-projected-secrets-5d23caea-2e45-11e7-b838-0242ac110009' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-projected-secrets-5d23caea-2e45-11e7-b838-0242ac110009" success: gave up waiting for pod 'pod-projected-secrets-5d23caea-2e45-11e7-b838-0242ac110009' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2196

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/8363/
Multiple broken tests:

Failed: [k8s.io] Certificates API should support building a client with a CSR {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:120
Expected error:
    <*errors.errorString | 0xc420278ec0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:103

Failed: [k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
May  1 04:50:30.427: Couldn't delete ns: "e2e-tests-svcaccounts-cs52l": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-svcaccounts-cs52l/persistentvolumeclaims\": the server could not find the requested resource") has prevented the request from succeeding (get persistentvolumeclaims) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-svcaccounts-cs52l/persistentvolumeclaims\\\": the server could not find the requested resource\") has prevented the request from succeeding (get persistentvolumeclaims)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4207e1220), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:279

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:389
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.216.160 --kubeconfig=/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-6j6xm] []  <nil>  Error from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-6j6xm/replicationcontrollers?labelSelector=name%3Dnginx\\\": the server could not find the requested resource\") has prevented the request from succeeding (get replicationcontrollers)\n [] <nil> 0xc421390480 exit status 1 <nil> <nil> true [0xc42143e970 0xc42143e988 0xc42143e9a0] [0xc42143e970 0xc42143e988 0xc42143e9a0] [0xc42143e980 0xc42143e998] [0x182d750 0x182d750] 0xc421098ea0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-6j6xm/replicationcontrollers?labelSelector=name%3Dnginx\\\": the server could not find the requested resource\") has prevented the request from succeeding (get replicationcontrollers)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.216.160 --kubeconfig=/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-6j6xm] []  <nil>  Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-6j6xm/replicationcontrollers?labelSelector=name%3Dnginx\": the server could not find the requested resource") has prevented the request from succeeding (get replicationcontrollers)
     [] <nil> 0xc421390480 exit status 1 <nil> <nil> true [0xc42143e970 0xc42143e988 0xc42143e9a0] [0xc42143e970 0xc42143e988 0xc42143e9a0] [0xc42143e980 0xc42143e998] [0x182d750 0x182d750] 0xc421098ea0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-6j6xm/replicationcontrollers?labelSelector=name%3Dnginx\": the server could not find the requested resource") has prevented the request from succeeding (get replicationcontrollers)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2096

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Copy link
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/8395/
Multiple broken tests:

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:127
May  1 17:14:52.700: Couldn't delete ns: "e2e-tests-metrics-grabber-09hfg": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-metrics-grabber-09hfg/replicasets\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.28:57848->74.125.202.81:443: read: connection reset by peer") has prevented the request from succeeding (get replicasets.extensions) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-metrics-grabber-09hfg/replicasets\\\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.28:57848->74.125.202.81:443: read: connection reset by peer\") has prevented the request from succeeding (get replicasets.extensions)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420fd0cd0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:280

Failed: [k8s.io] Certificates API should support building a client with a CSR {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:120
Expected error:
    <*errors.errorString | 0xc420233b60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/certificates.go:103

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:130
Expected error:
    <*errors.errorString | 0xc4202d9350>: {
        s: "Failed to get pod \"ss-2\": an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-statefulset-05ft3/pods/ss-2\\\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.28:57848->74.125.202.81:443: read: connection reset by peer\") has prevented the request from succeeding (get pods ss-2)",
    }
    Failed to get pod "ss-2": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-statefulset-05ft3/pods/ss-2\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.28:57848->74.125.202.81:443: read: connection reset by peer") has prevented the request from succeeding (get pods ss-2)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:355

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|GCEPD: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@caseydavenport
Copy link
Member

/assign

@caseydavenport
Copy link
Member

/close

No instances since v1.6.0.

@caseydavenport
Copy link
Member

/reopen

This Issue hasn't been active in 52 days.

I wrongly assumed that this meant there were no instances of the failure in 52 days!

@caseydavenport
Copy link
Member

Looking at this a bit more, it feels like there were a LOT of disjoint failures involved, which no longer seem to be occurring.

Let's close it and see if anything else crops up.

/close
