
kubernetes-e2e-gke-staging-parallel: broken test run #31855

Closed
k8s-github-robot opened this issue Sep 1, 2016 · 77 comments
Labels: kind/flake (flaky test) · needs-sig (lacks a `sig/foo` label) · priority/critical-urgent (highest priority; must be actively worked on now)

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/7963/

Run so broken it didn't make JUnit output!

@k8s-github-robot k8s-github-robot added priority/backlog Higher priority than priority/awaiting-more-evidence. area/test-infra kind/flake Categorizes issue or PR as related to a flaky test. labels Sep 1, 2016

@spxtr spxtr assigned pwittrock and unassigned spxtr Sep 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/8054/

Multiple broken tests:

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:18:16.837: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28006 #28866 #29613

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:82
Expected error:
    <*errors.errorString | 0xc820a023d0>: {
        s: "failed to wait for pods running: [pods \"\" not found]",
    }
    failed to wait for pods running: [pods "" not found]
not to have occurred

Issues about this test specifically: #29629

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:217
Sep  2 22:23:55.157: Failed to read from kubectl port-forward stdout: EOF

Issues about this test specifically: #26955

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Expected error:
    <*errors.errorString | 0xc8200ec0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #32023

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:16:33.710: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26838

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:20:44.674: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28003

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:33:08.775: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:30:19.636: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #29511 #29987 #30238

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:28:24.986: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26425 #26715 #28825 #28880

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:24:25.285: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:223
Sep  2 22:13:29.971: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Issues about this test specifically: #28437 #29084 #29256 #29397

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:15:28.478: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28507 #29315

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc820b86c30>: {
        s: "failed to wait for pods running: [pods \"\" not found]",
    }
    failed to wait for pods running: [pods "" not found]
not to have occurred

Issues about this test specifically: #26509 #26834 #29780
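
The `pods "" not found` error above suggests the wait began before any pods matching the deployment's selector existed. A minimal retry-loop sketch (not the e2e framework's actual code; the function name and timeouts are made up) that would tolerate that race by treating "not found" as transient:

```python
import time

def wait_for(check, timeout=60, interval=2):
    """Poll check() until it returns a truthy value or timeout elapses.

    LookupError (standing in for a "pods not found" race, as in the
    failure above) is treated as transient and retried rather than
    failing the wait immediately.
    """
    deadline = time.time() + timeout
    last_err = None
    while time.time() < deadline:
        try:
            result = check()
            if result:
                return result
        except LookupError as err:
            last_err = err  # transient: keep polling until the deadline
        time.sleep(interval)
    raise TimeoutError(f"condition not met in {timeout}s (last error: {last_err})")
```

Whether the real fix belongs in the test or in the framework's pod-wait helper depends on where the empty pod name comes from.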

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:26:15.420: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #29461

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:23:02.786: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:09:34.225: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:26:08.912: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #30216 #31031

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:09:36.195: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:26:17.207: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28462

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:24:55.408: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28337

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:32:18.135: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26138 #28429 #28737

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:25:08.708: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #29197

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:21:50.169: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Issues about this test specifically: #28084

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.errorString | 0xc820aa5800>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred

Issues about this test specifically: #27443 #27835 #28900

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:17:44.826: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:31:59.303: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Issues about this test specifically: #28450

Failed: [k8s.io] Pods should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:21:43.712: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:27:30.446: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Issues about this test specifically: #28346

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:24:00.632: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Issues about this test specifically: #27673

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:12:55.752: Couldn't delete ns "e2e-tests-kubectl-ve33f": namespace e2e-tests-kubectl-ve33f was not deleted within limit: timed out waiting for the condition, pods remaining: [redis-slave-109403812-wzmrn]

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:16:37.118: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Failed: ThirdParty resources Simple Third Party creating/deleting thirdparty objects works [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:29:05.768: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:10:50.770: All nodes should be ready after test, Not ready nodes: [gke-jenkins-e2e-default-pool-0167bac0-hgp7 (same NotReady node status, "Kubelet stopped posting node status", and container image list as in the first failure above)]

Issues about this test specifically: #28426

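Every failure below shows the same node condition pattern: the kubelet stopped posting status, so the `Ready` condition flipped to `Unknown`, and the e2e framework's "all nodes ready" post-test check fails. A minimal sketch of that readiness decision (this is an illustration, not the framework's actual Go implementation; the condition dicts mirror the dumps in this report):

```python
# Decide whether a node counts as Ready from its status conditions.
# Anything other than a Ready condition with status "True" — including
# "Unknown", as reported when the kubelet stops posting status — is
# treated as not ready, which is why these tests fail their teardown check.

def node_is_ready(conditions):
    """Return True only if a Ready condition exists with status "True"."""
    for cond in conditions:
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False  # no Ready condition reported at all

# Conditions as reported for gke-jenkins-e2e-default-pool-0167bac0-hgp7:
conditions = [
    {"type": "NetworkUnavailable", "status": "False", "reason": "RouteCreated"},
    {"type": "OutOfDisk", "status": "Unknown", "reason": "NodeStatusUnknown"},
    {"type": "MemoryPressure", "status": "False", "reason": "KubeletHasSufficientMemory"},
    {"type": "Ready", "status": "Unknown", "reason": "NodeStatusUnknown"},
]

print(node_is_ready(conditions))  # False: Unknown is not Ready
```

Because a single node in this state trips the shared check, one unhealthy kubelet makes every test in the parallel run report the same failure.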
Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:17:45.114: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:18:29.734: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #29040

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:26:20.083: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Generated release_1_3 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:16:37.825: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28415

Failed: [k8s.io] Pods should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:17:03.599: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #29614

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:14:18.668: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:15:57.751: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #31635

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:15:10.722: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #27524

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:15:48.337: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:20:16.172: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28106

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:23:13.971: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #27532

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:401
Expected
    <string>: Waiting for pod e2e-tests-kubectl-1rr8z/run-test-jeo86 to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-1rr8z/run-test-jeo86 to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-1rr8z/run-test-jeo86 to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-1rr8z/run-test-jeo86 to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-1rr8z/run-test-jeo86 to be running, status is Pending, pod ready: false
    Error attaching, falling back to logs: ssh: unexpected packet in response to channel open: <nil>

to contain substring
    <string>: abcd1234

Issues about this test specifically: #26324 #27715 #28845
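The repeated "Waiting for pod ... to be running, status is Pending" lines come from a poll loop that checks the pod phase until it is Running or the attempts run out; here the pod never left Pending (the node had gone NotReady), so the attach fell back to logs and the `abcd1234` substring check failed. An illustrative sketch of that poll loop (hypothetical helper, not the actual e2e code):

```python
# Illustrative sketch of the poll loop behind the repeated
# "Waiting for pod ... to be running" messages above.
# (Hypothetical helper for illustration, not the real e2e code.)
import time

def wait_for_running(get_phase, retries=5, delay=0.0):
    """Poll get_phase() until it returns "Running" or retries are exhausted.

    Returns (succeeded, messages) where messages mirrors the log lines
    emitted while the pod stayed in a non-Running phase.
    """
    messages = []
    for _ in range(retries):
        phase = get_phase()
        if phase == "Running":
            return True, messages
        messages.append(
            f"Waiting for pod to be running, status is {phase}, pod ready: false"
        )
        time.sleep(delay)
    return False, messages

# A pod stuck in Pending (as in this failure) exhausts all retries:
ok, msgs = wait_for_running(lambda: "Pending", retries=5)
print(ok)         # False
print(len(msgs))  # 5
```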

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:28:30.278: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:15:01.215: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:18:53.354: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26131

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:27:33.496: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26682 #28884

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:16:39.187: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #30632

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:18:06.697: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #27295

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:14:05.160: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:25:08.825: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #31075

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:14:13.709: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:30:23.500: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #29066 #30592 #31065

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:19:35.039: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #28420

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:17:11.736: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:17:38.959: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:22:51.242: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:11:56.105: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:14:13.368: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:17:01.327: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:21:57.621: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26139 #28342 #28439 #31574

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:09:45.757: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:17:03.316: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[pods:{{110 0} {<nil>} 110 DecimalSI} alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #26126 #30653

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:19:18.311: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

Issues about this test specifically: #27079

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:225
Sep  2 22:10:14.945: Failed on attempt 6. Cleaning up. Details:
{
    "Hostname": "nettest-zrces",
    "Sent": {
        "nettest-fs6eo": 1,
        "nettest-xyohb": 1,
        "nettest-zrces": 2
    },
    "Received": {
        "nettest-fs6eo": 15,
        "nettest-zrces": 2
    },
    "Errors": null,
    "Log": [
        "e2e-tests-nettest-g5qme/nettest has 1 endpoints ([http://10.180.0.12:8080]), which is less than 3 as expected. Waiting for all endpoints to come up.",
        "e2e-tests-nettest-g5qme/nettest has 2 endpoints ([http://10.180.0.12:8080 http://10.180.1.5:8080]), which is less than 3 as expected. Waiting for all endpoints to come up.",
        "Attempting to contact http://10.180.0.12:8080",
        "Attempting to contact http://10.180.1.5:8080",
        "Attempting to contact http://10.180.2.15:8080",
        "Attempting to contact http://10.180.1.5:8080",
        "Attempting to contact http://10.180.2.15:8080"
    ],
    "StillContactingPeers": true
}

Issues about this test specifically: #26960 #27235

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:13:04.936: Couldn't delete ns "e2e-tests-kubectl-yr363": namespace e2e-tests-kubectl-yr363 was not deleted within limit: timed out waiting for the condition, pods remaining: [e2e-test-nginx-deployment-1517792476-8mass]

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  2 22:25:08.065: All nodes should be ready after test, Not ready nodes: [{{ } {gke-jenkins-e2e-default-pool-0167bac0-hgp7   /api/v1/nodes/gke-jenkins-e2e-default-pool-0167bac0-hgp7 cf5a8607-7193-11e6-9704-42010af00007 3410 0 {2016-09-02 22:03:53 -0700 PDT} <nil> <nil> map[beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-nodepool:default-pool failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-f kubernetes.io/hostname:gke-jenkins-e2e-default-pool-0167bac0-hgp7 beta.kubernetes.io/arch:amd64] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []} {10.180.2.0/24 6970975615862695871 gce://k8s-e2e-gke-staging-parallel/us-central1-f/gke-jenkins-e2e-default-pool-0167bac0-hgp7 false} {map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}] map[alpha.kubernetes.io/nvidia-gpu:{{0 0} {<nil>} 0 DecimalSI} cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{7864135680 0} {<nil>} 7679820Ki BinarySI} pods:{{110 0} {<nil>} 110 DecimalSI}]  [{NetworkUnavailable False {0001-01-01 00:00:00 +0000 UTC} {2016-09-02 22:04:36 -0700 PDT} RouteCreated RouteController created a route} {OutOfDisk Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.} {MemoryPressure False {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:03:53 -0700 PDT} KubeletHasSufficientMemory kubelet has sufficient memory available} {Ready Unknown {2016-09-02 22:07:49 -0700 PDT} {2016-09-02 22:08:33 -0700 PDT} NodeStatusUnknown Kubelet stopped posting node status.}] [{InternalIP 10.240.0.7} {ExternalIP 104.198.37.67}] {{10250}} { 91826B05-2C58-5F83-8057-EA1FDD68A52D 7ab449b3-9d48-4471-85f9-eabe8fc9aabe 3.16.0-4-amd64 Debian GNU/Linux 7 (wheezy) docker://1.11.2 v1.3.6 v1.3.6 linux amd64} 
[{[gcr.io/google-samples/gb-frontend:v4] 512161546} {[gcr.io/google_containers/fluentd-gcp:1.21] 498494324} {[gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/kube-proxy:2571ea5f7ab2dc42d20d2497b0a27b84] 180130561} {[gcr.io/google_containers/resource_consumer:beta4] 133500077} {[gcr.io/google_containers/heapster:v1.1.0] 121886227} {[gcr.io/google_samples/gb-redisslave:v1] 109508753} {[gcr.io/google_containers/nginx:1.7.9] 91664166} {[gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1] 55829631} {[gcr.io/google_containers/addon-resizer:1.3] 48477690} {[gcr.io/google_containers/nettest:1.9] 33873710} {[gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec:1.4] 7297019} {[gcr.io/google_containers/resource_consumer/controller:beta4] 7034235} {[gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/portforwardtester:1.0] 2296329} {[gcr.io/google_containers/mounttest:0.6] 2084693} {[gcr.io/google_containers/mounttest:0.5] 1718853} {[gcr.io/google_containers/mounttest-user:0.3] 1718853} {[gcr.io/google_containers/busybox:1.24] 1113554} {[gcr.io/google_containers/pause-amd64:3.0] 746888}] [] []}}]

@k8s-github-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Sep 3, 2016
@k8s-github-robot

[FLAKE-PING] @pwittrock

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/8160/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc821002200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-o7gap/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26131

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820bcdb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-jr8kg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c2ce00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-5hzcs/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/configmap.go:279
Sep  5 03:06:43.943: Failed to create pod: the server does not allow access to the requested resource (post pods)

Issues about this test specifically: #29751 #30430

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820bc8200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-2jsqv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.StatusError | 0xc820b41a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete pods execpod-q975r)",
            Reason: "Forbidden",
            Details: {
                Name: "execpod-q975r",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-7fch1/pods/execpod-q975r\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete pods execpod-q975r)
not to have occurred

Issues about this test specifically: #26128 #26685

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820899900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-containers-ekt5c/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29994

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820af0500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-node-problem-detector-fe0ko/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28069 #28168 #28343 #29656

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c54b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-kn9bx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28437 #29084 #29256 #29397

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82139e900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubelet-om4uz/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28106

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d44300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-svc-latency-v6593/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #30632

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82008b500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-bfxhh/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29066 #30592 #31065

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8202cba00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-0610j/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29040

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82178b880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-5vmes/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28584 #32045

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820cb4e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-ot3xf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28462

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820ae0200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-proxy-hhyyn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc821607780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-rzfd5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #30216 #31031

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820ea4200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-events-0r55b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28346

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c0bd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-k82dn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29831

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1039
Sep  5 03:06:43.156: Failed getting pod e2e-test-nginx-pod: the server does not allow access to the requested resource (get pods e2e-test-nginx-pod)

Issues about this test specifically: #27507 #28275

Failed: ThirdParty resources Simple Third Party creating/deleting thirdparty objects works [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820b2e600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-thirdparty-a3pz7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d41a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-xpu4i/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820e79300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-mozw1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29050

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820b95f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-lb4zu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #30851

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820f8db00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-wro5x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26224

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8211cd580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-q8gqg/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8215ac800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-prestop-4kd5n/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #30287

Failed: [k8s.io] hostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820908480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-hostpath-og6of/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820cd3600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-rdwsv/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82024bc80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-ml9cw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27295

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820a10180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replicaset-h45eo/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #32023

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:401
Expected
    <string>: Waiting for pod e2e-tests-kubectl-9tja5/run-test-1iw6q to be running, status is Pending, pod ready: false
    Waiting for pod e2e-tests-kubectl-9tja5/run-test-1iw6q to be running, status is Pending, pod ready: false
    Error attaching, falling back to logs: 

to contain substring
    <string>: abcd1234

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d00600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-proxy-pg9jl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26164 #26210

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820baf800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-4txmc/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29513

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:693
Expected error:
    <*errors.errorString | 0xc8216ef2d0>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.21.202 --kubeconfig=/workspace/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-vti3u] []  <nil>  Error from server: the server does not allow access to the requested resource (patch pods pause)\n [] <nil> 0xc820700260 exit status 1 <nil> true [0xc820bfc108 0xc820bfc120 0xc820bfc138] [0xc820bfc108 0xc820bfc120 0xc820bfc138] [0xc820bfc118 0xc820bfc130] [0xa96890 0xa96890] 0xc8212ce540}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (patch pods pause)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.21.202 --kubeconfig=/workspace/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-vti3u] []  <nil>  Error from server: the server does not allow access to the requested resource (patch pods pause)
     [] <nil> 0xc820700260 exit status 1 <nil> true [0xc820bfc108 0xc820bfc120 0xc820bfc138] [0xc820bfc108 0xc820bfc120 0xc820bfc138] [0xc820bfc118 0xc820bfc130] [0xa96890 0xa96890] 0xc8212ce540}:
    Command stdout:

    stderr:
    Error from server: the server does not allow access to the requested resource (patch pods pause)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "the server does not allow access to the requested resource (get replicasets.extensions test-rollover-controller)",
                Reason: "Forbidden",
                Details: {
                    Name: "test-rollover-controller",
                    Group: "extensions",
                    Kind: "replicasets",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-ry7e8/replicasets/test-rollover-controller\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 403,
            },
        },
        {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "the server does not allow access to the requested resource (get replicasets.extensions test-rollover-deployment-3098649686)",
                Reason: "Forbidden",
                Details: {
                    Name: "test-rollover-deployment-3098649686",
                    Group: "extensions",
                    Kind: "replicasets",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-ry7e8/replicasets/test-rollover-deployment-3098649686\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 403,
            },
        },
        {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: ""},
                Status: "Failure",
                Message: "the server does not allow access to the requested resource (get replicasets.extensions test-rollover-deployment-3104088211)",
                Reason: "Forbidden",
                Details: {
                    Name: "test-rollover-deployment-3104088211",
                    Group: "extensions",
                    Kind: "replicasets",
                    Causes: [
                        {
                            Type: "UnexpectedServerResponse",
                            Message: "Forbidden: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-ry7e8/replicasets/test-rollover-deployment-3104088211\"",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 403,
            },
        },
    ]
    [the server does not allow access to the requested resource (get replicasets.extensions test-rollover-controller), the server does not allow access to the requested resource (get replicasets.extensions test-rollover-deployment-3098649686), the server does not allow access to the requested resource (get replicasets.extensions test-rollover-deployment-3104088211)]
not to have occurred

Issues about this test specifically: #26509 #26834 #29780

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209b1f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-gsd89/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:279
Expected error:
    <*errors.errorString | 0xc820a97f50>: {
        s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.21.202 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lclyv] []  0xc8208f58a0  Error from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods nginx)\n [] <nil> 0xc8208f5f00 exit status 1 <nil> true [0xc820034120 0xc820034170 0xc820034180] [0xc820034120 0xc820034170 0xc820034180] [0xc820034128 0xc820034168 0xc820034178] [0xa96730 0xa96890 0xa96890] 0xc820c57ec0}:\nCommand stdout:\n\nstderr:\nError from server: error when stopping \"STDIN\": the server does not allow access to the requested resource (delete pods nginx)\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.21.202 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-lclyv] []  0xc8208f58a0  Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods nginx)
     [] <nil> 0xc8208f5f00 exit status 1 <nil> true [0xc820034120 0xc820034170 0xc820034180] [0xc820034120 0xc820034170 0xc820034180] [0xc820034128 0xc820034168 0xc820034178] [0xa96730 0xa96890 0xa96890] 0xc820c57ec0}:
    Command stdout:

    stderr:
    Error from server: error when stopping "STDIN": the server does not allow access to the requested resource (delete pods nginx)

    error:
    exit status 1

not to have occurred

Issues about this test specifically: #27156 #28979 #30489

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820dbf080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-v52cw/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29224 #32008

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820749600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-d4mtt/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29511 #29987 #30238

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8212a6e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-wc44t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #31400

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d3db00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-secrets-xnja4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #32025

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820f49380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-cfgk7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820f3e100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-ze9w5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820e49480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-txlqm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8208bfb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-opxco/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8215ac300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-h4qji/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc821216800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-containers-dxkm7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29467

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82081f900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-1v291/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28003

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
    <*errors.StatusError | 0xc82026a300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (post horizontalPodAutoscalers.autoscaling)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "autoscaling",
                Kind: "horizontalPodAutoscalers",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/apis/autoscaling/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-v5k0n/horizontalpodautoscalers\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (post horizontalPodAutoscalers.autoscaling)
not to have occurred

Issues about this test specifically: #27443 #27835 #28900

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c13800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-u5zr0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27195

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820890200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-container-probe-clpca/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29521

Failed: [k8s.io] Downward API volume should provide container's cpu request {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209fa900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-3tbqb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820096100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-nettest-xlpp3/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d7e280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-2f6h7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28523

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82161e780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-2r80k/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28064 #28569

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820b51800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-volume-provisioning-p9r2a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26682 #28884

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820eb2380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-sfl08/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820247980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-hyrqp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820539900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-wcypp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28773 #29506 #30699

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d83f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-ps99t/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29461

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820cff400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-2todu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26870

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8214acf00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-configmap-poiv5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27245

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820e20800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-icwwj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d44b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-74tm5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27232

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c3a180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-configmap-sotm8/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29052

Failed: [k8s.io] Downward API volume should provide container's cpu limit {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8200d9800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-kz9ii/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820de1680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-n1dev/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28006 #28866 #29613

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820e49300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-sfobl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  5 03:06:49.588: Couldn't delete ns "e2e-tests-nettest-pm8kj": the server does not allow access to the requested resource (delete namespaces e2e-tests-nettest-pm8kj)

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820cede00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-kp06p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26172

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c5cf00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-proxy-p88um/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc821560a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-configmap-w7f35/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209db400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-svcaccounts-qk7he/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820db2b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-hxvve/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26126 #30653

Failed: [k8s.io] Downward API should provide pod IP as an env var {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8217d7180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-661zn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820f03f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-8yxpl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28503

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:303
Expected error:
    <*errors.StatusError | 0xc820745880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get resourceQuotas test-quota)",
            Reason: "Forbidden",
            Details: {
                Name: "test-quota",
                Group: "",
                Kind: "resourceQuotas",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-resourcequota-07g5e/resourcequotas/test-quota\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get resourceQuotas test-quota)
not to have occurred

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc820af2ab0>: {
        s: "Error creating replication controller: the server does not allow access to the requested resource (post replicationControllers)",
    }
    Error creating replication controller: the server does not allow access to the requested resource (post replicationControllers)
not to have occurred

Issues about this test specifically: #27196 #28998

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8211a3300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-4euk4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #32053

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d13b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-cadvisor-t66rm/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8212ccb80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-a11li/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26425 #26715 #28825 #28880

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820252c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-iuruj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820dcab00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replicaset-6mrfb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #30981

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82088f400>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-emptydir-ju5m4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c8d480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-v5il2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820e62600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-dns-2hb10/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26194 #26338 #30345

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820914100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-port-forwarding-8q6tn/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27680

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820f1ef80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-b820o/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28067 #28378

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820c5fb00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubernetes-dashboard-shvs0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26191

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820d83380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-7b67y/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Generated release_1_2 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82174d100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-clientset-1x4v6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #32043

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Sep  5 03:06:43.753: Couldn't delete ns "e2e-tests-kubectl-i5s92": the server does not allow access to the requested resource (delete namespaces e2e-tests-kubectl-i5s92)

Issues about this test specifically: #28420

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820685980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-sxyhb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26728 #28266 #30340

Failed: [k8s.io] Mesos schedules pods annotated with roles on correct slaves {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82128df80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-hnsqr/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820e63b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-bkic1/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:244
Expected error:
    <*errors.StatusError | 0xc8201f1480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get pods)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-nettest-oewpi/pods?fieldSelector=metadata.name%3Dsame-node-webserver\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get pods)
not to have occurred

Issues about this test specifically: #28827 #31867

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:140
Expected error:
    <*errors.StatusError | 0xc820096800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (delete pods pod2)",
            Reason: "Forbidden",
            Details: {
                Name: "pod2",
                Group: "",
                Kind: "pods",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-services-ctop0/pods/pod2\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (delete pods pod2)
not to have occurred

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820300200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-monitoring-5npim/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29647

@k8s-github-robot k8s-github-robot added priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Sep 5, 2016
@pwittrock pwittrock assigned jessfraz and unassigned pwittrock Sep 6, 2016
@ixdy
Member

ixdy commented Sep 6, 2016

Wow, that's a lot of failures.

@jfrazelle, are you actively working on this? Do you need more hands to tackle it?

@derekwaynecarr
Member

many of these look like integration issues with whatever auth proxy is integrated with your builders...

@jessfraz
Contributor

jessfraz commented Sep 8, 2016

@ixdy ya, I'm working on it. I think one error then dominoes into all the others

@fejta fejta added sig/auth Categorizes an issue or PR as relevant to SIG Auth. and removed area/test-infra labels Sep 9, 2016
@k8s-github-robot
Author

[FLAKE-PING] @jfrazelle

This flaky-test issue would love to have more attention.

k8s-github-robot pushed a commit that referenced this issue Sep 12, 2016
Automatic merge from submit-queue

test/e2e: up the timeout on AllNodesReady


**What this PR does / why we need it**: help with flake issue #31855 

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```

This is not the most glamorous fix, but...
@k8s-github-robot
Author

[FLAKE-PING] @jfrazelle

This flaky-test issue would love to have more attention.

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/8577/

Multiple broken tests:

Failed: [k8s.io] Pods should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1168
starting pod liveness-http in namespace e2e-tests-pods-5ivs1
Expected error:
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #29614

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
waiting for server pod to start
Expected error:
    <*errors.errorString | 0xc8200d1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #30287

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:72
Expected error:
    <*errors.errorString | 0xc820b24420>: {
        s: "gave up waiting for pod 'client-containers-df900ee7-7a17-11e6-8b1f-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'client-containers-df900ee7-7a17-11e6-8b1f-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred

Issues about this test specifically: #29467

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:51
Expected error:
    <*errors.errorString | 0xc820879340>: {
        s: "gave up waiting for pod 'client-containers-98df6921-7a17-11e6-afc0-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'client-containers-98df6921-7a17-11e6-afc0-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred

@k8s-github-robot
Author

[FLAKE-PING] @jfrazelle

This flaky-test issue would love to have more attention.

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/8665/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82088c580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-v1job-vhjh2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82034e280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-9ka07/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28003

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209c1e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-prestop-nvjfd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #30287

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82098d680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-deployment-b263z/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28339

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82019bc00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-g7qsb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29831

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8203c8d00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-configmap-kb0vk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #27079

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8208a0980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-upyda/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209b1e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-d5dre/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26139 #28342 #28439 #31574

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820763680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-uod5y/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26209 #29227 #32132

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820290900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-chptu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #32087

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820952300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-bm8vl/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28584 #32045

Failed: [k8s.io] Pods should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8208d3000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-pods-ui8gp/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28332

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820a73e80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-wnbnf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28371 #29604

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209a3680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-svcaccounts-d7d6i/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820552380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-qcf3p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #28420

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820201600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-dns-vumh7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82086d000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-kubectl-z36o5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26126 #30653

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820265b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-containers-3713f/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8209a0f80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-services-sss41/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc82053c500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-containers-pw1le/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #29994

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc8200fdf00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-nettest-5p29l/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820740e00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-downward-api-8sa7x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #31836

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820751c00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-secrets-3wmfx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.StatusError | 0xc820329d80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server does not allow access to the requested resource (get serviceAccounts)",
            Reason: "Forbidden",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-proxy-o84bj/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred

Issues about this test specifically: #32089

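Every failure in the run above is the same symptom: a 403 `Forbidden` while the e2e framework watches for the namespace's `default` service account during test setup. In a freshly created namespace that 403 is usually transient (token and authorization data have not propagated yet) rather than a permanent denial. A minimal, hypothetical sketch of classifying such status codes as retryable — this is illustrative only, not the actual e2e framework code:

```go
package main

import "fmt"

// isRetryableSetupError reports whether an apiserver HTTP status code seen
// while waiting for a new namespace's default service account is worth
// retrying. The 403s in the report above fall into this bucket: the
// namespace exists but its credentials have not propagated yet.
// (Hypothetical helper for illustration.)
func isRetryableSetupError(code int) bool {
	switch code {
	case 403, 429, 500, 503, 504:
		return true // transient during namespace bring-up or server overload
	default:
		return false // e.g. 404: the resource genuinely is not there
	}
}

func main() {
	fmt.Println(isRetryableSetupError(403)) // the failure mode seen above
	fmt.Println(isRetryableSetupError(404))
}
```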
@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/10764/

Multiple broken tests:

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1071
Oct 29 21:43:30.783: expected un-ready endpoint for Service webserver within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1069

Issues about this test specifically: #26172

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc820cb2070>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-10-29 21:39:35 -0700 PDT FinishedAt:2016-10-29 21:40:05 -0700 PDT ContainerID:docker://07ad2893ae1e916fa4ac4d0eea6f3d47cac44f340fa24e2d2e23e28612f42f2b}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:2016-10-29 21:39:35 -0700 PDT FinishedAt:2016-10-29 21:40:05 -0700 PDT ContainerID:docker://07ad2893ae1e916fa4ac4d0eea6f3d47cac44f340fa24e2d2e23e28612f42f2b}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:54

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc8201ae760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:538
Oct 29 21:38:13.998: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:535

Issues about this test specifically: #28420

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Oct 29 21:47:08.906: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1509

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Oct 29 21:52:58.331: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:284

Issues about this test specifically: #27443 #27835 #28900 #32512

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc8201ae760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc8201aa880>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200fd6a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

1 similar comment
@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@calebamiles

we might be able to close this one @saad-ali, the last test failure was 29 October 2016 and we've had several green runs since. What do you think @jessfraz?

@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@calebamiles calebamiles added this to the v1.5 milestone Nov 9, 2016
@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

1 similar comment
@k8s-github-robot

[FLAKE-PING] @jessfraz

This flaky-test issue would love to have more attention.

@saad-ali

> we might be able to close this one @saad-ali, the last test failure was 29 October 2016 and we've had several green runs since. What do you think @jessfraz?

Ack

@mikedanese mikedanese removed the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label May 22, 2019
@k8s-ci-robot

@k8s-github-robot: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 22, 2019