ci-kubernetes-e2e-gce-etcd3: broken test run #37923

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 103 comments
Labels
area/test-infra
kind/flake: Categorizes issue or PR as related to a flaky test.
needs-sig: Indicates an issue or PR lacks a `sig/foo` label and requires one.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2214/

Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:55:39.084: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420726278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:39:53.154: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421754278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:56:11.505: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f16278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:32:01.357: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d79678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:35:46.783: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421166c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:43:05.930: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420105678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:39:09.306: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b61678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:36:33.082: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e46278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:58:17.327: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212ad678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:38:37.094: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a81678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:32:06.661: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fdec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:42:20.039: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421494278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:45:37.964: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42095f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:36:20.338: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42139c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:35:17.103: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420968278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:45:42.649: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212dcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203be090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:56:30.192: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bc8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:58:44.929: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a2ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:59:50.714: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4201a9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:06:13.198: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420de4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36109

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:35:42.624: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c67678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:50:48.704: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42110ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:54:48.405: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f61678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:32:30.549: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212e2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:56:38.030: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420302c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:48:50.171: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a34278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36183

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:02:08.650: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42134ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:04:35.238: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421690c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:33:11.397: Couldn't delete ns: "e2e-tests-disruption-q2dl0": namespace e2e-tests-disruption-q2dl0 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-q2dl0 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:39:41.751: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42064b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:48:17.985: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f61678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32122

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:52:35.870: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420970c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:57:57.368: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4215b2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:507
Expected error:
    <*errors.errorString | 0xc42043ecb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:35:45.404: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42072c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:42:24.120: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420baf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:33:00.220: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420998278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:32:16.534: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421483678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:58:04.635: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420417678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:36:15.657: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420771678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:46:07.192: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420970c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:50:45.000: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421297678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:49:30.386: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421238278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:45:31.159: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420336278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:46:23.573: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420eff678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:45:44.225: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ff3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:43:33.277: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209bcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:03:02.934: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42095cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:45:03.667: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420417678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:33:59.602: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421372c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:52:08.329: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4215f5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:32:13.814: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421007678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:44:22.608: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208f0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:32:12.840: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421148c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:00:01.609: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420eff678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:57:17.537: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c0f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:55:05.219: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a8ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:42:24.346: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4201eec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:44:33.704: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f52278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:01:59.604: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210a6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:359
Expected error:
    <*errors.errorString | 0xc42069c140>: {
        s: "service verification failed for: 10.0.189.44\nexpected [service1-8kkh7 service1-lxrhg service1-pb4fw]\nreceived []",
    }
    service verification failed for: 10.0.189.44
    expected [service1-8kkh7 service1-lxrhg service1-pb4fw]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:52:49.850: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b9a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:45:34.472: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42107c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:58:51.266: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421534278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:03:02.842: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42101b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:01:08.013: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4218cf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:39:41.461: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42064b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:53:27.603: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bbcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:51:41.275: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4216c4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:36:31.100: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210d8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:39:11.825: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421226278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:43:09.258: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213d2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28346

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:47:21.507: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d75678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:39:08.391: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421090278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203d34d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:49:33.544: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42010cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:48:56.107: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4215cec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27680

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:49:35.799: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210a2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:36:12.936: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209e4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:48:27.219: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210df678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:51:30.168: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209fe278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:49:21.403: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421071678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 23:03:45.194: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421414278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:52:45.869: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421035678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:35:43.933: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420947678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:55:32.660: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421052c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:42:21.744: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ed3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:59:53.770: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e12278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:42:56.836: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212f4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  1 22:47:34.827: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e13678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Previous issues for this suite: #36929 #37213

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 2, 2016
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2226/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:44:17.836: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421269b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:55:50.729: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421530768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:43:42.126: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42091c768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:40:46.311: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f0e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:57:04.043: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e9c768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:48:25.551: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f7e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:36:00.068: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421188768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:15.976: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421024768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:21.236: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206b5168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:43.688: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e97168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:39:44.836: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f50768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:58.816: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420abdb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:49.761: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bb1b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:29:48.370: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f1e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:33:33.336: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e7fb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:54:36.266: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421705168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:35:05.575: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212dfb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:44:28.633: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209c0768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:57:13.544: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420dab168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30632

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:44:36.259: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214fc768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:49:28.000: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209d4768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:41:13.966: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420965b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29710

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:44:18.159: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4203e5b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27507 #28275

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:33:24.351: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420266768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:41:25.319: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421123b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:44:39.154: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421761168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:45:15.341: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a03b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:34.057: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421299168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:52:40.527: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420924768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:41:28.858: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209ae768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:47:31.527: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b79b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:53:43.921: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4203d7b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:33:42.042: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bcd168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:48:10.700: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421151b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:36:45.538: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420acc768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:39:28.692: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421123b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc4203d3380>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:41:15.659: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42097a768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:39:57.769: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420220768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:31.462: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421236768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:39:40.143: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209b7168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc420e6c210>: {
        s: "failed to wait for pods running: [pods \"\" not found]",
    }
    failed to wait for pods running: [pods "" not found]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:323

Issues about this test specifically: #31075 #36286

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:30:13.440: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a55168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:33:10.691: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421128768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:54:17.240: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420764768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:56:11.293: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c58768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:39:40.230: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42109db68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:51:03.653: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420db8768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:30:05.767: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420adfb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:58.167: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420979b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:57:53.047: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4217e0768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:33.995: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ee3168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:43:56.632: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e9fb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:38:47.788: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211fdb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:44:09.445: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42090c768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:41:07.720: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212ddb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:42:55.588: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c24768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:42:01.949: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e10768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 05:03:54.974: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214f5b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:36:34.604: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420cebb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:41:15.943: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420cebb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:59:25.389: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421289168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 05:01:08.787: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ded168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:31:05.859: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420dc7168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:33.423: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212fe768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:29:50.263: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421103b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:44:39.359: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e9db68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:40:56.219: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421157168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:42.119: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420eabb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:59:03.087: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42103a768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:47:53.222: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4217dbb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:42.844: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210e7b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:45:05.985: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212fa768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:29:51.726: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f35b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:38:03.675: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421073b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 05:00:37.731: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4218fe768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:46:13.789: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420abfb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:46:23.734: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420aab168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:43:05.595: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42119e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:51:05.429: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421581168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35793

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:30:17.866: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421067168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:30:22.423: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208ff168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:48:00.191: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420eab168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:49:45.988: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214df168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:30:29.217: Couldn't delete ns: "e2e-tests-job-xwkl9": namespace e2e-tests-job-xwkl9 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-job-xwkl9 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29511 #29987 #30238

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:31:11.059: Couldn't delete ns: "e2e-tests-disruption-5jc33": namespace e2e-tests-disruption-5jc33 was not deleted with limit: timed out waiting for the condition, pods remaining: 3, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-5jc33 was not deleted with limit: timed out waiting for the condition, pods remaining: 3, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32644

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:29:52.186: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e82768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:51:42.865: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421440768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203e4b50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:30:47.113: Couldn't delete ns: "e2e-tests-pod-network-test-d6zb5": namespace e2e-tests-pod-network-test-d6zb5 was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-pod-network-test-d6zb5 was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32375

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:36:22.880: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a05b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1105
Expected error:
    <*errors.errorString | 0xc420ea8240>: {
        s: "failed to wait for pods running: [pods \"\" not found]",
    }
    failed to wait for pods running: [pods "" not found]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1073

Issues about this test specifically: #26172

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:52:27.082: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e6f168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:48:54.263: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42106b168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:41:25.423: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a49168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:49:14.894: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c15168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:14.263: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421308768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:29:52.091: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210fa768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 05:02:19.233: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a33168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:00.606: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ddbb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:28.963: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42119db68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31873

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:31:27.783: Couldn't delete ns: "e2e-tests-limitrange-6b4q1": namespace e2e-tests-limitrange-6b4q1 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-limitrange-6b4q1 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27503

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 05:00:20.573: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b01168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 05:05:35.460: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420faa768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:37:55.064: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f1e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:47:19.669: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420913168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:38:06.313: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421102768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:42:44.164: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a77b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:34:06.562: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420777b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 04:51:22.582: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f6c768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2237/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:19:20.998: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d9f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29710

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:05:25.483: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b13678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:53:06.994: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fa8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:44:05.548: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421372278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:46:19.990: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f38278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33987

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:03:18.082: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421274c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:06:44.396: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ed4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:105
Dec  2 09:44:11.528: Ran echo "Hello from $(whoami)@$(hostname)" on 104.154.216.13:22, got error error getting SSH client to jenkins@104.154.216.13:22: 'timed out dialing tcp:104.154.216.13:22', expected <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:79

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:57:57.105: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42075b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:18:07.466: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212ca278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:01:27.963: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42070ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:68
Expected error:
    <*errors.errorString | 0xc420bd6250>: {
        s: "err waiting for DNS replicas to satisfy 2, got 3: timed out waiting for the condition",
    }
    err waiting for DNS replicas to satisfy 2, got 3: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:67

Issues about this test specifically: #36569

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:57:40.370: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ebb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:07:15.403: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214eac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:52:35.144: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421418c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:42:34.958: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421342c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:49:20.046: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fbec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:53:06.117: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210cc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:55:17.920: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d42c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:56:27.385: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f58c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31075 #36286

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:50:27.127: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420744c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:06:13.666: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421225678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:44:30.195: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206de278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:54:52.753: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210a8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:00:39.922: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42149e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:09:26.142: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42068ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:55:43.128: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ab5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:42:32.142: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211aec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:03:03.463: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4201c8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:02:41.611: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b1c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:58:43.436: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421032c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27507 #28275

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:56:39.396: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421007678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:10:55.694: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420037678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:46:12.885: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fe0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:48:55.433: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f73678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:49:22.612: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42115ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:01:07.681: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208d4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:43:07.711: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209a2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:06:27.013: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420124278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:45:52.415: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4200f8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27079

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:51:10.560: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42109e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:54:17.262: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a46c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:07:41.539: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421370278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:42:36.129: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bfac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:04:19.848: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c12278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:04:24.057: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208f0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:11:09.834: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ad6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:57:15.750: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206fe278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:54:07.607: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420eba278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:51:32.853: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f72c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:43:09.601: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206bac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:59:17.193: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a5c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:46:21.878: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211e0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32646

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:52:14.819: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420edf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:46:38.927: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420af5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:59:03.128: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a2c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:45:55.434: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421452278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:02:09.919: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42098f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:45:42.548: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421202278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:59:45.377: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e80278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240
wait for pod "pod-service-account-3321bda5-b8b6-11e6-8609-0242ac110005-txgt0" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc420415e60>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #37526
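
This failure has a different shape from the node-readiness flakes: the test timed out while waiting for a service-account test pod to disappear. Below is a minimal sketch of such a wait loop, assuming a recent client-go and using hypothetical names; it is not the framework's actual helper at `pods.go:121`:

```go
// waitgone.go - illustrative sketch of waiting for a pod to be deleted.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodGone polls until the named pod returns NotFound or the timeout
// expires, in which case it returns the familiar "timed out waiting for the
// condition" error from the wait package.
func waitForPodGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is gone
		}
		if err != nil {
			return false, err // unexpected API error, stop polling
		}
		return false, nil // pod still exists; keep polling
	})
}

func main() {
	fmt.Println("sketch only: wire up a clientset as in the earlier example and call waitForPodGone")
}
```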

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:45:45.438: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209bc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:10:04.565: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42122f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:03:05.610: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4215d4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:42:33.045: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42078ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:07:30.440: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421498c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:46:12.237: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420df6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:43:36.893: Couldn't delete ns: "e2e-tests-services-5p14x": namespace e2e-tests-services-5p14x was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-services-5p14x was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28064 #28569 #34036
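
Here the namespace teardown itself timed out with `pods remaining: 1`. A minimal sketch, under the same client-go assumptions as above, for listing what is still holding a terminating namespace open (mirroring the "pods remaining / pods missing deletion timestamp" counts in the error):

```go
// nsleft.go - illustrative sketch: list pods still present in a terminating namespace.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportRemainingPods prints each pod left in ns and whether it has been
// marked for deletion (a nil DeletionTimestamp corresponds to the
// "pods missing deletion timestamp" count in the failure message).
func reportRemainingPods(cs kubernetes.Interface, ns string) error {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		marked := p.DeletionTimestamp != nil
		fmt.Printf("pod %s: phase=%s markedForDeletion=%v\n", p.Name, p.Status.Phase, marked)
	}
	return nil
}

func main() {
	fmt.Println("sketch only: wire up a clientset as in the first example and call reportRemainingPods")
}
```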

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:59:49.924: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42100c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:06:14.039: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420338c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:13:23.143: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42068b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:58:55.379: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ad5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:47:46.361: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e36278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:14:42.107: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c61678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:43:30.901: Couldn't delete ns: "e2e-tests-disruption-c7z0w": namespace e2e-tests-disruption-c7z0w was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-c7z0w was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32644

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:54:07.641: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4204a9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:15:55.725: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421528278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:52:07.736: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f3ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:07:44.583: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421128c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:47:28.757: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42075ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:08:39.744: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b12278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:55:04.915: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208be278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:02:15.318: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206f8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:03:58.949: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421168c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:42:33.496: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420954c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:46:33.055: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ae6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36183

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:49:43.953: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b0c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:05:59.249: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420cf5678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:359
Expected error:
    <*errors.errorString | 0xc420eba040>: {
        s: "service verification failed for: 10.0.58.83\nexpected [service1-5wd7j service1-chvbn service1-ssl8z]\nreceived []",
    }
    service verification failed for: 10.0.58.83
    expected [service1-5wd7j service1-chvbn service1-ssl8z]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331

Issues about this test specifically: #26128 #26685 #33408 #36298
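
In this case none of the three `service1-*` pods answered through the service IP (`received []`). The e2e test verifies this by hitting the ClusterIP from inside the cluster; the sketch below is a simpler, related check that only reads the service's Endpoints object (assumed names, recent client-go), which would likewise come back empty when no ready backends exist:

```go
// svccheck.go - illustrative sketch: compare a Service's Endpoints against expected pods.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointTargets returns the names of the pods currently backing the named
// service, read from its Endpoints object. (The e2e test instead curls the
// ClusterIP and records which pods respond; this is only a related check.)
func endpointTargets(cs kubernetes.Interface, ns, svc string) ([]string, error) {
	ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	var targets []string
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			if addr.TargetRef != nil {
				targets = append(targets, addr.TargetRef.Name)
			}
		}
	}
	return targets, nil
}

func main() {
	fmt.Println("sketch only: an empty result here matches the 'received []' symptom above")
}
```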

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:09:27.882: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421451678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32122

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:01:07.560: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b4e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:12:42.089: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42112ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203c5650>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32830

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:57:27.414: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4215d8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:01:53.942: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420df2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:51:03.019: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421090c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:54:45.059: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420764c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:50:34.438: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420cb4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:42:35.969: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210a8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:49:45.368: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4201e1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:13:30.925: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420171678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:49:02.189: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c44278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:05:34.746: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f2e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:49:42.753: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d47678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:53:48.510: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42121d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:11:11.721: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4205dac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 10:04:23.333: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421390278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 09:44:14.517: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420db8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2248/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc420450600>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:52:21.765: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421403678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31400

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:38:51.161: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fd1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:56:18.194: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420cf2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:29:48.453: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f75678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:35:00.924: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213fa278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:44:23.390: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42090cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:32:25.444: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420744278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:33:07.560: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b16278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:46:18.156: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206ba278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:52:47.815: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bc9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:25:21.333: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214d8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:30:15.101: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421622c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc42043c950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:38:55.406: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4200e8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:38:04.239: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421740278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37017

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:28:47.182: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42010cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:33:34.268: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ef8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:36:19.758: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f5cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:38:20.022: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421452278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:29:15.035: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420744c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36554

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:41:11.206: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e0f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:45:14.068: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208ce278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:51:58.290: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421122278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:56:18.971: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f59678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:49:30.331: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420eea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Dec  2 15:26:55.226: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:25:46.485: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421472c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:25:34.675: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421222c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:26:29.561: Couldn't delete ns: "e2e-tests-nettest-dv553": namespace e2e-tests-nettest-dv553 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-nettest-dv553 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:25:25.615: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b42278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:32:02.645: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42147ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:48:47.953: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4217b2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
waiting for tester pod to start
Expected error:
    <*errors.errorString | 0xc4203ac140>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:110

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:55:12.537: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c6ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:49:09.572: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f1cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:35:43.130: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210ca278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:59:31.590: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421205678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:27:03.892: Couldn't delete ns: "e2e-tests-cronjob-bc9jl": namespace e2e-tests-cronjob-bc9jl was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-cronjob-bc9jl was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc42132c7b0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:5, Replicas:6, UpdatedReplicas:5, AvailableReplicas:3, UnavailableReplicas:3, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63616317679, nsec:0, loc:(*time.Location)(0x3d0efa0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616317679, nsec:0, loc:(*time.Location)(0x3d0efa0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:5, Replicas:6, UpdatedReplicas:5, AvailableReplicas:3, UnavailableReplicas:3, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63616317679, nsec:0, loc:(*time.Location)(0x3d0efa0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616317679, nsec:0, loc:(*time.Location)(0x3d0efa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1180

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Dec  2 15:26:43.956: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1993

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:25:22.467: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212d1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:37:17.924: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421211678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:26:30.291: Couldn't delete ns: "e2e-tests-job-mrjph": namespace e2e-tests-job-mrjph was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-job-mrjph was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:38:32.167: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4201fb678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Dec  2 15:33:37.695: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://bootstrap-e2e-minion-group-0src:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:355
Expected success, but got an error:
    <*errors.errorString | 0xc420fe4200>: {
        s: "failed running \"echo \\\"Dec  2 20:21:27 kernel: [0.000000] permanent error\\\" >> /tmp/node-problem-detector-07d23f7b-b8e6-11e6-80e9-0242ac110009/test.log\": error getting SSH client to jenkins@146.148.107.125:22: 'timed out dialing tcp:146.148.107.125:22' (exit code 0)",
    }
    failed running "echo \"Dec  2 20:21:27 kernel: [0.000000] permanent error\" >> /tmp/node-problem-detector-07d23f7b-b8e6-11e6-80e9-0242ac110009/test.log": error getting SSH client to jenkins@146.148.107.125:22: 'timed out dialing tcp:146.148.107.125:22' (exit code 0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:334

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:28:46.891: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f28278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:35:10.285: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42143cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:40:00.712: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209c1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32646

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:30:04.156: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42160c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:43:05.920: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d27678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:38:15.576: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42009d678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:31:59.078: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211ea278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:44:21.042: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e10278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:47:37.740: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f51678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:41:20.484: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421470c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:31:32.212: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213a4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:37:52.921: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b3cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:28:39.636: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421548278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:26:03.804: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420cdc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
    <*errors.errorString | 0xc420f6c0a0>: {
        s: "error while waiting for pods gone rc-light-ctrl: timed out waiting for the condition",
    }
    error while waiting for pods gone rc-light-ctrl: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:309

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:42:03.762: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4200f8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420346a80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32375

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:26:35.876: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42099ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:39:26.122: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b10c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 15:25:34.920: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420635678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2260/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:15:46.167: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4200f6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:40:06.137: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42076b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:359
Expected error:
    <*errors.errorString | 0xc420d2d080>: {
        s: "service verification failed for: 10.0.208.102\nexpected [service2-clxdb service2-g7ntx service2-psqhf]\nreceived [service2-16jnr service2-g7ntx service2-hlvbd]",
    }
    service verification failed for: 10.0.208.102
    expected [service2-clxdb service2-g7ntx service2-psqhf]
    received [service2-16jnr service2-g7ntx service2-hlvbd]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:343

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203d4690>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203a7400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #32830

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:46:51.030: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420be2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:36:22.775: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b5eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:31:55.625: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210a98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:43:34.753: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b304f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:35:44.625: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209644f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:23:23.917: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210d18f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:38:14.839: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42097c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:37:09.999: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ba58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:30:48.548: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4202db8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:40:34.739: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fac4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:30:02.361: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f0f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:25:50.270: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ef64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:68
Expected error:
    <*errors.errorString | 0xc420b27e40>: {
        s: "err waiting for DNS replicas to satisfy 2, got 3: timed out waiting for the condition",
    }
    err waiting for DNS replicas to satisfy 2, got 3: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:67

Issues about this test specifically: #36569

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:33:06.212: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e94ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:25:54.802: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f564f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:30:45.485: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212904f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:38:31.413: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210d38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:31:56.793: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c50ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc420d05c30>: {
        s: "failed to get logs from pod-05f97b0b-b917-11e6-8591-0242ac110008 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-05f97b0b-b917-11e6-8591-0242ac110008)",
    }
    failed to get logs from pod-05f97b0b-b917-11e6-8591-0242ac110008 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-05f97b0b-b917-11e6-8591-0242ac110008)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2190

Issues about this test specifically: #26780

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:57.635: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b4aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:28:44.240: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210ec4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:31:41.626: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421432ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:44
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.194.89 --kubeconfig=/workspace/.kube/config log heapster-v1.2.0-1768504394-pdg8h --namespace=kube-system --container=heapster] []  <nil>  Error from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods heapster-v1.2.0-1768504394-pdg8h)\n [] <nil> 0xc420ee5b60 exit status 1 <nil> <nil> true [0xc4200907d8 0xc420090858 0xc4200908c0] [0xc4200907d8 0xc420090858 0xc4200908c0] [0xc420090820 0xc420090898] [0xd14470 0xd14470] 0xc420f2c7e0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"unknown\") has prevented the request from succeeding (get pods heapster-v1.2.0-1768504394-pdg8h)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.194.89 --kubeconfig=/workspace/.kube/config log heapster-v1.2.0-1768504394-pdg8h --namespace=kube-system --container=heapster] []  <nil>  Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods heapster-v1.2.0-1768504394-pdg8h)
     [] <nil> 0xc420ee5b60 exit status 1 <nil> <nil> true [0xc4200907d8 0xc420090858 0xc4200908c0] [0xc4200907d8 0xc420090858 0xc4200908c0] [0xc420090820 0xc420090898] [0xd14470 0xd14470] 0xc420f2c7e0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server (InternalError): an error on the server ("unknown") has prevented the request from succeeding (get pods heapster-v1.2.0-1768504394-pdg8h)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2090

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:45:44.951: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c18ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:17.451: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c2cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:33:31.475: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e8d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Dec  2 21:37:46.145: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'net/http: TLS handshake timeout'\nTrying to reach: 'https://bootstrap-e2e-minion-group-lwkt:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:15:58.517: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212de4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:45.710: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b204f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:17.453: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210fa4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:23.985: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420efd8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:52:42.915: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209658f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:27:13.280: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4200ad8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:17:24.638: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42118aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:46:24.615: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b264f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:37.697: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420dff8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:14.987: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42134b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:17:32.556: Couldn't delete ns: "e2e-tests-disruption-wjsb3": namespace e2e-tests-disruption-wjsb3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-wjsb3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #32639

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:23:53.622: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d138f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:20:34.760: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42129eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36183

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:32:30.739: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c8aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:03.272: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c404f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:23:47.841: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42068e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35793

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:41:59.719: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421190ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:36.579: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a3b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:34:25.795: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4207138f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:57.332: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211898f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:38:34.117: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4216304f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:26.138: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213f78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:32:28.475: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210e98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:29:12.416: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fe6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:27:17.148: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bc04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33987

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:105
Dec  2 21:28:37.612: Ran echo "Hello from $(whoami)@$(hostname)" on 104.198.188.23:22, got error error getting SSH client to jenkins@104.198.188.23:22: 'timed out dialing tcp:104.198.188.23:22', expected <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:79

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:32:22.215: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210acef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:39:39.614: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b68ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36564

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:35:11.789: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c4d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:23.170: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4203e4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:29:01.229: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210744f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:35:21.777: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e718f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:36:39.336: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211bcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:41:44.493: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213104f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:29:16.465: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d958f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:49:34.486: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211be4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37531

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:27:58.982: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ff98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:55.568: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d424f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:42:18.001: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421424ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:26:43.145: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208e24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:37:05.916: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208ef8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:37:17.380: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b324f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:23:07.365: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213baef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:25:31.986: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a4f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29828

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:40.367: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421026ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27680

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:20:16.718: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211964f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:49:00.712: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420abeef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:26:05.742: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421450ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:25:30.986: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ecb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:32:49.967: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213638f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:40:22.298: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212024f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:31:42.344: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f9a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:20:37.037: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209718f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:49.060: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421202ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:25:59.959: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42036cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:15.550: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212ccef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:36:30.119: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211718f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:46:22.279: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ea84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:13.348: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213164f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:15:51.059: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208faef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:04.052: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b1f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:36:40.201: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206ac4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:33:46.082: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209e64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:26:27.867: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4202dd8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420450ba0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375
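
Most failures in this run fall into two buckets: the "All nodes should be ready after test" teardown check, and tests whose own assertion surfaces the bare error "timed out waiting for the condition". That second string is the generic timeout returned by the polling helper the e2e framework relies on (wait.Poll / wait.ErrWaitTimeout in the Kubernetes util/wait package), not a test-specific message. A minimal standalone sketch of that behaviour, assuming equivalent semantics; the `poll` helper, intervals, and condition below are illustrative, not the framework's code:

```go
// Minimal sketch of the polling behaviour assumed above. The real helpers
// live in the Kubernetes util/wait package (wait.Poll / wait.ErrWaitTimeout);
// this standalone version only mirrors how the error string is produced.
package main

import (
	"errors"
	"fmt"
	"time"
)

// Matches the error text reported by the failing tests.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// poll runs condition every interval until it returns true, returns an error,
// or the overall timeout elapses.
func poll(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	// A condition that never succeeds, e.g. a pod that never reaches Running.
	err := poll(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```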

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:17.903: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42110b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:28:45.286: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214b98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:35:04.327: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4202e38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:07.665: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213b24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203a3570>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:24:03.767: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4207704f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:27:44.276: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420770ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:32:28.206: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bd8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:08.875: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421280ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:30:33.861: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fc18f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:27:31.260: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f164f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:23:16.837: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210144f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:26:19.806: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209664f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:42:51.885: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42111a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:43:10.035: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e418f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:46:07.013: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42112a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:57.384: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211b18f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:26:36.116: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212b38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:27:32.138: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c4b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:49:17.299: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4216f04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:22:17.619: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212aaef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:28:29.400: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4216ba4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:16:08.671: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c924f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:42:38.035: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211eb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:38:19.467: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42170a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:29:51.980: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206518f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:23:09.530: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209bcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:49:36.826: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211ae4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 21:19:44.819: All nodes 

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2265/

Multiple broken tests:

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:31:05.612: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d6e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:04:04.247: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42028bb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:41.353: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211eb168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:105
Dec  3 00:06:32.922: Ran echo "Hello from $(whoami)@$(hostname)" on 104.198.188.23:22, got error error getting SSH client to jenkins@104.198.188.23:22: 'timed out dialing tcp:104.198.188.23:22', expected <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:79

Issues about this test specifically: #26129 #32341
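
The SSH failure above is different from the framework timeouts: the test could not even open a TCP connection to port 22 on the node (104.198.188.23), so no SSH client was ever constructed. A hypothetical standalone reproduction of the same failure mode (not the e2e helper itself):

```go
// Hypothetical reproduction of the dial failure in the SSH test above: if the
// TCP connection to the node's port 22 never completes, the SSH handshake is
// never attempted and the caller only sees a dial timeout.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Node address taken from the failure above; any unreachable host:port
	// shows the same behaviour.
	addr := "104.198.188.23:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("error getting SSH client to jenkins@%s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connection established; the SSH handshake would follow here")
}
```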

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:22:58.960: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421381b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29461

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:09:49.932: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a5bb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:14:42.667: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214fe768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:28:09.943: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ab3168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33987

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:32.233: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ca4768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:40.149: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211d9168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:21.988: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b25b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:05:21.383: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420dc0768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:09:09.498: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ba6768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:26:05.243: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b5fb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31873

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:10:54.500: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d09b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:27:30.890: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212d1168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:14:06.706: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420fe7b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:24:47.627: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421029168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:14:39.080: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420283b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:08:19.906: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421381b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:02:09.465: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420925168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:16:27.884: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4202c3b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:06.465: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4206cdb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:02:10.371: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214f7b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:02:14.338: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213a3b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:04:32.560: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211c9168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:31:10.095: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420441b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:18:03.149: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420dc9168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:16:17.810: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c23b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:15:43.882: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ec0768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:34.352: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210ff168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:05:10.589: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4217ac768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:34:20.299: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421548768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:02:23.592: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4202adb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:31.590: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420c1f168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:21:37.341: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208cf168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:12:41.671: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420cd3168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:14:08.678: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4213a1b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:08:02.931: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420feb168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:05:23.932: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e5d168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:05:36.219: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209bbb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:33.149: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d1e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:08:00.185: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421149168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:07:56.268: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f17168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:07:49.459: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42075bb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:12:12.990: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421211168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:15:53.952: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212e5168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:22:51.895: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208be768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203d33d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32830

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:04:52.362: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e09168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:37.222: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d4bb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:15:20.058: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421454768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:24:34.565: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420dd1b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:17:21.216: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211cb168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:15:20.458: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d28768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:14:18.706: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42110db68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:59:08.166: Couldn't delete ns: "e2e-tests-services-5vm4z": namespace e2e-tests-services-5vm4z was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-services-5vm4z was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #28064 #28569 #34036
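
This failure (like the "Job should scale a job up" one below) is a namespace-teardown timeout rather than a node-readiness problem: one pod was still present in the test namespace when the deletion deadline expired, and that pod had already been marked for deletion ("pods missing deletion timestamp: 0"). A sketch of the count behind that message, assuming a current client-go rather than the release-1.5 client these tests actually used; the kubeconfig path is a placeholder:

```go
// Sketch of the check behind "pods remaining: N, pods missing deletion
// timestamp: M": list the pods still present in the stuck namespace and count
// how many were never marked for deletion. Assumes a recent client-go; the
// 1.5-era framework code is structured differently.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ns := "e2e-tests-services-5vm4z"    // namespace from the failure above
	kubeconfig := "/path/to/kubeconfig" // placeholder path
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	missingDeletionTimestamp := 0
	for _, p := range pods.Items {
		if p.DeletionTimestamp == nil {
			missingDeletionTimestamp++
		}
	}
	fmt.Printf("pods remaining: %d, pods missing deletion timestamp: %d\n",
		len(pods.Items), missingDeletionTimestamp)
}
```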

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:07:48.642: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ccd168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:19.966: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42089a768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
wait for pod "pod-host-path-test" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203e45a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121

Issues about this test specifically: #32122 #38040
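
The bare "timed out waiting for the condition" text in this and several later failures is the generic timeout error produced by the Kubernetes wait/poll helpers (k8s.io/apimachinery/pkg/util/wait in current trees) when a condition function never returns true within the deadline. A minimal sketch of that pattern, with illustrative interval and timeout values:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition every 2s for up to 10s; if it never succeeds, the
	// returned error's message is exactly "timed out waiting for the condition".
	err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
		podGone := false // stand-in for a real check, e.g. a 404 from a pod Get
		return podGone, nil
	})
	if err != nil {
		fmt.Println(err) // prints: timed out waiting for the condition
	}
}
```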

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:13:06.158: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b7c768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:20.309: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211a5b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:59:20.465: Couldn't delete ns: "e2e-tests-job-x0wzb": namespace e2e-tests-job-x0wzb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-job-x0wzb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #29511 #29987 #30238

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:08:40.201: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d6c768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:07:48.745: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421431b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:14:30.485: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f09168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:34:54.804: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421135b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:04:31.840: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a2f168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:27:46.781: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b3b168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:52.806: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4202edb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:24:42.819: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a19168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:19:39.154: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d49b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42044cf30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32375

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:25.816: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214ddb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:05:08.935: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4210cbb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:24:57.601: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421242768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203e45a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:31.172: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e7db68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:07:40.689: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420899b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:26.627: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ccd168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:27:57.780: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4209b1168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:19.620: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f7d168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:19:36.278: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42106bb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36564

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:09:05.151: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4207a6768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:07.461: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420e3fb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36183

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:08:19.356: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420bbc768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27507 #28275

Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:06.474: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214fc768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:04:38.036: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420be1b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:13.610: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421223b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37017

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:15:31.390: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a5c768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31075 #36286

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:16:07.432: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421056768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:15.127: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420df2768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:52.727: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211c2768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:30.440: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208eb168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:20:53.238: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211a5b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:19:09.349: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42069a768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:21:15.346: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4211f9b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:05:09.551: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f3db68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:19:46.779: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421500768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:18:55.591: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d24768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:24.146: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212ae768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:40.800: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420a8d168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:02:03.062: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f6e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:17:52.858: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421451b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:20.055: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4214dfb68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:50.292: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d3e768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:11:03.603: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d19b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:08:36.145: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420279b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:172
Dec  2 23:55:47.895: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-gzdq7 to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1610

Issues about this test specifically: #26678 #29318
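
This one fails before the node-readiness check: the service exists, but its Endpoints object never listed both pods on port 80 within the one-minute window. A minimal client-go sketch (namespace and service name copied from the log, placeholder kubeconfig) that dumps what the Endpoints object actually contains:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Namespace and service name from the failure above.
	ep, err := client.CoreV1().Endpoints("e2e-tests-services-gzdq7").
		Get(context.TODO(), "endpoint-test2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// An empty Subsets list here corresponds to the "to expose endpoints" timeout.
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			target := ""
			if addr.TargetRef != nil {
				target = addr.TargetRef.Name
			}
			for _, port := range subset.Ports {
				fmt.Printf("%s (%s) port %d\n", addr.IP, target, port.Port)
			}
		}
	}
}
```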

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:22:24.630: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc42012b168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:16:04.114: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ed5168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:05:25.977: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ccf168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203ad1b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:25:36.877: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc421464768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:36.752: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420f1db68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:06:36.736: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420664768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:13:06.556: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4208b7168)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:19:18.687: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420ca1b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36554

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:17:35.995: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc4212fa768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:58:10.054: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420393b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  2 23:59:06.133: Couldn't delete ns: "e2e-tests-kubectl-gj4hx": namespace e2e-tests-kubectl-gj4hx was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-gj4hx was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:12:51.327: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420701b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:02:44.729: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420b83b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:01:52.756: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420d24768)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec  3 00:12:12.038: All nodes should be ready after test, Not ready nodes: []*v1.Node{(*v1.Node)(0xc420eb9b68)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

@k8s-github-robot


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2308/

Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc4209c19b0>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-03 18:46:08 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-03 18:46:40 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-03 18:46:08 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.180.1.95 StartTime:2016-12-03 18:46:08 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-03 18:46:10 -0800 PST,FinishedAt:2016-12-03 18:46:40 -0800 PST,ContainerID:docker://9d2f05a03b3def5ba271d82fffbf4835eb56d9127e9b31607303a9b96af8b14a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://9d2f05a03b3def5ba271d82fffbf4835eb56d9127e9b31607303a9b96af8b14a}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-03 18:46:08 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-03 18:46:40 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-03 18:46:08 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.4 PodIP:10.180.1.95 StartTime:2016-12-03 18:46:08 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-03 18:46:10 -0800 PST,FinishedAt:2016-12-03 18:46:40 -0800 PST,ContainerID:docker://9d2f05a03b3def5ba271d82fffbf4835eb56d9127e9b31607303a9b96af8b14a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://9d2f05a03b3def5ba271d82fffbf4835eb56d9127e9b31607303a9b96af8b14a}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Dec  3 18:47:46.154: Expected "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" from server, got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:271

Issues about this test specifically: #27680

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:502
Expected error:
    <*errors.errorString | 0xc4203fc680>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:220

Issues about this test specifically: #32584

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:438
Expected error:
    <*errors.errorString | 0xc4203d3390>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:220

Issues about this test specifically: #28337

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:65
Expected error:
    <*errors.StatusError | 0xc4210e7480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.240.0.2:41972->10.240.0.3:10250: read: connection reset by peer'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-aemj:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.240.0.2:41972->10.240.0.3:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-aemj:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.240.0.2:41972->10.240.0.3:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-aemj:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:326

Issues about this test specifically: #35601

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2401/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2419/

Run so broken it didn't make JUnit output!

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2478/

Multiple broken tests:

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
    <*errors.errorString | 0xc420dda1a0>: {
        s: "failed running \"sudo cat /proc/net/ip_conntrack | grep 'CLOSE_WAIT.*dst=10.240.0.3.*dport=11302' | tail -n 1| awk '{print $3}' \": error getting SSH client to jenkins@35.184.23.30:22: 'dial tcp 35.184.23.30:22: getsockopt: connection timed out' (exit code 0)",
    }
    failed running "sudo cat /proc/net/ip_conntrack | grep 'CLOSE_WAIT.*dst=10.240.0.3.*dport=11302' | tail -n 1| awk '{print $3}' ": error getting SSH client to jenkins@35.184.23.30:22: 'dial tcp 35.184.23.30:22: getsockopt: connection timed out' (exit code 0)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:189

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:359
Expected error:
    <*errors.errorString | 0xc420c242b0>: {
        s: "service verification failed for: 10.0.167.221\nexpected [service1-dmsw7 service1-j0mqp service1-mcgs6]\nreceived []",
    }
    service verification failed for: 10.0.167.221
    expected [service1-dmsw7 service1-j0mqp service1-mcgs6]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:105
Dec  7 02:04:58.894: Ran echo "Hello from $(whoami)@$(hostname)" on 35.184.23.30:22, got error error getting SSH client to jenkins@35.184.23.30:22: 'dial tcp 35.184.23.30:22: getsockopt: connection timed out', expected <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:79

Issues about this test specifically: #26129 #32341
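
The SSH-driven tests in this run fail before running anything on the node: the framework cannot even open TCP port 22 on the master's external IP. A minimal reachability sketch using the IP from the log (the real e2e helpers layer an SSH client on top of a connection like this):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// External master IP taken from the failure above.
	addr := "35.184.23.30:22"
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		// A "connection timed out" here matches the getsockopt error in the log.
		fmt.Printf("cannot reach %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is reachable\n", addr)
}
```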

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:242
Expected success, but got an error:
    <*errors.errorString | 0xc420942200>: {
        s: "failed running \"mkdir /tmp/node-problem-detector-08800eaa-bc64-11e6-934d-0242ac110003; > /tmp/node-problem-detector-08800eaa-bc64-11e6-934d-0242ac110003/test.log\": error getting SSH client to jenkins@35.184.23.30:22: 'dial tcp 35.184.23.30:22: getsockopt: connection timed out' (exit code 0)",
    }
    failed running "mkdir /tmp/node-problem-detector-08800eaa-bc64-11e6-934d-0242ac110003; > /tmp/node-problem-detector-08800eaa-bc64-11e6-934d-0242ac110003/test.log": error getting SSH client to jenkins@35.184.23.30:22: 'dial tcp 35.184.23.30:22: getsockopt: connection timed out' (exit code 0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:157

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/2482/

Multiple broken tests:

Failed: Deferred TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #35658

Failed: DumpClusterLogs {e2e.go}

Terminate testing after 15m after 50m0s timeout during dump cluster logs

Issues about this test specifically: #33722 #37578 #37974 #38206

Failed: TearDown {e2e.go}

Terminate testing after 15m after 50m0s timeout during teardown

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 34 leaked resources
+NAME                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
+bootstrap-e2e-minion-template  n1-standard-2               2016-12-07T04:16:13.293-08:00
+NAME                        LOCATION       SCOPE  NETWORK        MANAGED  INSTANCES
+bootstrap-e2e-minion-group  us-central1-f  zone   bootstrap-e2e  Yes      3
+NAME                             ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+bootstrap-e2e-master             us-central1-f  n1-standard-1               10.240.0.2   35.184.23.30    RUNNING
+bootstrap-e2e-minion-group-d07i  us-central1-f  n1-standard-2               10.240.0.3   104.154.218.17  RUNNING
+bootstrap-e2e-minion-group-hts4  us-central1-f  n1-standard-2               10.240.0.4   104.154.22.109  RUNNING
+bootstrap-e2e-minion-group-yhfy  us-central1-f  n1-standard-2               10.240.0.5   104.198.75.157  RUNNING
+NAME                                                            ZONE           SIZE_GB  TYPE         STATUS
+bootstrap-e2e-dynamic-pvc-97942fee-bc78-11e6-b16f-42010af00002  us-central1-f  1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-979990ef-bc78-11e6-b16f-42010af00002  us-central1-f  1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-979f7104-bc78-11e6-b16f-42010af00002  us-central1-f  1        pd-standard  READY
+bootstrap-e2e-master                                            us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd                                         us-central1-f  20       pd-ssd       READY
+bootstrap-e2e-minion-group-d07i                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-hts4                                 us-central1-f  100      pd-standard  READY
+bootstrap-e2e-minion-group-yhfy                                 us-central1-f  100      pd-standard  READY
+NAME                     REGION       ADDRESS       STATUS
+bootstrap-e2e-master-ip  us-central1  35.184.23.30  IN_USE
+bootstrap-e2e-2f766523-bc77-11e6-b16f-42010af00002  bootstrap-e2e  10.180.0.0/24  us-central1-f/instances/bootstrap-e2e-master             1000
+bootstrap-e2e-39850492-bc77-11e6-b16f-42010af00002  bootstrap-e2e  10.180.1.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-yhfy  1000
+bootstrap-e2e-3af2077c-bc77-11e6-b16f-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-d07i  1000
+bootstrap-e2e-3af87d7a-bc77-11e6-b16f-42010af00002  bootstrap-e2e  10.180.3.0/24  us-central1-f/instances/bootstrap-e2e-minion-group-hts4  1000
+default-route-75c813b0e5f48d14                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                                 1000
+default-route-ad863b142fff66b5                      bootstrap-e2e  10.240.0.0/16                                                           1000
+bootstrap-e2e-default-internal-master         bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node           bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh                     bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd                     bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https                    bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all                      bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-http-alt   bootstrap-e2e  0.0.0.0/0      tcp:80,tcp:8080                                                   bootstrap-e2e-minion
+bootstrap-e2e-minion-bootstrap-e2e-nodeports  bootstrap-e2e  0.0.0.0/0      tcp:30000-32767,udp:30000-32767                                   bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Dec 7, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/3182/
Multiple broken tests:

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:221
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'bootstrap-e2e-1924ecfc-c859-11e6-b9aa-0242ac110003' is already being used by 'bootstrap-e2e-minion-group-8xv4', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'bootstrap-e2e-1924ecfc-c859-11e6-b9aa-0242ac110003' is already being used by 'bootstrap-e2e-minion-group-8xv4', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:558

Issues about this test specifically: #29933 #34111 #38765

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-1924ecfc-c859-11e6-b9aa-0242ac110003              us-central1-f  10       pd-ssd       READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:739
Expected error:
    <*errors.errorString | 0xc420ea9b00>: {
        s: "Gave up waiting for GCE PD \"bootstrap-e2e-1924ecfc-c859-11e6-b9aa-0242ac110003\" to detach from \"bootstrap-e2e-minion-group-8xv4\" after 10m0s",
    }
    Gave up waiting for GCE PD "bootstrap-e2e-1924ecfc-c859-11e6-b9aa-0242ac110003" to detach from "bootstrap-e2e-minion-group-8xv4" after 10m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:738
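
The failed detach, the AfterSuite deletion error, and the leaked-disk report in this run are all consistent with the PD still being attached to the minion when cleanup ran. A sketch, assuming the google.golang.org/api/compute/v1 client, Application Default Credentials, and a placeholder project ID, that asks GCE which instances still hold the disk (zone and disk name are from the log):

```go
package main

import (
	"context"
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		panic(err)
	}
	disk, err := svc.Disks.Get("my-gcp-project", "us-central1-f",
		"bootstrap-e2e-1924ecfc-c859-11e6-b9aa-0242ac110003").Do()
	if err != nil {
		panic(err)
	}
	// Users lists the instance URLs the disk is attached to; a non-empty list
	// explains both the failed detach and the resourceInUseByAnotherResource error.
	for _, u := range disk.Users {
		fmt.Println("still attached to:", u)
	}
}
```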

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/3207/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:716
Expected error:
    <*errors.errorString | 0xc420f22a40>: {
        s: "Gave up waiting for GCE PD \"bootstrap-e2e-7f87b865-c8c4-11e6-82a6-0242ac11000b\" to detach from \"bootstrap-e2e-minion-group-3thp\" after 10m0s",
    }
    Gave up waiting for GCE PD "bootstrap-e2e-7f87b865-c8c4-11e6-82a6-0242ac11000b" to detach from "bootstrap-e2e-minion-group-3thp" after 10m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:715

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:221
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'bootstrap-e2e-7f87b865-c8c4-11e6-82a6-0242ac11000b' is already being used by 'bootstrap-e2e-minion-group-3thp', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'bootstrap-e2e-7f87b865-c8c4-11e6-82a6-0242ac11000b' is already being used by 'bootstrap-e2e-minion-group-3thp', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:558

Issues about this test specifically: #29933 #34111 #38765

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-7f87b865-c8c4-11e6-82a6-0242ac11000b              us-central1-f  10       pd-ssd       READY

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/3511/
Multiple broken tests:

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:221
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'bootstrap-e2e-d548453b-cdec-11e6-a5ed-0242ac11000a' is already being used by 'bootstrap-e2e-minion-group-3z80', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'bootstrap-e2e-d548453b-cdec-11e6-a5ed-0242ac11000a' is already being used by 'bootstrap-e2e-minion-group-3z80', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:558

Issues about this test specifically: #29933 #34111 #38765

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-d548453b-cdec-11e6-a5ed-0242ac11000a              us-central1-f  10       pd-ssd       READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:739
Expected error:
    <*errors.errorString | 0xc42108c660>: {
        s: "Gave up waiting for GCE PD \"bootstrap-e2e-d548453b-cdec-11e6-a5ed-0242ac11000a\" to detach from \"bootstrap-e2e-minion-group-3z80\" after 10m0s",
    }
    Gave up waiting for GCE PD "bootstrap-e2e-d548453b-cdec-11e6-a5ed-0242ac11000a" to detach from "bootstrap-e2e-minion-group-3z80" after 10m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/persistent_volumes.go:738

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/4351/
Multiple broken tests:

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:216
Expected error:
    <*errors.errorString | 0xc420b189f0>: {
        s: "Failed to execute a successful GET within 15m0s, Last response body for http://127.0.0.1/foo, host foo.bar.com:\n\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 15m0s, Last response body for http://127.0.0.1/foo, host foo.bar.com:
    
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress_utils.go:892

Issues about this test specifically: #38556
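
The Ingress failure is a probe loop: repeated GETs against the load balancer address with the virtual host set, none of which succeeded within 15 minutes. A minimal sketch of that kind of probe, using the address and host name from the log (a single request rather than the test's retry loop):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest("GET", "http://127.0.0.1/foo", nil)
	if err != nil {
		panic(err)
	}
	// The ingress routes on the Host header, so it must be set explicitly.
	req.Host = "foo.bar.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}
```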

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: DiffResources {e2e.go}

Error: 15 leaked resources
[ instances ]
+NAME                  ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
+bootstrap-e2e-master  us-central1-f  n1-standard-1               10.240.0.2   104.198.50.170  STOPPING
[ disks ]
+bootstrap-e2e-master                                            us-central1-f  20       pd-standard  READY
+bootstrap-e2e-master-pd                                         us-central1-f  20       pd-ssd       READY
[ addresses ]
+NAME                     REGION       ADDRESS         STATUS
+bootstrap-e2e-master-ip  us-central1  104.198.50.170  IN_USE
[ routes ]
+bootstrap-e2e-d779e325-dbf7-11e6-bfd6-42010af00002  bootstrap-e2e  10.180.2.0/24  us-central1-f/instances/bootstrap-e2e-master  1000
+default-route-10553cf5e1128c54                      bootstrap-e2e  10.240.0.0/16                                                1000
[ routes ]
+default-route-5a415a43a5262571                      bootstrap-e2e  0.0.0.0/0      default-internet-gateway                      1000
[ firewall-rules ]
+bootstrap-e2e-default-internal-master  bootstrap-e2e  10.0.0.0/8     tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp                        bootstrap-e2e-master
+bootstrap-e2e-default-internal-node    bootstrap-e2e  10.0.0.0/8     tcp:1-65535,udp:1-65535,icmp                                      bootstrap-e2e-minion
+bootstrap-e2e-default-ssh              bootstrap-e2e  0.0.0.0/0      tcp:22
+bootstrap-e2e-master-etcd              bootstrap-e2e                 tcp:2380,tcp:2381                           bootstrap-e2e-master  bootstrap-e2e-master
+bootstrap-e2e-master-https             bootstrap-e2e  0.0.0.0/0      tcp:443                                                           bootstrap-e2e-master
+bootstrap-e2e-minion-all               bootstrap-e2e  10.180.0.0/14  tcp,udp,icmp,esp,ah,sctp                                          bootstrap-e2e-minion

Issues about this test specifically: #33373 #33416 #34060

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/4839/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:271
Expected error:
    <*errors.StatusError | 0xc420c2c600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "rpc error: code = 13 desc = etcdserver: request timed out",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    rpc error: code = 13 desc = etcdserver: request timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:853

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:348
Expected error:
    <*errors.StatusError | 0xc420350b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "rpc error: code = 13 desc = etcdserver: request timed out",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    rpc error: code = 13 desc = etcdserver: request timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:853

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:369
Failed after 8.051s.
Expected success, but got an error:
    <*errors.StatusError | 0xc42111e180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "rpc error: code = 13 desc = etcdserver: request timed out",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    rpc error: code = 13 desc = etcdserver: request timed out
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:358

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
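
The test failures in this run are all the same thing: a single `etcdserver: request timed out` (HTTP 500) surfaced by the apiserver. A hedged sketch of how a caller could absorb such a transient error by retrying with backoff (attempt counts and backoff are illustrative; this is not what the e2e framework does today):

```go
package sketch

import "time"

// withRetry runs op up to attempts times, doubling the backoff between tries,
// so that one transient server error does not fail the whole operation.
func withRetry(attempts int, backoff time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return err
}
```

A caller would wrap the flaky request, e.g. `withRetry(3, 2*time.Second, func() error { _, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{}); return err })`, assuming a recent client-go.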

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/5221/
Multiple broken tests:

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
    <*errors.errorString | 0xc420f536b0>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-05 13:43:13.335005802 -0800 PST 2017-02-05 13:43:13.335006038 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-05 13:43:13.763646927 -0800 PST 2017-02-05 13:43:13.221684026 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
    }
    deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-05 13:43:13.335005802 -0800 PST 2017-02-05 13:43:13.335006038 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-05 13:43:13.763646927 -0800 PST 2017-02-05 13:43:13.221684026 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1232

Issues about this test specifically: #31697 #36574 #39785
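
The deployment test above is waiting for specific entries in `Deployment.Status.Conditions` (Available and Progressing with particular reasons). A sketch of reading those conditions, assuming a recent `k8s.io/api`:

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// hasCondition reports whether the deployment has a condition of the given
// type with the given status, e.g. Available=True or Progressing=True.
func hasCondition(d *appsv1.Deployment, t appsv1.DeploymentConditionType, s corev1.ConditionStatus) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == t && c.Status == s {
			return true
		}
	}
	return false
}
```

For example, `hasCondition(d, appsv1.DeploymentAvailable, corev1.ConditionTrue)` is the kind of check that never became true here.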

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:274
Test Panicked
/usr/local/go/src/runtime/panic.go:458

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:76
Expected error:
    <*errors.errorString | 0xc420ee6250>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:451

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:88
Expected error:
    <*errors.errorString | 0xc420246d10>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:861

Issues about this test specifically: #29629 #36270 #37462
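
Both deployment failures in this run reduce to "failed to wait for pods running". A sketch of that wait, assuming a recent client-go (namespace, selector, and intervals are illustrative):

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsRunning polls pods matching selector until every one of them is
// Running, or the timeout expires.
func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("failed to wait for pods running: timed out waiting for %q in %q", selector, ns)
}
```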

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/5240/
Multiple broken tests:

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:73
Expected error:
    <*errors.errorString | 0xc420beb220>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:378

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:76
Expected error:
    <*errors.errorString | 0xc4209b5ec0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:451

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:274
Test Panicked
/usr/local/go/src/runtime/panic.go:458

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/5261/
Multiple broken tests:

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
    <*errors.errorString | 0xc420b13370>: {
        s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-06 15:27:07.536299081 -0800 PST 2017-02-06 15:27:07.536299307 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-06 15:27:07.567632198 -0800 PST 2017-02-06 15:27:07.517299453 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
    }
    deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-06 15:27:07.536299081 -0800 PST 2017-02-06 15:27:07.536299307 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-06 15:27:07.567632198 -0800 PST 2017-02-06 15:27:07.517299453 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1232

Issues about this test specifically: #31697 #36574 #39785

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:88
Expected error:
    <*errors.errorString | 0xc42106a420>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:861

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:192
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:180
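
"Failed to observe pod deletion" means the test never saw a Deleted event for the pod within its window. A sketch of such an observation with a watch, assuming a recent client-go (namespace and pod name are illustrative):

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// observePodDeletion watches a single pod and returns once a Deleted event is
// seen, or fails when the context expires or the watch channel closes.
func observePodDeletion(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	w, err := c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed to observe deletion of pod %s/%s: %v", ns, name, ctx.Err())
		case ev, ok := <-w.ResultChan():
			if !ok {
				return fmt.Errorf("watch on pod %s/%s closed before deletion", ns, name)
			}
			if ev.Type == watch.Deleted {
				return nil
			}
		}
	}
}
```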

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/6241/
Multiple broken tests:

Failed: [k8s.io] NodeProblemDetector [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:382
Expected success, but got an error:
    <*errors.errorString | 0xc421029cc0>: {
        s: "failed running \"echo \\\"Feb 26 05:47:30 kernel: [0.000000] temporary error\\\" >> /tmp/node-problem-detector-0ff3594f-fbe7-11e6-bbb7-0242ac110007/test.log;echo \\\"Feb 26 05:47:30 kernel: [0.000000] temporary error\\\" >> /tmp/node-problem-detector-0ff3594f-fbe7-11e6-bbb7-0242ac110007/test.log;echo \\\"Feb 26 05:47:30 kernel: [0.000000] temporary error\\\" >> /tmp/node-problem-detector-0ff3594f-fbe7-11e6-bbb7-0242ac110007/test.log\": <nil> (exit code 1)",
    }
    failed running "echo \"Feb 26 05:47:30 kernel: [0.000000] temporary error\" >> /tmp/node-problem-detector-0ff3594f-fbe7-11e6-bbb7-0242ac110007/test.log;echo \"Feb 26 05:47:30 kernel: [0.000000] temporary error\" >> /tmp/node-problem-detector-0ff3594f-fbe7-11e6-bbb7-0242ac110007/test.log;echo \"Feb 26 05:47:30 kernel: [0.000000] temporary error\" >> /tmp/node-problem-detector-0ff3594f-fbe7-11e6-bbb7-0242ac110007/test.log": <nil> (exit code 1)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:361

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Expected error:
    <*errors.StatusError | 0xc420dd6900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.240.0.2:40980->10.240.0.5:4194: read: connection reset by peer'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-4675:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.240.0.2:40980->10.240.0.5:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-4675:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.240.0.2:40980->10.240.0.5:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-4675:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Feb 25 22:02:58.031: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/6244/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Feb 25 23:35:48.528: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Feb 25 23:35:54.836: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27196 #28998 #32403 #33341
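
Both HPA failures here are the resource consumer giving up while waiting for the replica count to converge. A sketch of that wait against the ReplicationController these tests scale, assuming a recent client-go (names and intervals are illustrative):

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForRCSize polls a ReplicationController until its ready replica count
// reaches the size the HPA is expected to scale it to.
func waitForRCSize(ctx context.Context, c kubernetes.Interface, ns, name string, size int32, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		rc, err := c.CoreV1().ReplicationControllers(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if rc.Status.ReadyReplicas == size {
			return nil
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("timeout waiting %v for pods size to be %d", timeout, size)
}
```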

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc420a243b0>: {
        s: "failed to get logs from pod-3b74ba0e-fbf4-11e6-9322-0242ac110009 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-3b74ba0e-fbf4-11e6-9322-0242ac110009)",
    }
    failed to get logs from pod-3b74ba0e-fbf4-11e6-9322-0242ac110009 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-3b74ba0e-fbf4-11e6-9322-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2198

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/6304/
Multiple broken tests:

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server could not find the requested resource (get pods pod-configmaps-3f7286bc-fcf1-11e6-9ee7-0242ac110006)", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc4211cea00), Code:404}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:336

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:266
Expected error:
    <*errors.StatusError | 0xc421256980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"echoserver-sourceip\" not found",
            Reason: "NotFound",
            Details: {
                Name: "echoserver-sourceip",
                Group: "",
                Kind: "pods",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "echoserver-sourceip" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:663

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:418
Expected error:
    <*errors.StatusError | 0xc42089ba80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "pods \"test-pod\" not found",
            Reason: "NotFound",
            Details: {Name: "test-pod", Group: "", Kind: "pods", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "test-pod" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:402

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8151/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:375
Apr 14 22:11:03.279: Entry to guestbook wasn't correctly added in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1732

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:409
Expected error:
    <*errors.errorString | 0xc42043f190>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:387

Issues about this test specifically: #28106 #35197 #37482

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:242
Expected error:
    <*errors.errorString | 0xc4207d35a0>: {
        s: "gave up waiting for pod 'pvc-tester-dz1wc' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pvc-tester-dz1wc' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:395

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:215
Apr 14 22:06:59.222: Timed out waiting for service multi-endpoint-test in namespace e2e-tests-services-vfq1z to expose endpoints map[pod1:[100]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #29831

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8199/
Multiple broken tests:

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:401
Expected error:
    <*errors.errorString | 0xc4213f1b50>: {
        s: "expected \"content of file \\\"/etc/projected-configmap-volume/data-1\\\": value-1\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: content of file \"/etc/projected-configmap-volume/data-1\": value-1",
    }
    expected "content of file \"/etc/projected-configmap-volume/data-1\": value-1" in container output: Expected
        <string>: 
    to contain substring
        <string>: content of file "/etc/projected-configmap-volume/data-1": value-1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218

Failed: [k8s.io] Projected should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:953
Expected error:
    <*errors.errorString | 0xc4213c7960>: {
        s: "expected \"33554432\\n\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: 33554432\n    ",
    }
    expected "33554432\n" in container output: Expected
        <string>: 
    to contain substring
        <string>: 33554432
        
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:68
Expected error:
    <*errors.StatusError | 0xc421701600>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.128.0.2:54996->10.128.0.4:4194: read: connection reset by peer'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-1j4m:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.128.0.2:54996->10.128.0.4:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-1j4m:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.128.0.2:54996->10.128.0.4:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-1j4m:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #37435

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:504
Apr 15 23:07:34.576: Missing "^Accepted client connection$" from log: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:527

Failed: [k8s.io] Secrets should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:425
Expected error:
    <*errors.errorString | 0xc4214d1970>: {
        s: "expected \"data_1=value-1\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: data_1=value-1",
    }
    expected "data_1=value-1" in container output: Expected
        <string>: 
    to contain substring
        <string>: data_1=value-1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc4213f9920>: {
        s: "expected pod \"pod-9ab9ea7e-226a-11e7-8448-0242ac110007\" success: pod \"pod-9ab9ea7e-226a-11e7-8448-0242ac110007\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-15 23:04:52 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-15 23:04:52 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-15 23:04:51 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.128.0.4 PodIP:10.180.2.69 StartTime:2017-04-15 23:04:52 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-04-15 23:07:08 -0700 PDT,ContainerID:docker://42c2cf53e97308698faa850e3efdd764e22d225dd8c66e0ca3c4d604c4b76d5a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/mounttest-user:0.5 ImageID:docker://sha256:60f713246b6752f3a75e3cf628664df8e11ad5c03d6209bf3ea3e26802aece52 ContainerID:docker://42c2cf53e97308698faa850e3efdd764e22d225dd8c66e0ca3c4d604c4b76d5a}] QOSClass:BestEffort}",
    }
    expected pod "pod-9ab9ea7e-226a-11e7-8448-0242ac110007" success: pod "pod-9ab9ea7e-226a-11e7-8448-0242ac110007" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-15 23:04:52 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-15 23:04:52 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-04-15 23:04:51 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.128.0.4 PodIP:10.180.2.69 StartTime:2017-04-15 23:04:52 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-04-15 23:07:08 -0700 PDT,ContainerID:docker://42c2cf53e97308698faa850e3efdd764e22d225dd8c66e0ca3c4d604c4b76d5a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/mounttest-user:0.5 ImageID:docker://sha256:60f713246b6752f3a75e3cf628664df8e11ad5c03d6209bf3ea3e26802aece52 ContainerID:docker://42c2cf53e97308698faa850e3efdd764e22d225dd8c66e0ca3c4d604c4b76d5a}] QOSClass:BestEffort}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
Expected error:
    <*errors.errorString | 0xc420c063f0>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: 
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8210/
Multiple broken tests:

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:118
Expected
    <*errors.errorString | 0xc4203d3b20>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:96

Issues about this test specifically: #31936

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:26.026: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341
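
Many of the failures in this run are not about the test itself but about the post-test check that every node is Ready (here `bootstrap-e2e-minion-group-9tv3` was not). A sketch of that check, assuming a recent client-go:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notReadyNodes lists all nodes and returns the names of those whose Ready
// condition is not True; the suite fails the test when this list is non-empty.
func notReadyNodes(ctx context.Context, c kubernetes.Interface) ([]string, error) {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var notReady []string
	for _, n := range nodes.Items {
		ready := false
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			notReady = append(notReady, n.Name)
		}
	}
	return notReady, nil
}
```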

Failed: [k8s.io] Projected should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:917
Expected error:
    <*errors.errorString | 0xc420406030>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:35.188: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #30981

Failed: [k8s.io] PodPreset should not modify the pod on conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:41.568: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:40.727: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:44.767: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:16.330: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:85
Expected error:
    <*errors.errorString | 0xc4203c2230>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:30:03.476: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:493
Apr 16 04:25:27.546: Failed to open websocket to wss://35.184.170.146:443/api/v1/namespaces/e2e-tests-port-forwarding-xmkvw/pods/pfpod/portforward?ports=80: websocket.Dial wss://35.184.170.146:443/api/v1/namespaces/e2e-tests-port-forwarding-xmkvw/pods/pfpod/portforward?ports=80: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:407

Issues about this test specifically: #40977

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:41.987: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Timed out after 300.000s.
Expected
    <string>: failed to get container status {"docker" "e585b3f4ae263371bf602b34c5e8370ce3d7bf4cdd0c6c6842ee622c9db53121"}: rpc error: code = 14 desc = grpc: the connection is unavailable
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:336

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:68
Expected error:
    <*errors.StatusError | 0xc421788380>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp 10.128.0.5:4194: getsockopt: connection refused'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-9tv3:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp 10.128.0.5:4194: getsockopt: connection refused'\nTrying to reach: 'http://bootstrap-e2e-minion-group-9tv3:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp 10.128.0.5:4194: getsockopt: connection refused'\nTrying to reach: 'http://bootstrap-e2e-minion-group-9tv3:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #37435

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.errorString | 0xc420407d10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:664

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:16.104: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Garbage collector should orphan pods created by rc if delete options say so {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:10.704: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:15.604: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28503

Failed: [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:35.386: Couldn't delete ns: "e2e-tests-volume-4rbbb": namespace e2e-tests-volume-4rbbb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-volume-4rbbb was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:274

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:30:01.282: Couldn't delete ns: "e2e-tests-container-probe-r6zvf": namespace e2e-tests-container-probe-r6zvf was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-container-probe-r6zvf was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:274

Issues about this test specifically: #28084

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:40.532: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr 16 04:29:24.006: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-9tv3"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #29521

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8365/
Multiple broken tests:

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:382
Timed out after 240.000s.
Expected
    <string>: 
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:379

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:130
Expected error:
    <*errors.errorString | 0xc420c96160>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.52.80 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-hm2fl ss-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.4:10250: getsockopt: connection refused\n [] <nil> 0xc420b8ed80 exit status 1 <nil> <nil> true [0xc4203d4168 0xc4203d4180 0xc4203d4198] [0xc4203d4168 0xc4203d4180 0xc4203d4198] [0xc4203d4178 0xc4203d4190] [0x1277b60 0x1277b60] 0xc420e46ea0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: dial tcp 10.128.0.4:10250: getsockopt: connection refused\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.52.80 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-hm2fl ss-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.4:10250: getsockopt: connection refused
     [] <nil> 0xc420b8ed80 exit status 1 <nil> <nil> true [0xc4203d4168 0xc4203d4180 0xc4203d4198] [0xc4203d4168 0xc4203d4180 0xc4203d4198] [0xc4203d4178 0xc4203d4190] [0x1277b60 0x1277b60] 0xc420e46ea0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: dial tcp 10.128.0.4:10250: getsockopt: connection refused
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:125

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Timed out after 300.000s.
Expected
    <string>: 
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:336
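
Both "updates should be reflected in volume" failures above follow the same pattern: the ConfigMap was updated, but the file inside the pod never showed the new value within the poll window. A sketch of that check driven from the harness via kubectl exec (paths and names are illustrative, not the test's actual mount points):

```go
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForFileContent cats a file inside the pod until it contains want, or
// the timeout expires; the last content read is included in the error.
func waitForFileContent(ns, pod, path, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	last := ""
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "exec", "-n", ns, pod, "--", "cat", path).CombinedOutput()
		last = string(out)
		if err == nil && strings.Contains(last, want) {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %s:%s, last content: %q", want, pod, path, last)
}
```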

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8400/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc420725960>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 14, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 14, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1038

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:28:30.803: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.errorString | 0xc421107b30>: {
        s: "error while waiting for pods gone rc-light: timed out waiting for the condition",
    }
    error while waiting for pods gone rc-light: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:381

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:22:55.264: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:26:15.252: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #32936

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:24:46.524: Couldn't delete ns: "e2e-tests-disruption-wfn3t": namespace e2e-tests-disruption-wfn3t was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-wfn3t was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:277

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:22:59.494: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:343
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.165.71 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-s9wg4] []  0xc420d34320 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc4215ebad0 exit status 1 <nil> <nil> true [0xc420f0e758 0xc420f0e780 0xc420f0e790] [0xc420f0e758 0xc420f0e780 0xc420f0e790] [0xc420f0e760 0xc420f0e778 0xc420f0e788] [0x1277be0 0x1277ce0 0x1277ce0] 0xc4210013e0 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.165.71 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-s9wg4] []  0xc420d34320 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc4215ebad0 exit status 1 <nil> <nil> true [0xc420f0e758 0xc420f0e780 0xc420f0e790] [0xc420f0e758 0xc420f0e780 0xc420f0e790] [0xc420f0e760 0xc420f0e778 0xc420f0e788] [0x1277be0 0x1277ce0 0x1277ce0] 0xc4210013e0 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2120

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:26:34.939: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #29050

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:23:24.612: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:27:46.956: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27507 #28275 #38583

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:23:34.115: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:28:28.220: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:27:40.571: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:213
Expected error:
    <*errors.errorString | 0xc4203ef750>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:200

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:27:02.923: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #27503

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:23:05.011: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:22:54.057: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:26:50.426: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #33985

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*strconv.NumError | 0xc420b283c0>: {
        Func: "ParseInt",
        Num: "",
        Err: {s: "invalid syntax"},
    }
    strconv.ParseInt: parsing "": invalid syntax
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:193

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:23:36.154: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #37914

Failed: [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:26:12.342: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:386
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:385

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:24:06.198: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
wait for pod "pod-d407c13b-25b2-11e7-8acf-0242ac110009" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4203c51c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] ReplicationController should adopt matching pods on creation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:27:04.506: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:26:25.645: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:23:00.201: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:190
Expected error:
    <*errors.errorString | 0xc421168700>: {
        s: "pod \"pvc-tester-v8b4s\" did not exit with Success: pod \"pvc-tester-v8b4s\" failed to reach Success: gave up waiting for pod 'pvc-tester-v8b4s' to be 'success or failure' after 5m0s",
    }
    pod "pvc-tester-v8b4s" did not exit with Success: pod "pvc-tester-v8b4s" failed to reach Success: gave up waiting for pod 'pvc-tester-v8b4s' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:44

Failed: [k8s.io] CronJob should delete successful finished jobs with limit of one successful job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:23:13.388: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:24:12.907: Couldn't delete ns: "e2e-tests-port-forwarding-scb5j": namespace e2e-tests-port-forwarding-scb5j was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-port-forwarding-scb5j was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:277
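
A hedged sketch of how the leftover pod blocking namespace deletion could be inspected by hand (namespace name taken from the failure above; the remaining pod's name is not in the log):

    # List whatever is still blocking namespace deletion
    kubectl --kubeconfig=/workspace/.kube/config get pods \
      --namespace=e2e-tests-port-forwarding-scb5j -o wide
    # Dump the namespace object to check its deletion timestamp and finalizers
    kubectl --kubeconfig=/workspace/.kube/config get namespace \
      e2e-tests-port-forwarding-scb5j -o yaml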

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:27:49.757: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
Expected error:
    <*errors.errorString | 0xc4203fde40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3974

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:23:37.280: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:32:21.386: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:24:02.783: Couldn't delete ns: "e2e-tests-cronjob-dchsk": namespace e2e-tests-cronjob-dchsk was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-cronjob-dchsk was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:277

Failed: [k8s.io] Projected should provide container's memory limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:935
wait for pod "downwardapi-volume-d0dc60a5-25b2-11e7-b2ca-0242ac110009" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc42038dbf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:124
Apr 20 03:26:08.321: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-hzm0"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:360

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8407/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:154
Timed out after 120.000s.
Expected
    <string>: 
to contain substring
    <string>: builder="foo"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:270
Expected error:
    <*errors.errorString | 0xc421463de0>: {
        s: "pod \"pvc-tester-d7sfn\" did not exit with Success: pod \"pvc-tester-d7sfn\" failed to reach Success: gave up waiting for pod 'pvc-tester-d7sfn' to be 'success or failure' after 5m0s",
    }
    pod "pvc-tester-d7sfn" did not exit with Success: pod "pvc-tester-d7sfn" failed to reach Success: gave up waiting for pod 'pvc-tester-d7sfn' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:269

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
Apr 20 07:01:19.100: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-hv83h to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8450/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Apr 21 05:04:51.904: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.100.2.193:8080/dial?request=hostName&protocol=udp&host=10.100.2.171&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:217

Issues about this test specifically: #32830
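
A minimal sketch of the same connectivity check run by hand, assuming it is issued from any pod with access to the cluster pod network (pod and namespace names below are placeholders; the URL is the one from the failure output):

    # Ask the dialer pod (10.100.2.193) to send a UDP hostName request to the target netserver
    kubectl exec <dialer-pod> --namespace=<test-namespace> -- \
      curl -q -s 'http://10.100.2.193:8080/dial?request=hostName&protocol=udp&host=10.100.2.171&port=8081&tries=1'

An empty result (retrieved map[]) means the UDP request to 10.100.2.171:8081 produced no reply, i.e. pod-to-pod UDP traffic was not flowing at the time.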

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:123
Apr 21 04:59:38.715: pod e2e-tests-container-probe-67njq/liveness-exec - expected number of restarts: 1, found restarts: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:405

Issues about this test specifically: #30264
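
To confirm the symptom by hand, the container restart count can be read straight from the pod status (namespace taken from the failure; the pod is named liveness-exec in this test):

    # Expected to reach 1 once the liveness probe fails; the run above still showed 0
    kubectl get pod liveness-exec --namespace=e2e-tests-container-probe-67njq \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'

This is an inspection sketch only; the failure itself is that the kubelet had not restarted the container after "cat /tmp/health" began failing within the probe window.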

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:375
Apr 21 05:03:06.594: Entry to guestbook wasn't correctly added in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1757

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8486/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:270
Expected error:
    <*errors.errorString | 0xc42134a5d0>: {
        s: "pod \"pvc-tester-4lwqj\" did not exit with Success: pod \"pvc-tester-4lwqj\" failed to reach Success: gave up waiting for pod 'pvc-tester-4lwqj' to be 'success or failure' after 5m0s",
    }
    pod "pvc-tester-4lwqj" did not exit with Success: pod "pvc-tester-4lwqj" failed to reach Success: gave up waiting for pod 'pvc-tester-4lwqj' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:269

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc4212f0c90>: {
        s: "expected \"perms of file \\\"/test-volume/test-file\\\": -rwxrwxrwx\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: perms of file \"/test-volume/test-file\": -rwxrwxrwx",
    }
    expected "perms of file \"/test-volume/test-file\": -rwxrwxrwx" in container output: Expected
        <string>: 
    to contain substring
        <string>: perms of file "/test-volume/test-file": -rwxrwxrwx
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:156
Timed out after 300.000s.
Expected
    <string>: 
to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:155

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:515
Apr 21 22:42:06.606: Missing "^Accepted client connection$" from log: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:527

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8580/
Multiple broken tests:

Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:387
Expected error:
    <*errors.errorString | 0xc4212002b0>: {
        s: "expected \"content of file \\\"/etc/projected-configmap-volume/data-1\\\": value-1\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: content of file \"/etc/projected-configmap-volume/data-1\": value-1",
    }
    expected "content of file \"/etc/projected-configmap-volume/data-1\": value-1" in container output: Expected
        <string>: 
    to contain substring
        <string>: content of file "/etc/projected-configmap-volume/data-1": value-1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:130
Expected error:
    <*errors.errorString | 0xc420fe1610>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.246.235 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-3p8b3 ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused\n [] <nil> 0xc4217a7d70 exit status 1 <nil> <nil> true [0xc4204be1b0 0xc4204be1f0 0xc4204be310] [0xc4204be1b0 0xc4204be1f0 0xc4204be310] [0xc4204be1d0 0xc4204be248] [0x12b1680 0x12b1680] 0xc4212f0780 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.246.235 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-3p8b3 ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused
     [] <nil> 0xc4217a7d70 exit status 1 <nil> <nil> true [0xc4204be1b0 0xc4204be1f0 0xc4204be310] [0xc4204be1b0 0xc4204be1f0 0xc4204be310] [0xc4204be1d0 0xc4204be248] [0x12b1680 0x12b1680] 0xc4212f0780 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:125

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:260
Expected error:
    <*errors.errorString | 0xc421411ed0>: {
        s: "pod \"pvc-tester-kxtfn\" did not exit with Success: pod \"pvc-tester-kxtfn\" failed to reach Success: gave up waiting for pod 'pvc-tester-kxtfn' to be 'success or failure' after 5m0s",
    }
    pod "pvc-tester-kxtfn" did not exit with Success: pod "pvc-tester-kxtfn" failed to reach Success: gave up waiting for pod 'pvc-tester-kxtfn' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:259

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8641/
Multiple broken tests:

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc420cffaf0>: {
        s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwxrwx\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: mode of file \"/test-volume\": dtrwxrwxrwx",
    }
    expected "mode of file \"/test-volume\": dtrwxrwxrwx" in container output: Expected
        <string>: 
    to contain substring
        <string>: mode of file "/test-volume": dtrwxrwxrwx
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:194
Expected error:
    <*errors.errorString | 0xc4209f2930>: {
        s: "Failed to execute a successful GET within 15m0s, Last response body for http://35.188.68.84/foo, host foo.bar.com:\n\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 15m0s, Last response body for http://35.188.68.84/foo, host foo.bar.com:
    
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ingress_utils.go:922

Issues about this test specifically: #38556

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:150
Apr 25 00:00:03.343: pod e2e-tests-container-probe-444t5/liveness-exec - expected number of restarts: 0, found restarts: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:405

Issues about this test specifically: #37914

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8643/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:270
Expected error:
    <*errors.errorString | 0xc4212d30d0>: {
        s: "pod \"pvc-tester-xjgvw\" did not exit with Success: pod \"pvc-tester-xjgvw\" failed to reach Success: gave up waiting for pod 'pvc-tester-xjgvw' to be 'success or failure' after 5m0s",
    }
    pod "pvc-tester-xjgvw" did not exit with Success: pod "pvc-tester-xjgvw" failed to reach Success: gave up waiting for pod 'pvc-tester-xjgvw' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:269

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:194
Expected error:
    <*errors.errorString | 0xc421416080>: {
        s: "Failed to execute a successful GET within 15m0s, Last response body for https://35.188.31.115/foo, host foo.bar.com:\n\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 15m0s, Last response body for https://35.188.31.115/foo, host foo.bar.com:
    
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ingress_utils.go:922

Issues about this test specifically: #38556

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:68
Expected error:
    <*errors.StatusError | 0xc42176a080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.128.0.2:42830->10.128.0.3:4194: read: connection reset by peer'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-8krp:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.128.0.2:42830->10.128.0.3:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-8krp:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.128.0.2:42830->10.128.0.3:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-8krp:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #37435

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8668/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    [the two lines above repeat, unchanged, for the remainder of the 120s polling window; condensed here]
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124

Issues about this test specifically: #43335
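
A hedged sketch of the equivalent manual steps (the test itself drives this through the client API; pod and namespace names below are placeholders): add the third label the test waits for, then re-read the projected file.

    # Add the label the test polls for, then re-read the downward API volume file
    kubectl label pod <downwardapi-volume-pod> key3=value3 --namespace=<test-namespace>
    kubectl exec <downwardapi-volume-pod> --namespace=<test-namespace> -- cat /etc/labels

In the failing run the file kept serving only key1/key2 for the full 120s window, so the kubelet never refreshed the downward API volume with key3="value3".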

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:382
Timed out after 240.000s.
Expected
    <string>: Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    [the same "Error reading file /etc/projected-secret-volumes/create/data-1" line repeats for the remainder of the 240s polling window; condensed here]
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:379

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.errorString | 0xc4204509f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:274

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Apr 25 13:55:20.889: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.100.1.204 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:123
Apr 25 13:52:39.774: pod e2e-tests-container-probe-pgb9d/liveness-exec - expected number of restarts: 1, found restarts: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:405

Issues about this test specifically: #30264

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8731/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:382
Timed out after 240.000s.
Expected
    <string>: 
to contain substring
    <string>: Error reading file /etc/projected-secret-volumes/create/data-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:349

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:69
Expected error:
    <*errors.errorString | 0xc420fd6e80>: {
        s: "expected \"perms of file \\\"/test-volume/test-file\\\": -rw-r--r--\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: perms of file \"/test-volume/test-file\": -rw-r--r--",
    }
    expected "perms of file \"/test-volume/test-file\": -rw-r--r--" in container output: Expected
        <string>: 
    to contain substring
        <string>: perms of file "/test-volume/test-file": -rw-r--r--
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:215
Apr 26 19:10:12.160: Timed out waiting for service multi-endpoint-test in namespace e2e-tests-services-j87xn to expose endpoints map[pod1:[100] pod2:[101]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #29831

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
Apr 26 19:03:57.427: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-twflq to expose endpoints map[pod1:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8733/
Multiple broken tests:

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:123
Apr 26 20:35:01.962: pod e2e-tests-container-probe-kfn6s/liveness-exec - expected number of restarts: 1, found restarts: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:405

Issues about this test specifically: #30264

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Apr 26 20:37:58.496: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.100.3.171:8080/hostName
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1164
Apr 26 20:44:49.582: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1145

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:81
Expected error:
    <*errors.errorString | 0xc4203bcc00>: {
        s: "expected \"content of file \\\"/etc/secret-volume/data-1\\\": value-1\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: content of file \"/etc/secret-volume/data-1\": value-1",
    }
    expected "content of file \"/etc/secret-volume/data-1\": value-1" in container output: Expected
        <string>: 
    to contain substring
        <string>: content of file "/etc/secret-volume/data-1": value-1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:274
0 (0; 2m7.253078653s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.2574298s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.254170021s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.255029337s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.254866593s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.259495309s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.255566484s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:443: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:443: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:443/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.255986498s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.256169298s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.256504324s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.257939286s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.258222955s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.258761566s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.258911121s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.259202154s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.259290261s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.259785198s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.275200472s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.283972216s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.28978038s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.293699183s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.298137827s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.303722155s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.327204825s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.328432625s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:1080: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:1080/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.329070879s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.332980679s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.337296645s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.342695515s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.349997292s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:462: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:462/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.356106706s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:162: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:162/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.364288052s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'\nTrying to reach: 'https://10.100.3.179:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:460: getsockopt: connection timed out'
Trying to reach: 'https://10.100.3.179:460/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.372428376s): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:80: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:80: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:80/' }],RetryAfterSeconds:0,} Code:503}
0 (0; 2m7.393380589s): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'\nTrying to reach: 'http://10.100.3.179:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp 10.100.3.179:160: getsockopt: connection timed out'
Trying to reach: 'http://10.100.3.179:160/' }],RetryAfterSeconds:0,} Code:503}
1 (503; 50.617958ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 56.898085ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 56.949148ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.025436ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.095842ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.261806ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.083005ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.142003ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.191953ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.304163ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.366627ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 57.383774ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (0; 57.752608ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
1 (503; 64.649002ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 64.659295ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 64.634168ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 64.799264ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 64.631676ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (503; 64.848744ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
1 (0; 64.971784ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :80: getsockopt: connection refused'\nTrying to reach: 'http://:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :80: getsockopt: connection refused'
Trying to reach: 'http://:80/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 65.277612ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 65.441364ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 65.759427ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 66.090408ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 71.986358ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 73.427393ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 73.725903ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 73.963502ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 74.119355ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 74.570655ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 74.769549ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 75.122938ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :460: getsockopt: connection refused'\nTrying to reach: 'https://:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :460: getsockopt: connection refused'
Trying to reach: 'https://:460/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 75.302874ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :462: getsockopt: connection refused'\nTrying to reach: 'https://:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :462: getsockopt: connection refused'
Trying to reach: 'https://:462/' }],RetryAfterSeconds:0,} Code:503}
1 (0; 110.432873ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:User "system:anonymous" cannot get path "/". Reason:Forbidden Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse User "system:anonymous" cannot get path "/". }],RetryAfterSeconds:0,} Code:403}
2 (0; 19.027328ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 19.445269ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 20.5064ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
2 (503; 36.676254ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (0; 37.2249ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:160/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
2 (503; 37.2096ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 37.226917ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 37.379666ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 37.547374ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 37.312407ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (0; 40.103635ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 40.383904ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 40.757885ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 40.944675ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 41.273632ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 41.621111ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :1080: getsockopt: connection refused'\nTrying to reach: 'http://:1080/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :1080: getsockopt: connection refused'
Trying to reach: 'http://:1080/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 41.981833ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:162/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :162: getsockopt: connection refused'\nTrying to reach: 'http://:162/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :162: getsockopt: connection refused'
Trying to reach: 'http://:162/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 42.01163ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 43.354556ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/proxy-service-sdlww-zpvcn/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :80: getsockopt: connection refused'\nTrying to reach: 'http://:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :80: getsockopt: connection refused'
Trying to reach: 'http://:80/' }],RetryAfterSeconds:0,} Code:503}
2 (503; 47.308505ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.142137ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.218281ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.378564ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:443/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.206333ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.455709ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.262121ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:80/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.185355ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.474905ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.637823ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:81/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.224787ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (503; 51.252029ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
2 (0; 51.929772ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :460: getsockopt: connection refused'\nTrying to reach: 'https://:460/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :460: getsockopt: connection refused'
Trying to reach: 'https://:460/' }],RetryAfterSeconds:0,} Code:503}
2 (0; 54.040983ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:User "system:anonymous" cannot get path "/". Reason:Forbidden Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse User "system:anonymous" cannot get path "/". }],RetryAfterSeconds:0,} Code:403}
2 (0; 54.077283ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/https:proxy-service-sdlww-zpvcn:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :462: getsockopt: connection refused'\nTrying to reach: 'https://:462/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :462: getsockopt: connection refused'
Trying to reach: 'https://:462/' }],RetryAfterSeconds:0,} Code:503}
3 (503; 33.327988ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (0; 38.475807ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/pods/http:proxy-service-sdlww-zpvcn:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'dial tcp :160: getsockopt: connection refused'\nTrying to reach: 'http://:160/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'dial tcp :160: getsockopt: connection refused'
Trying to reach: 'http://:160/' }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.599604ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.501728ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.706371ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.671837ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.579887ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.804578ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.689631ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/http:proxy-service-sdlww:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.794482ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.674065ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse unknown }],RetryAfterSeconds:0,} Code:503}
3 (503; 38.691872ms): path /api/v1/namespaces/e2e-tests-proxy-pp3md/services/https:proxy-service-sdlww:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("unknown") has prevented the request from succeeding Reason:InternalErr
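
For anyone reproducing this by hand: the failing paths above are the standard apiserver proxy routes for the test service and pod. A minimal sketch of probing one of them manually, assuming kubectl proxy is running locally on port 8001 and that the e2e namespace and service names from the log still exist (this is a hypothetical local session, not what the CI job ran):

# Start a local proxy to the apiserver.
kubectl proxy --port=8001 &
# New-style proxy subresource path for the service's named port "portname1":
curl -s http://localhost:8001/api/v1/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/proxy/
# Legacy /api/v1/proxy/... form, which the test exercises as well:
curl -s http://localhost:8001/api/v1/proxy/namespaces/e2e-tests-proxy-pp3md/services/proxy-service-sdlww:portname1/

A 503 carrying "connection timed out", as in the log, points at the backing pod rather than at the apiserver proxy itself.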

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8755/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:501
Apr 27 07:03:14.178: Pod did not start running: pod ran to completion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:271

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:344
Apr 27 07:12:35.441: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
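
When triaging one of these, the usual local workflow is to re-run just the failing spec with a ginkgo focus instead of the broad skip list above. A rough sketch, assuming an already-built tree and an existing GCE test cluster; the focus regex below is only an example (here, the port-forwarding spec above), not what the CI job ran:

export KUBERNETES_PROVIDER=gce
# Focused re-run of a single spec; mirrors the CI invocation but narrows the selection.
./hack/ginkgo-e2e.sh --ginkgo.focus='Port\sforwarding.*NO\sDATA'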

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.errorString | 0xc420279180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:246

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:488
Apr 27 07:03:14.729: Pod did not start running: pod ran to completion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:215

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8756/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc42029b580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:49
Expected error:
    <*errors.errorString | 0xc420c0df10>: {
        s: "expected \"downwardapi-volume-c5a8c7de-2b59-11e7-b58a-0242ac110007\\n\" in container output: Expected\n    <string>: failed to open log file \"/var/log/pods/c5a9b112-2b59-11e7-83b1-42010a800002/client-container_0.log\": open /var/log/pods/c5a9b112-2b59-11e7-83b1-42010a800002/client-container_0.log: no such file or directory\nto contain substring\n    <string>: downwardapi-volume-c5a8c7de-2b59-11e7-b58a-0242ac110007\n    ",
    }
    expected "downwardapi-volume-c5a8c7de-2b59-11e7-b58a-0242ac110007\n" in container output: Expected
        <string>: failed to open log file "/var/log/pods/c5a9b112-2b59-11e7-83b1-42010a800002/client-container_0.log": open /var/log/pods/c5a9b112-2b59-11e7-83b1-42010a800002/client-container_0.log: no such file or directory
    to contain substring
        <string>: downwardapi-volume-c5a8c7de-2b59-11e7-b58a-0242ac110007
        
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195
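
The "failed to open log file /var/log/pods/<uid>/client-container_0.log" wording means the framework could not read the container's output from the node at all. A hedged way to look at the same thing, reusing the pod name and log path from the message (add --namespace for the test namespace of that run; node-level inspection assumes SSH access to the minion):

    kubectl logs downwardapi-volume-c5a8c7de-2b59-11e7-b58a-0242ac110007 -c client-container
    # on the node itself: ls -l /var/log/pods/c5a9b112-2b59-11e7-83b1-42010a800002/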

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:85
Expected error:
    <*errors.errorString | 0xc4202b9220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:98
Expected error:
    <*errors.errorString | 0xc4202bbde0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:86

Issues about this test specifically: #26191

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8832/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:215
Apr 28 21:23:34.453: Timed out waiting for service multi-endpoint-test in namespace e2e-tests-services-f0b6w to expose endpoints map[pod1:[100] pod2:[101]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #29831

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Apr 28 21:26:35.678: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.100.3.181:8080/dial?request=hostName&protocol=http&host=10.100.2.145&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:217

Issues about this test specifically: #32375
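
The command above is issued against one netserver pod's /dial endpoint, which in turn tries to reach the second pod. A rough manual equivalent, reusing the two pod IPs from the log (the namespace and the name of the pod used to run the command are placeholders):

    kubectl --namespace=e2e-tests-nettest-xxxxx exec netserver-0 -- \
      curl -q -s 'http://10.100.3.181:8080/dial?request=hostName&protocol=http&host=10.100.2.145&port=8080&tries=1'

A healthy pod network returns a response naming the target pod (netserver-1 here); the empty map[] in the failure means the forwarded request never reached it.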

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:382
Timed out after 240.001s.
Expected
    <string>: Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:379

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8870/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:501
Apr 29 16:27:08.123: Missing "Accepted client connection" from log: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:527
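
The Missing "Accepted client connection" failures follow a common pattern: the test port-forwards to the pod, makes a client connection, and then asserts that the pod logged the accepted connection. A hedged manual reproduction, with pod name, namespace, and ports as placeholders rather than values from this run:

    kubectl --namespace=e2e-tests-port-forwarding-xxxxx port-forward pfpod 8080:80 &
    nc -w 1 127.0.0.1 8080 </dev/null          # connect, send nothing, disconnect
    kubectl --namespace=e2e-tests-port-forwarding-xxxxx logs pfpod | grep 'Accepted client connection'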

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:42
Apr 29 16:26:37.242: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:146

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:130
Expected error:
    <*errors.errorString | 0xc4214ca2f0>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.215.229 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-mpstk ss-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused\n [] <nil> 0xc4210a98c0 exit status 1 <nil> <nil> true [0xc42060e258 0xc42060e278 0xc42060e2a8] [0xc42060e258 0xc42060e278 0xc42060e2a8] [0xc42060e268 0xc42060e298] [0x182d750 0x182d750] 0xc420e21ce0 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.215.229 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-mpstk ss-0 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused
     [] <nil> 0xc4210a98c0 exit status 1 <nil> <nil> true [0xc42060e258 0xc42060e278 0xc42060e2a8] [0xc42060e258 0xc42060e278 0xc42060e2a8] [0xc42060e268 0xc42060e298] [0x182d750 0x182d750] 0xc420e21ce0 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: dial tcp 10.128.0.5:10250: getsockopt: connection refused
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:125
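
Failures of this shape, where exec, logs, cadvisor, or proxy calls die with "dial tcp <node-ip>:10250: getsockopt: connection refused", generally mean the kubelet on that node stopped serving. A quick hedged triage (the node name below is a placeholder):

    kubectl get nodes                    # is the affected node NotReady?
    kubectl get --raw "/api/v1/nodes/bootstrap-e2e-minion-group-xxxx:10250/proxy/healthz"
    # and on the node itself: systemctl status kubelet / journalctl -u kubelet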

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:514

Issues about this test specifically: #32371

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8894/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.errorString | 0xc420498770>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:471

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:156
Timed out after 300.000s.
Expected
    <string>: content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    content of file "/etc/configmap-volume/data-1": value-1
    
to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:155
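
Sketch of what this test does, with illustrative names (the real pod simply cats the mounted file in a loop, which is the repeated "value-1" output above): create a ConfigMap, mount it into a pod, replace the value, and wait for the kubelet to resync the volume.

    kubectl create configmap cm-test --from-literal=data-1=value-1
    # ...mount cm-test into a pod at /etc/configmap-volume, then push the update:
    kubectl create configmap cm-test --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
    kubectl exec cm-test-pod -- cat /etc/configmap-volume/data-1    # should eventually show value-2

Here the resync never landed within the 300s window.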

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:123
Apr 30 03:37:52.778: pod e2e-tests-container-probe-58lxf/liveness-exec - expected number of restarts: 1, found restarts: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:405

Issues about this test specifically: #30264
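
For reference, the assertion being made: the container removes /tmp/health partway through its run, the exec probe `cat /tmp/health` starts failing, and the kubelet is expected to restart the container at least once within the observation window. A hedged way to check the same counter, reusing the namespace and pod name from the log:

    kubectl --namespace=e2e-tests-container-probe-58lxf get pod liveness-exec \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'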

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Apr 30 03:40:56.952: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.100.2.127:8080/dial?request=hostName&protocol=udp&host=10.100.1.142&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:217

Issues about this test specifically: #32830

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8948/
Multiple broken tests:

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:409
Expected error:
    <*errors.errorString | 0xc4202de080>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:387

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected error:
    <*errors.StatusError | 0xc421154c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp 10.128.0.3:10250: getsockopt: connection refused'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-bgmm:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp 10.128.0.3:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-bgmm:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp 10.128.0.3:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-bgmm:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #36242

Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:387
Expected error:
    <*errors.errorString | 0xc420278140>: {
        s: "expected \"content of file \\\"/etc/projected-configmap-volume/data-1\\\": value-1\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: content of file \"/etc/projected-configmap-volume/data-1\": value-1",
    }
    expected "content of file \"/etc/projected-configmap-volume/data-1\": value-1" in container output: Expected
        <string>: 
    to contain substring
        <string>: content of file "/etc/projected-configmap-volume/data-1": value-1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2196

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
May  1 06:05:46.691: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.100.1.13 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #35283 #36867

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/8973/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:501
May  1 18:32:54.308: Missing "Accepted client connection" from log: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:527

Failed: [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:341
Timed out after 240.000s.
Expected
    <string>: Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/secret-volumes/create/data-1: open /etc/secret-volumes/create/data-1: no such file or directory, retrying
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:338

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster-logging/sd.go:63
Failed to ingest logs
Expected error:
    <*errors.errorString | 0xc42049e020>: {
        s: "some logs were ingested for 0 pods out of 1",
    }
    some logs were ingested for 0 pods out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster-logging/sd.go:62

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241 #43425

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/9039/
Multiple broken tests:

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
May  3 00:55:21.275: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-bcqxc to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318
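
A timeout here means the endpoints controller never published both pod IPs for the service. A hedged way to inspect the same state by hand, reusing the namespace and service name from the log:

    kubectl --namespace=e2e-tests-services-bcqxc get endpoints endpoint-test2 -o wide
    kubectl --namespace=e2e-tests-services-bcqxc get pods -o wide --show-labels
    # the pods must be Running and Ready, with labels matching the service selector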

Failed: [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:134
Failed to wait client pod terminated: gave up waiting for pod 'nfs-client' to be 'terminated due to deadline exceeded' after 5m0s
Expected error:
    <*errors.errorString | 0xc4207eca30>: {
        s: "gave up waiting for pod 'nfs-client' to be 'terminated due to deadline exceeded' after 5m0s",
    }
    gave up waiting for pod 'nfs-client' to be 'terminated due to deadline exceeded' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume_util.go:207

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:57
Expected error:
    <*errors.StatusError | 0xc421921100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp 10.128.0.4:10250: getsockopt: connection refused'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-6zds:10250/metrics'\") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-6zds:10250)",
            Reason: "InternalError",
            Details: {
                Name: "bootstrap-e2e-minion-group-6zds:10250",
                Group: "",
                Kind: "nodes",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp 10.128.0.4:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-6zds:10250/metrics'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp 10.128.0.4:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-6zds:10250/metrics'") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-6zds:10250)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:55

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:127
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:514

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/9049/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
May  3 05:35:31.914: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.100.2.154:8080/dial?request=hostName&protocol=http&host=10.100.1.132&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:217

Issues about this test specifically: #32375

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
May  3 05:33:35.285: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-m3421 to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:67
Expected error:
    <*errors.StatusError | 0xc421747500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp 10.128.0.3:10250: getsockopt: connection refused'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-90gw:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp 10.128.0.3:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-90gw:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp 10.128.0.3:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-90gw:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35422

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/9062/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
May  3 11:58:16.644: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.100.2.177:8080/hostName
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:132
Expected error:
    <*errors.errorString | 0xc420f0e690>: {
        s: "failed to get logs from var-expansion-238039f5-3032-11e7-a0d7-0242ac110006 for dapi-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods var-expansion-238039f5-3032-11e7-a0d7-0242ac110006)",
    }
    failed to get logs from var-expansion-238039f5-3032-11e7-a0d7-0242ac110006 for dapi-container: an error on the server ("unknown") has prevented the request from succeeding (get pods var-expansion-238039f5-3032-11e7-a0d7-0242ac110006)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2202

Issues about this test specifically: #28503

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:130
Expected error:
    <*errors.errorString | 0xc420464710>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.89.128 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-qrwgz ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.3:10250: getsockopt: connection refused\n [] <nil> 0xc421655b90 exit status 1 <nil> <nil> true [0xc42076c040 0xc42076c060 0xc42076c078] [0xc42076c040 0xc42076c060 0xc42076c078] [0xc42076c058 0xc42076c070] [0x182d7e0 0x182d7e0] 0xc420ccad20 <nil>}:\nCommand stdout:\n\nstderr:\nError from server: error dialing backend: dial tcp 10.128.0.3:10250: getsockopt: connection refused\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.89.128 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-qrwgz ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  Error from server: error dialing backend: dial tcp 10.128.0.3:10250: getsockopt: connection refused
     [] <nil> 0xc421655b90 exit status 1 <nil> <nil> true [0xc42076c040 0xc42076c060 0xc42076c078] [0xc42076c040 0xc42076c060 0xc42076c078] [0xc42076c058 0xc42076c070] [0x182d7e0 0x182d7e0] 0xc420ccad20 <nil>}:
    Command stdout:
    
    stderr:
    Error from server: error dialing backend: dial tcp 10.128.0.3:10250: getsockopt: connection refused
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:108

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
May  3 11:57:12.000: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'dial tcp 10.128.0.3:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-v7tk:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/9078/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:488
May  3 19:52:46.471: Missing "Accepted client connection" from log: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:527

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:65
Expected error:
    <*errors.errorString | 0xc4203c7310>: {
        s: "expected \"[/ep-2 override arguments]\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: [/ep-2 override arguments]",
    }
    expected "[/ep-2 override arguments]" in container output: Expected
        <string>: 
    to contain substring
        <string>: [/ep-2 override arguments]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2202

Issues about this test specifically: #29467

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:382
Timed out after 240.000s.
Expected
    <string>: 
to contain substring
    <string>: Error reading file /etc/projected-secret-volumes/create/data-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:349

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:194
Expected error:
    <*errors.errorString | 0xc420ae60c0>: {
        s: "Failed to execute a successful GET within 15m0s, Last response body for https://35.188.136.41/foo, host foobar.com:\n\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 15m0s, Last response body for https://35.188.136.41/foo, host foobar.com:
    
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ingress_utils.go:922

Issues about this test specifically: #38556
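
The conformance check keeps issuing a GET like the one below until it succeeds or the 15-minute budget runs out; the IP and Host header are taken from the log, and the curl flags are illustrative:

    curl -k -s --connect-timeout 5 -H 'Host: foobar.com' https://35.188.136.41/foo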

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/9110/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:423
May  4 11:12:59.663: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:382
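
This is the scale-to-zero phase timing out: the test scales the StatefulSet down and then waits for status.replicas to reach 0. A hedged manual equivalent, with the namespace as a placeholder and `ss` as the set name used elsewhere in these runs:

    kubectl --namespace=e2e-tests-statefulset-xxxxx scale statefulset ss --replicas=0
    kubectl --namespace=e2e-tests-statefulset-xxxxx get statefulset ss -o jsonpath='{.status.replicas}'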

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc421376f20>: {
        s: "expected \"perms of file \\\"/test-volume\\\": -rwxrwxrwx\" in container output: Expected\n    <string>: failed to open log file \"/var/log/pods/ec21cb98-30f3-11e7-bbeb-42010a800002/test-container_0.log\": open /var/log/pods/ec21cb98-30f3-11e7-bbeb-42010a800002/test-container_0.log: no such file or directory\nto contain substring\n    <string>: perms of file \"/test-volume\": -rwxrwxrwx",
    }
    expected "perms of file \"/test-volume\": -rwxrwxrwx" in container output: Expected
        <string>: failed to open log file "/var/log/pods/ec21cb98-30f3-11e7-bbeb-42010a800002/test-container_0.log": open /var/log/pods/ec21cb98-30f3-11e7-bbeb-42010a800002/test-container_0.log: no such file or directory
    to contain substring
        <string>: perms of file "/test-volume": -rwxrwxrwx
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2202

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:414
Expected error:
    <*errors.errorString | 0xc4212aa410>: {
        s: "failed to get logs from pod-projected-configmaps-d11ae51d-30f3-11e7-a857-0242ac110007 for projected-configmap-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-projected-configmaps-d11ae51d-30f3-11e7-a857-0242ac110007)",
    }
    failed to get logs from pod-projected-configmaps-d11ae51d-30f3-11e7-a857-0242ac110007 for projected-configmap-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-projected-configmaps-d11ae51d-30f3-11e7-a857-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2202

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:66
Expected error:
    <*errors.errorString | 0xc42080fd70>: {
        s: "expected \"content of file \\\"/etc/configmap-volume/path/to/data-2\\\": value-2\" in container output: Expected\n    <string>: failed to open log file \"/var/log/pods/e75047cf-30f3-11e7-bbeb-42010a800002/configmap-volume-test_0.log\": open /var/log/pods/e75047cf-30f3-11e7-bbeb-42010a800002/configmap-volume-test_0.log: no such file or directory\nto contain substring\n    <string>: content of file \"/etc/configmap-volume/path/to/data-2\": value-2",
    }
    expected "content of file \"/etc/configmap-volume/path/to/data-2\": value-2" in container output: Expected
        <string>: failed to open log file "/var/log/pods/e75047cf-30f3-11e7-bbeb-42010a800002/configmap-volume-test_0.log": open /var/log/pods/e75047cf-30f3-11e7-bbeb-42010a800002/configmap-volume-test_0.log: no such file or directory
    to contain substring
        <string>: content of file "/etc/configmap-volume/path/to/data-2": value-2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2202

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.errorString | 0xc4213a8bb0>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:422

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:130
May  4 11:12:44.939: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:304

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:501
May  4 11:02:58.685: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189

@k8s-github-robot added the needs-sig label on May 31, 2017
@k8s-github-robot

This Issue hasn't been active in 93 days. Closing this Issue. Please reopen if you would like to work towards merging this change, if/when the Issue is ready for the next round of review.

cc @k8s-merge-robot @rmmh

You can add the 'keep-open' label to prevent this from happening again, or add a comment to keep it open for another 90 days
