ci-kubernetes-e2e-gke-staging: broken test run #42792

Closed
k8s-github-robot opened this issue Mar 9, 2017 · 30 comments
Labels: area/test-infra, kind/flake, needs-sig

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2339/
Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc421ba0d80>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-08 22:30:05 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-08 22:30:37 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-08 22:30:05 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.3.86 StartTime:2017-03-08 22:30:05 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4210b1f80} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://79288da563f1f91c2c1d9acbe77fc265f2c0eb352b73684a5ec9b5b7bce84518}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-08 22:30:05 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-08 22:30:37 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-08 22:30:05 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.3.86 StartTime:2017-03-08 22:30:05 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4210b1f80} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://79288da563f1f91c2c1d9acbe77fc265f2c0eb352b73684a5ec9b5b7bce84518}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188
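
A quick manual re-check of cluster egress, using the same busybox image as the failing `wget-test` pod; the pod name, URL, and timeout below are placeholders, not the exact values the e2e test uses:

```shell
# Run a one-off busybox pod and try to fetch an external URL.
kubectl run wget-check --image=gcr.io/google_containers/busybox:1.24 \
  --restart=Never -- wget -q -T 30 -O - http://google.com

# Inspect the outcome, then clean up.
kubectl get pod wget-check
kubectl logs wget-check
kubectl delete pod wget-check
```

If this also hangs in Pending or fails, the problem is cluster networking or node health rather than the test itself.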

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Mar  8 20:17:55.300: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516
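
The interesting part here is that the RC's pod never became Ready before the expose step, not the expose itself. A rough manual equivalent (all names are placeholders for whatever the test created):

```shell
# Check the replication controller and its pod.
kubectl get rc,pods -o wide
kubectl describe pod <stuck-pod>            # Events show scheduling / image-pull problems

# Roughly the step the test performs:
kubectl expose rc <rc-name> --name=<svc-name> --port=80 --target-port=80
kubectl get endpoints <svc-name>            # should list the pod IP once it is Ready
```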

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421d9ec60>: {
        s: "error restarting nodes: error running gcloud [compute --project=k8s-jkns-e2e-gke-staging instances reset gke-bootstrap-e2e-default-pool-fa335dd1-l164 gke-bootstrap-e2e-default-pool-fa335dd1-qcl4 gke-bootstrap-e2e-default-pool-fa335dd1-tp3g --zone=us-central1-f]; got error exit status 1, stdout \"\", stderr \"Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-fa335dd1-qcl4].\\nUpdated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-fa335dd1-tp3g].\\nERROR: (gcloud.compute.instances.reset) Some requests did not succeed:\\n - The resource 'projects/k8s-jkns-e2e-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-fa335dd1-l164' is not ready\\n\\n\"\nstdout: \nstderr: ",
    }
    error restarting nodes: error running gcloud [compute --project=k8s-jkns-e2e-gke-staging instances reset gke-bootstrap-e2e-default-pool-fa335dd1-l164 gke-bootstrap-e2e-default-pool-fa335dd1-qcl4 gke-bootstrap-e2e-default-pool-fa335dd1-tp3g --zone=us-central1-f]; got error exit status 1, stdout "", stderr "Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-fa335dd1-qcl4].\nUpdated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-fa335dd1-tp3g].\nERROR: (gcloud.compute.instances.reset) Some requests did not succeed:\n - The resource 'projects/k8s-jkns-e2e-gke-staging/zones/us-central1-f/instances/gke-bootstrap-e2e-default-pool-fa335dd1-l164' is not ready\n\n"
    stdout: 
    stderr: 
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
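
gcloud refused to reset `gke-bootstrap-e2e-default-pool-fa335dd1-l164` because the instance was "not ready", i.e. likely still mid-operation. A sketch of a manual check and retry (not part of the test run), reusing the project/zone/instance names from the log above:

```shell
# Is the instance actually up, and is another operation still pending in the zone?
gcloud compute instances describe gke-bootstrap-e2e-default-pool-fa335dd1-l164 \
  --project=k8s-jkns-e2e-gke-staging --zone=us-central1-f --format='value(status)'
gcloud compute operations list --project=k8s-jkns-e2e-gke-staging --zones=us-central1-f

# Retry the reset once the instance has settled.
gcloud compute instances reset gke-bootstrap-e2e-default-pool-fa335dd1-l164 \
  --project=k8s-jkns-e2e-gke-staging --zone=us-central1-f
```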

Previous issues for this suite: #37182 #38100

@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 labels Mar 9, 2017
@calebamiles calebamiles modified the milestone: v1.6 Mar 9, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2346/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc4228fe060>: {
        s: "expected pod \"pod-68ab1b51-0666-11e7-8d14-0242ac11000b\" success: gave up waiting for pod 'pod-68ab1b51-0666-11e7-8d14-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-68ab1b51-0666-11e7-8d14-0242ac11000b" success: gave up waiting for pod 'pod-68ab1b51-0666-11e7-8d14-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34226

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc42316da10>: {
        s: "expected pod \"pod-6f0c8780-0668-11e7-8d14-0242ac11000b\" success: gave up waiting for pod 'pod-6f0c8780-0668-11e7-8d14-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-6f0c8780-0668-11e7-8d14-0242ac11000b" success: gave up waiting for pod 'pod-6f0c8780-0668-11e7-8d14-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37071

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
Expected error:
    <*errors.errorString | 0xc4228c7aa0>: {
        s: "expected pod \"pod-b01db574-0657-11e7-8d14-0242ac11000b\" success: gave up waiting for pod 'pod-b01db574-0657-11e7-8d14-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-b01db574-0657-11e7-8d14-0242ac11000b" success: gave up waiting for pod 'pod-b01db574-0657-11e7-8d14-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34658
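
All three EmptyDir failures are the same symptom: the test pod never reached "success or failure" within 5m. A first-pass triage sketch (namespace and pod names are placeholders for whatever the e2e framework generated):

```shell
# Find the stuck conformance pod and see why it never ran.
kubectl get pods --all-namespaces | grep e2e-tests
kubectl describe pod <pod-name> -n <e2e-namespace>       # Events: scheduling, image pull, volume mounts
kubectl get events -n <e2e-namespace> --sort-by=.lastTimestamp
kubectl get nodes                                         # a NotReady node often explains all of these at once
```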

@grodrigues3

Seems test-infra related. Moving out of the milestone.

@grodrigues3 grodrigues3 modified the milestones: next-candidate, v1.6 Mar 13, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2353/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Expected error:
    <*errors.StatusError | 0xc421674a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-nx0nt/services/rs-ctrl/proxy/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100\\\"\") has prevented the request from succeeding (post services rs-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rs-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-nx0nt/services/rs-ctrl/proxy/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-nx0nt/services/rs-ctrl/proxy/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100\"") has prevented the request from succeeding (post services rs-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203d2f00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.StatusError | 0xc42265b880>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-prestop-02bvx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-prestop-02bvx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-prestop-02bvx/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceaccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:171
Mar 13 11:08:50.112: Failed to delete pod "downwardapi-volume-19980b3e-0818-11e7-8532-0242ac110007": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-74559/pods/downwardapi-volume-19980b3e-0818-11e7-8532-0242ac110007\"") has prevented the request from succeeding (delete pods downwardapi-volume-19980b3e-0818-11e7-8532-0242ac110007)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:118

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2376/
Multiple broken tests:

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 00:56:31.547: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422ef8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467
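
"All nodes should be ready after test" means the framework found a NotReady node during teardown, which then poisons every subsequent test in this run. A manual look at the node (the node name is a placeholder; project/zone are copied from the earlier gcloud output and may differ for this run):

```shell
kubectl get nodes
kubectl describe node <not-ready-node>      # Conditions and Events show why it went NotReady
gcloud compute instances list --project=k8s-jkns-e2e-gke-staging --zones=us-central1-f
```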

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4213cdd30>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914
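
The scheduler-predicate tests bail out because two kube-system pods on `gke-bootstrap-e2e-default-pool-b2523531-nchh` never became Ready within 5m. To see why, using the pod and node names from the log above:

```shell
kubectl get pods -n kube-system -o wide | grep b2523531-nchh
kubectl describe pod -n kube-system \
  fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh
kubectl describe node gke-bootstrap-e2e-default-pool-b2523531-nchh
```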

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 20 23:30:58.872: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422643678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38308

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 01:10:37.548: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422ef8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 01:20:14.089: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421564278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 01:32:42.965: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221f4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 00:16:12.642: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422644278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 00:43:47.090: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224cd678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 20 23:37:58.323: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4228a9678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 00:59:44.189: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221ecc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4208c3570>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 01:07:17.698: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422864278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 01:03:47.731: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f17678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 00:40:34.992: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421026278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 00:37:08.228: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f24c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 20 23:53:39.113: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42171a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc42038cd00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc423149660>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-b2523531-nchh boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-b2523531-nchh boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 20 23:34:43.092: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420729678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 20 23:50:24.691: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221f4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4229c73a0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42131c8e0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b2523531-nchh gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:07:05 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b2523531-nchh            gke-bootstrap-e2e-default-pool-b2523531-nchh Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:15 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:16 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-20 23:06:21 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 21 00:10:03.240: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421199678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2377/
Multiple broken tests:

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc420376e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42162c270>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4228ec0e0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc420376e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc420a34e20>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4218bccf0>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc420376e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc420376e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc420376e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc42192d4f0>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42290c010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc420a320e0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc420376e80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2406/
Multiple broken tests:

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <*errors.errorString | 0xc4203d2e30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1749

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:141
Expected error:
    <*errors.errorString | 0xc4203d2e30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:103

Issues about this test specifically: #28984 #33827 #36917

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Failed waiting for pod wrapped-volume-race-a94511e4-1572-11e7-989a-0242ac110004-c3071 to enter running state
Expected error:
    <*errors.errorString | 0xc4203d2e30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:383

Issues about this test specifically: #32945

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:547
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.172.211 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-tg0kv run run-test --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] []  0xc421b6c960 Waiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod 
ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false\nWaiting 
for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false
    [the line "Waiting for pod e2e-tests-kubectl-tg0kv/run-test-289hd to be running, status is Pending, pod ready: false" repeats for the remainder of the wait loop; the pod never leaves Pending and the dump is truncated here]
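
For anyone triaging this flake: the pod above stays Pending for the entire wait loop. Assuming access to the test cluster while the run is still live, a quick way to see why a pod is stuck Pending is to check its events and conditions (namespace and pod name taken from the log above):

    # why is the pod still Pending?
    kubectl describe pod run-test-289hd -n e2e-tests-kubectl-tg0kv
    # recent events in that namespace, oldest first
    kubectl get events -n e2e-tests-kubectl-tg0kv --sort-by=.lastTimestamp

This is only a triage sketch; the e2e framework deletes the test namespace when the run finishes, so it helps only against a live or reproduced run.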

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2411/
Multiple broken tests:

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 06:18:49.064: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215f3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26191

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 06:09:05.487: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421190000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28283

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 06:15:30.400: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4228daa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 06:22:10.014: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42114d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 05:46:22.418: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206f7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 05:52:47.412: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422473400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 06:02:29.976: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d96000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 05:43:03.552: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421827400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc422b257c0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-50954ac2-g09l boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-50954ac2-g09l boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 05:55:59.126: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422480000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  1 05:59:12.632: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421826a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438
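
Nearly every failure in this run is the post-test check "All nodes should be ready after test" rather than the test body itself, which usually points at a node going NotReady during the run. A minimal check against a live or reproduced cluster (the node name below is a placeholder) might look like:

    # which nodes are NotReady, and why
    kubectl get nodes
    kubectl describe node <node-name>
    # kube-system pods often show the same symptom (kubelet or network trouble)
    kubectl get pods -n kube-system -o wide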

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2415/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421cf0120>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc420f5d670>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Apr  2 12:12:24.412: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc42038cc50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422305ae0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422f85930>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e8f870>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Apr  2 11:45:37.562: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420eb7c10>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Apr  2 06:03:54.780: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Apr  2 07:10:58.125: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422b57c40>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc422ca8b60>: {
        s: "Only 0 pods started out of 3",
    }
    Only 0 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc42038cc50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc42038cc50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Apr  2 06:59:18.350: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421bf3ab0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc422558000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:128
Expected error:
    <*errors.errorString | 0xc42038cc50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/events.go:73

Issues about this test specifically: #28346

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc420778000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ecfe70>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Apr  2 09:38:04.516: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc42252e000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422e97a40>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42177e2e0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4218eadb0>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a5b5f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Apr  2 06:36:39.919: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642
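
The HPA cases in this run all fail the same way: a 15m wait for the scaled workload to reach the target replica count times out. A minimal sketch for watching what the autoscaler itself sees while one of these tests runs (names in angle brackets are placeholders, not taken from the test):

    # Desired vs. current replicas and the observed CPU utilization.
    kubectl get hpa --namespace=<test-namespace> -w

    # The controller records its scaling decisions as events here.
    kubectl describe hpa <hpa-name> --namespace=<test-namespace>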

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422605ae0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d19db0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-40wck gke-bootstrap-e2e-default-pool-a184a7a9-l0xp Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:31:30 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-02 05:30:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
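
Every SchedulerPredicates failure in this run trips the same precondition: heapster-v1.2.0.1-1382115970-40wck in kube-system never becomes Ready within 5m, so the test never reaches the predicate under test. A minimal sketch for inspecting that pod on a live cluster (pod name taken verbatim from the logs above):

    # Readiness of everything in kube-system.
    kubectl get pods --namespace=kube-system -o wide

    # Events usually say why the heapster/heapster-nanny containers stay unready.
    kubectl describe pod heapster-v1.2.0.1-1382115970-40wck --namespace=kube-system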

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2424/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1087
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.19.129 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-4kpkt] []  <nil> Created e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0\nScaling up e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc421632f00 exit status 1 <nil> <nil> true [0xc42003b560 0xc42003b578 0xc42003b590] [0xc42003b560 0xc42003b578 0xc42003b590] [0xc42003b570 0xc42003b588] [0x9747f0 0x9747f0] 0xc42190ade0 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0\nScaling up e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.19.129 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-4kpkt] []  <nil> Created e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0
    Scaling up e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc421632f00 exit status 1 <nil> <nil> true [0xc42003b560 0xc42003b578 0xc42003b590] [0xc42003b560 0xc42003b578 0xc42003b590] [0xc42003b570 0xc42003b588] [0x9747f0 0x9747f0] 0xc42190ade0 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0
    Scaling up e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-686d1fbc83b609040379e584f26cdca0 up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:169

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc42183a0a0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 8, less than the min required: 9",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 8, less than the min required: 9
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Apr  5 07:18:06.647: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Apr  5 08:37:39.946: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234
starting pod liveness-http in namespace e2e-tests-container-probe-jwx07
Expected error:
    <*errors.errorString | 0xc420386d50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:501
Apr  5 07:07:26.517: Pod test-pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:456

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421f28000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
    <*errors.errorString | 0xc423228050>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:371

Issues about this test specifically: #27232

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Apr  5 08:59:52.281: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2435/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422aa6130>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc4212881b0>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4225d4cc0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc420471160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc420471160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42391a9e0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc420471160>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc421a728b0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421e5e100>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc421dbe020>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2439/
Multiple broken tests:

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:24:14.329: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:03:05.805: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42108e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:16:51.155: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211c4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:11:42.203: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420310c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:43:26.772: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ed2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:45:48.444: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218cec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:57:41.635: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420310c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:38:35.370: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421034c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cc6fb0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:04:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]\nkube-dns-autoscaler-395097547-xm0mk                                gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:03:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt            gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:04:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]
    kube-dns-autoscaler-395097547-xm0mk                                gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:03:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt            gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:26:30.479: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421014c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:13:33.564: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218dac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203d0f50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32830

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:51:03.758: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fa3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:51:10.488: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212f8c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:34:58.597: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218da278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37439

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:11:46.230: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219e6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:58:48.938: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421252278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421805150>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:11:40.674: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421796c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:18:45.648: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219e6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:30:51.025: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c18c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:29:57.837: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42107a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:15:15.746: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218b2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:36:40.897: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:26:54.422: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b43678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:40:51.945: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219ba278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:58:19.185: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e22c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:35:18.020: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a92278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:24:48.082: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42108ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:08:32.620: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213b6c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:54:36.846: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211c4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29461

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:22:12.957: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214b4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:11:18.678: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213ad678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:50:01.710: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4204c3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32646

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:34:40.223: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42170a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:46:42.447: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421456c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 07:59:28.599: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ed3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:27:59.892: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ed2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:42:05.166: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421015678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:09:32.654: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420edf678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b8c820>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:04:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]\nkube-dns-autoscaler-395097547-xm0mk                                gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:03:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt            gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:04:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]
    kube-dns-autoscaler-395097547-xm0mk                                gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:03:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt            gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:16:12.259: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421107678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:47:49.788: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421838c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29710

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:55:05.977: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ab6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35277

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc4203d0f50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34317

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:02:42.418: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f10c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35793

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:26:45.165: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421162278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:38:15.760: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211c4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:51:23.109: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213d0278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:04:43.093: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225de278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b9dd30>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:04:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]\nkube-dns-autoscaler-395097547-xm0mk                                gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:03:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt            gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:04:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]
    kube-dns-autoscaler-395097547-xm0mk                                gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 07:03:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:59:30 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt            gke-bootstrap-e2e-default-pool-fa04b9e0-bjkt Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:43 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-07 06:58:41 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:52:08.190: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420234c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:06:19.131: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421839678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc4203d0f50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33887

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc4203d0f50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:20:15.552: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211f3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 07:53:03.841: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c70278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:31:30.116: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421297678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:09:27.580: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421175678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:38:14.569: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42283c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:54:15.529: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42186a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:48:07.349: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421839678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:44:02.919: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214a1678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27957

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:40:20.788: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218ce278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:49:01.912: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c18c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:25:23.982: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218cec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:05:00.064: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421636c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:06:11.116: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a3ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:43:35.195: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214a8278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:33:33.328: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214ef678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 07:56:17.481: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f1cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:51:56.173: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ee3678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:44:07.107: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f1cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:08:23.745: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421136c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:20:04.482: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214c4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:54:17.265: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42122c278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 09:47:50.539: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42140f678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:28:50.695: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42145a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:14:56.106: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b98c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #42724

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:47:14.058: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421789678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 11:01:30.856: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215ba278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 08:12:57.287: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c71678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 12:59:50.312: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213bac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:32:04.230: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421803678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:51:36.091: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421838c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:47:23.140: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218d0c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 14:44:37.995: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421954c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36554

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 13:37:03.338: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f80278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] DisruptionController should create a PodDisruptionBudget {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  7 10:14:30.278: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b44c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37017

@k8s-github-robot

Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2441/
Multiple broken tests:

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421c82000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc42311a010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc422d6c250>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc4210a6600>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:19, Replicas:9, UpdatedReplicas:6, AvailableReplicas:7, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627255917, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627255917, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627255982, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627255982, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3101951307\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:19, Replicas:9, UpdatedReplicas:6, AvailableReplicas:7, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627255917, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627255917, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63627255982, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63627255982, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3101951307\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628
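
The ProgressDeadlineExceeded reason in the status dump above means the deployment's Progressing condition went False because no new replicas became available within progressDeadlineSeconds. A sketch of pulling that condition out of the deployment status, written against the current apps/v1 client rather than the extensions group shown in this 2017-era log (names here are illustrative only):

    package e2edebug

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // progressingCondition returns the deployment's Progressing condition, if
    // present; Reason "ProgressDeadlineExceeded" is what the failure above reports.
    func progressingCondition(ctx context.Context, client kubernetes.Interface, ns, name string) (*appsv1.DeploymentCondition, error) {
        d, err := client.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return nil, err
        }
        for i := range d.Status.Conditions {
            if d.Status.Conditions[i].Type == appsv1.DeploymentProgressing {
                return &d.Status.Conditions[i], nil
            }
        }
        return nil, nil
    }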

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Apr  8 03:21:15.607: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc421c82120>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2443/
Multiple broken tests:

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:07:59.236: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a36ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 22:03:15.349: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42243cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 18:26:48.252: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422354ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:40:29.675: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422994ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:46:17.106: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a378f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 22:06:29.284: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42065aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc420350e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:12:36.789: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225feef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38174

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:57:24.885: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c66ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 17:33:15.183: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421df44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:06:43.375: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42294d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 17:25:32.192: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221344f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.errorString | 0xc420350e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 17:49:17.243: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421690ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223bcd40>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:01:18.780: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f918f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:02:01.998: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214bd8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29710

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 22:13:14.358: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42236cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:46:54.843: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ea0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc420350e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:28:02.875: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420cb0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4210a10c0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-45747b42-nxkp boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-45747b42-nxkp boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:56:33.849: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223964f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 18:18:09.509: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ee44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 22:29:13.152: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c7cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42299c240>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421afa890>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 17:29:55.064: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214bcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42228c170>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:13:29.595: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b964f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 22:25:45.700: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e4f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26224 #34354

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:24:41.768: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4228758f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:01:15.842: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42177eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 17:52:30.628: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4228bb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 18:45:37.933: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216784f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 17:36:34.792: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42065aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 22:32:42.904: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42177e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30441

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:51:32.526: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222d38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27957

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:33:17.953: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221424f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:43:02.062: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:12:15.713: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211044f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:58:07.086: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42024b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30851

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:59:47.079: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b3cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:05:19.294: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216c58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:46:05.087: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4228ba4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:04:47.301: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220c0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 17:22:07.645: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42157d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 22:10:03.064: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422482ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:22:20.528: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224664f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:54:03.139: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ce58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36794

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:39:50.505: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227f78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421debc50>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:54:53.538: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42243d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:19:05.744: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222eaef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4215656a0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-45747b42-nxkp gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:45:02 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-h8qg3                                 gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:48 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 16:54:40 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-45747b42-nxkp            gke-bootstrap-e2e-default-pool-45747b42-nxkp Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-08 14:43:44 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 20:10:02.737: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222d24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 18:21:22.734: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220278f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:52:45.926: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42291f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 19:43:41.470: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421322ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  8 21:15:48.808: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421feaef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32371

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2446/
Multiple broken tests:

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:02:57.279: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221c78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:26:13.371: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ec04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:12:28.666: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422182ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:13:26.146: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227498f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:19:02.934: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227598f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:58:47.398: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c444f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203d2e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:26:51.960: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421604ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:03:28.702: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215844f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:15:44.139: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cc98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:22:35.371: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209c18f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33987

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:05:58.142: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dde4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:14:55.420: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42252cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:59:45.456: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d68ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:08:19.851: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421decef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:47:19.005: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42196aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:45:53.133: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fbf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:27:33.122: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b6f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:37:58.851: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ae8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:48:32.456: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b198f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:25:44.075: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f5c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:22:16.682: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:05:07.688: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b058f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:42:33.904: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221064f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38174

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:19:03.482: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209e4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:32:12.975: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227278f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:40:42.421: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213224f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:58:28.653: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42119aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:06:36.864: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421210ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:38:34.599: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b68ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:19:20.183: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ebcef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:33:57.221: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218278f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:00:45.552: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42133cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:29:01.689: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42186b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29831

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:44:24.034: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225d84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:50:52.692: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212484f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:10:12.988: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42139d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421cb0720>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-e5027115-t086 boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-e5027115-t086 boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cfb6f0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-e5027115-t086 gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:58 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-e5027115-t086            gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-e5027115-t086 gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:58 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-e5027115-t086            gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:52:47.665: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225cf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421834d00>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-e5027115-t086 gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:58 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-e5027115-t086            gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-e5027115-t086 gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:58 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-e5027115-t086            gke-bootstrap-e2e-default-pool-e5027115-t086 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:10 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:11 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-09 16:58:17 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:24:21.662: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42269cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc4203d2e70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34317

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:11:53.108: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213604f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:41:40.426: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a098f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:09:11.522: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210f64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:31:04.235: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42206f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:49:16.551: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4209e44f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 19:18:22.039: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217f98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 18:30:39.580: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421268ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:51:45.916: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421caf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 20:35:24.353: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221c78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36109

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:44:08.772: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219124f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 21:45:20.969: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208898f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr  9 17:54:24.089: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ba64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

@k8s-github-robot


https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2448/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4224a2b30>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc421206000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc420413cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4229fa140>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4213b6ab0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Expected error:
    <*errors.errorString | 0xc420413cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #33985

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc420413cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:47
Expected error:
    <*errors.errorString | 0xc420413cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #32087

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc420413cd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc4231fa130>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc422566540>: {
        s: "Only 1 pods started out of 3",
    }
    Only 1 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2459/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Apr 13 18:45:54.831: Pods on node gke-bootstrap-e2e-default-pool-cdfb768f-4b72 are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:182

Issues about this test specifically: #36794

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421528ab0>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]\nkube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\nl7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]
    kube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    l7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210e9650>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]\nkube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\nl7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]
    kube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    l7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42072a9e0>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]\nkube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\nl7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]
    kube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    l7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc420fc8dd0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-wqzx2 gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-wqzx2 gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Apr 13 18:51:52.764: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209e19e0>: {
        s: "5 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                   NODE                                         PHASE   GRACE CONDITIONS\nheapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]\nkube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]\nkubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\nl7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]\n",
    }
    5 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                   NODE                                         PHASE   GRACE CONDITIONS
    heapster-v1.2.0.1-1382115970-wqzx2    gke-bootstrap-e2e-default-pool-cdfb768f-4b7b Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:17 -0700 PDT ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:35 -0700 PDT  }]
    kube-dns-2185667875-9c392             gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubedns dnsmasq dnsmasq-metrics healthz]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kube-dns-autoscaler-395097547-2plhl   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:19 -0700 PDT  }]
    kubernetes-dashboard-3543765157-z4k1b gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [kubernetes-dashboard]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    l7-default-backend-2234341178-sp3xh   gke-bootstrap-e2e-default-pool-cdfb768f-4b72 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:27:18 -0700 PDT ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-13 18:26:18 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc42038abd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Apr 13 19:53:50.216: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2467/
Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42038add0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Expected error:
    <*errors.errorString | 0xc4227a8400>: {
        s: "want pod 'test-webserver-ed0b657c-22d8-11e7-9f26-0242ac110008' on 'gke-bootstrap-e2e-default-pool-5a3aa815-0nhb' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-ed0b657c-22d8-11e7-9f26-0242ac110008' on 'gke-bootstrap-e2e-default-pool-5a3aa815-0nhb' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:56

Issues about this test specifically: #29521
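
This one fails earlier than most: the pod never left Pending, so the probe was never exercised. The wait behind the "want pod ... to be 'Running' but was 'Pending'" message only succeeds once the pod's phase reaches Running; a rough sketch of that kind of condition, again using the k8s.io/api/core/v1 types rather than the framework's exact code:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // podRunning is an illustrative wait condition: done only once the pod
    // phase is Running, otherwise keep polling. Failing fast on terminal
    // phases is a choice made for this sketch, since a Succeeded or Failed
    // pod can never become Running.
    func podRunning(pod *v1.Pod) (bool, error) {
        switch pod.Status.Phase {
        case v1.PodRunning:
            return true, nil
        case v1.PodSucceeded, v1.PodFailed:
            return false, fmt.Errorf("pod %q reached terminal phase %q", pod.Name, pod.Status.Phase)
        default:
            return false, nil // still Pending or Unknown; keep waiting
        }
    }

    func main() {
        pod := &v1.Pod{}
        pod.Name = "test-webserver"
        pod.Status.Phase = v1.PodPending
        done, err := podRunning(pod)
        fmt.Println(done, err) // false <nil>: the poll keeps waiting until it times out
    }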

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Apr 16 12:37:56.962: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1995

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc42038add0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Apr 16 15:47:38.673: Node gke-bootstrap-e2e-default-pool-5a3aa815-pn68 did not become ready within 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:291

Issues about this test specifically: #37259

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2471/
Multiple broken tests:

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Apr 17 16:12:56.286: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203d3430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 16:44:03.048: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e67400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29224 #32008 #37564

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 16:59:41.021: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fbd400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:44:36.496: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c6a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 17:04:02.756: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214daa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] Daemon set [Serial] should run and stop simple daemon {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:148
Expected error:
    <*errors.errorString | 0xc4203d3430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_set.go:122

Issues about this test specifically: #31428

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:47:07.275: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218be000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29461

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:41:07.837: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203d3430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:03:44.929: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211d4a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 16:56:01.403: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42158ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:50:23.993: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421395400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28346

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:14:04.740: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a28a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 17:15:01.878: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b60000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:26:20.627: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210e4a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:32:58.596: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42155ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:10:42.540: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420111400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421128590>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:17:24.268: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210fe000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:06:56.239: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420111400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:43:49.650: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f2d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:37:10.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421756a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 17:24:42.256: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421137400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:21:46.287: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217de000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-79197759-f14m\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-79197759-f14m" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26784 #28384 #31935 #33023 #39880 #43412

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:15:32.550: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210b6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541
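
The Test {e2e.go} entry is the umbrella result: ./hack/ginkgo-e2e.sh exits non-zero whenever any spec in the run fails, so it flakes together with the individual tests above. A sketch of re-running a single suspect spec against an existing cluster, assuming extra arguments are passed through to the ginkgo-built e2e.test binary as in the in-tree script (the focus pattern is an example, not taken from this run):

    # Build the e2e test binary, then run only the spec(s) matching the focus regex.
    make WHAT=test/e2e/e2e.test
    ./hack/ginkgo-e2e.sh --ginkgo.focus="Downward API volume should update annotations"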

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 17:18:13.203: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420883400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:14:06.921: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420666a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:22:33.089: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213a9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:37:26.097: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218faa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:00:12.883: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420882a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:10:19.258: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217df400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:30:42.394: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208cb400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Pods should support remote command execution over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 18:35:24.311: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42106c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38308

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:29:37.960: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420cb2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:03:06.894: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42161ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:29:44.069: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42117d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 18:42:22.735: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d64a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42117a150>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:53:48.335: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210dca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 17:21:26.737: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214fe000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:07:27.592: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42107e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:33:56.965: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a47400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #42724

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 17:11:44.414: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b68a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:47:47.859: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421257400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 21:40:29.492: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421389400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 17:08:13.908: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b7a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28084

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:10:11.756: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421174000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:06:20.265: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421267400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 22:26:32.217: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421972000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38174

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 20:33:57.622: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420aac000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2479/
Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4201879d0>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nl7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    l7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883
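
This run adds a second recurring signature: 6 of 15 kube-system pods (fluentd, kube-dns, kube-dns-autoscaler, kube-proxy, the dashboard, and the L7 default backend, all on node gke-bootstrap-e2e-default-pool-5dead69c-j3x5) never returned to Ready within 5 minutes, which then fails every serial SchedulerPredicates spec. A quick triage pass over the system pods and the suspect node, assuming kubectl access (the node name is copied from the log above):

    # Show kube-system pods with their node placement and readiness counts.
    kubectl get pods --namespace kube-system -o wide
    # Conditions and events on the affected node often show why its pods stopped reporting Ready.
    kubectl describe node gke-bootstrap-e2e-default-pool-5dead69c-j3x5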

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42136d250>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nl7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    l7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:10:47.487: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a69400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 09:39:25.887: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f75400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 11:01:16.063: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214af400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 08:08:20.522: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421853400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:54:49.198: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cfea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27360 #28096 #29615 #31775 #35750

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 11:04:46.266: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bdc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:07:38.456: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218caa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:25:46.936: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422583400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:54:47.461: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220f7400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:34:13.181: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215e4a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 08:01:39.475: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225a6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28416 #31055 #33627 #33725 #34206 #37456

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:30:55.315: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422cd4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:36:53.714: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226fc000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:46:39.111: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e2a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:58:19.364: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4204bea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:20:11.346: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f4f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:29:16.017: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e08a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:27:43.115: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206a6000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc422294060>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620
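
The Resize failure timed out waiting for the cluster to reach 4 nodes, which is consistent with one node in the pool staying unhealthy rather than the resize operation itself failing. A simple check of observed versus expected node count, assuming kubectl access to the resized cluster:

    # Count the nodes the API server can see and compare against the expected size (4 in this run).
    kubectl get nodes --no-headers | wc -l
    # The full listing shows which node, if any, is missing or NotReady.
    kubectl get nodes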

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:51:28.932: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421714000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 08:28:11.734: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224ee000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32371

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 11:11:14.046: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fa6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32949

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:45:03.791: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42223b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:32:29.856: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e2a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:13:33.463: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421875400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 08:05:06.965: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422802a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:23:30.537: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224ee000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 11:08:00.484: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e2a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc42038e0e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34317

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:02:09.552: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214fd400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 10:35:44.574: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dce000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4226e7080>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nl7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    l7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:33:40.687: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226a8000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:16:52.748: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422103400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc42038e0e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178
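
Both Granular Checks failures ("pod-Service: udp" and "pod-Service: http") are "timed out waiting for the condition" from the networking utilities, meaning the test pods never reached the service endpoints in time. A crude manual probe of pod-to-Service HTTP connectivity from inside the cluster, assuming kubectl access (the pod name and the service address/port are placeholders to fill in):

    # Run a one-off busybox pod that fetches the service's cluster IP, wait for it to finish,
    # then read its output; "CONNECT_FAILED" indicates the same reachability problem.
    kubectl run net-probe --image=gcr.io/google_containers/busybox:1.24 --restart=Never \
      --command -- sh -c 'wget -qO- http://<service-cluster-ip>:<port> || echo CONNECT_FAILED'
    kubectl logs net-probe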

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:49:54.200: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f10a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:58:23.831: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42157f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38254

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 08:11:47.952: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225a6a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:07:08.532: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d0b400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 09:03:15.638: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422821400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:48:17.419: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4200ee000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c81eb0>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]\nkubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]\nl7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-5dead69c-j3x5 gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:49 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kube-dns-2185667875-sf9tf                                          gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-dns-autoscaler-395097547-mm5w7                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-5dead69c-j3x5            gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:39:28 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 06:14:08 -0700 PDT  }]
    kubernetes-dashboard-3543765157-tv5wj                              gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:59 -0700 PDT  }]
    l7-default-backend-2234341178-8zts8                                gke-bootstrap-e2e-default-pool-5dead69c-j3x5 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:41:01 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-20 03:40:58 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 07:53:06.205: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42210c000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 06:41:37.289: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420cdf400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32639

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 20 09:54:53.273: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cfea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2483/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc42038abe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #36178

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ea17c0>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4230b7080>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421935a00>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421943550>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ddb970>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Expected error:
    <*errors.StatusError | 0xc42105e080>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server has asked for the client to provide credentials (get deployments.extensions test-deployment)",
            Reason: "Unauthorized",
            Details: {
                Name: "test-deployment",
                Group: "extensions",
                Kind: "deployments",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Unauthorized",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 401,
        },
    }
    the server has asked for the client to provide credentials (get deployments.extensions test-deployment)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:257

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421770f40>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42131ba20>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421dcf690>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-pqnxx is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2485/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4230cd0b0>: {
        s: "error while stopping RC: service2: Get https://35.188.86.168/api/v1/namespaces/e2e-tests-services-qfxvs/replicationcontrollers/service2: unexpected EOF",
    }
    error while stopping RC: service2: Get https://35.188.86.168/api/v1/namespaces/e2e-tests-services-qfxvs/replicationcontrollers/service2: unexpected EOF
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225359b0>: {
        s: "Namespace e2e-tests-services-qfxvs is active",
    }
    Namespace e2e-tests-services-qfxvs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42236c130>: {
        s: "Namespace e2e-tests-services-qfxvs is active",
    }
    Namespace e2e-tests-services-qfxvs is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2493/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Apr 24 20:42:32.429: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Apr 24 19:23:40.699: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Apr 24 17:08:01.999: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc4227de000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:319

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc421954030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:188
Expected error:
    <*errors.errorString | 0xc4203acc40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:169

Issues about this test specifically: #42724

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
Apr 24 19:56:43.608: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203acc40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Apr 24 20:25:22.851: Cannot add new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1587

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Apr 24 16:02:45.435: Verified 0 of 1 pods, error: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:351
Expected error:
    <*errors.errorString | 0xc4203acc40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36649

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc420734150>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 3, less than the min required: 5",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 3, less than the min required: 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:101
Expected error:
    <*errors.errorString | 0xc4223323e0>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63628682157, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63628682157, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63628682226, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63628682226, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-3837372172\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63628682157, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63628682157, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63628682226, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63628682226, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-3837372172\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1335

Issues about this test specifically: #31697 #36574 #39785

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2494/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:40:40.599: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bfeef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 05:07:39.741: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421868ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38083

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:14:27.847: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42194e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:33:38.146: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219da4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:52:32.743: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d384f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:46:08.090: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217bf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27957

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 05:25:09.704: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42135e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 02:50:46.320: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e48ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:20:41.029: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4222524f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:24:31.544: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e50ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:05:41.802: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218398f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:49:19.461: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c80ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:46:04.020: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421fe04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:36:52.853: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e484f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:56:22.417: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422540ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:31:14.384: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229a6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:28:01.258: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223924f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 05:21:58.371: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421932ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc420376d10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 02:43:21.505: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218398f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:17:44.962: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d004f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:30:24.771: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422d384f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:21:12.367: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42237cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Apr 25 04:31:56.792: Expected "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" from server, got ""
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:271

Issues about this test specifically: #27680 #38211

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:41:42.555: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d3a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30263

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 05:28:53.489: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213864f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 02:46:32.739: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e484f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:17:07.120: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e7d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 05:10:57.592: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217338f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:13:51.792: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220984f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:23:58.238: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f398f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:49:50.215: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421faaef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:27:11.596: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224e8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32646

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:00:08.085: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226f98f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cfc1d0>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:38:06 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-p7ph5                                 gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 22:53:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-24 22:53:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 22:53:04 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n            gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:38:06 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-p7ph5                                 gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 22:53:05 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-24 22:53:12 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 22:53:04 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n            gke-bootstrap-e2e-default-pool-7cf44bc0-zl8n Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-24 21:37:14 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 02:38:55.269: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224e84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 05:15:33.535: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e3e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 05:02:14.311: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42178c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 04:38:29.845: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422c2f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 25 03:07:27.272: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f08ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2503/
Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421e389d0>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc420350bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42216a010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Expected error:
    <*errors.errorString | 0xc420350bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #30981

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc422e30010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:328

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc421962120>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:370

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40
Expected error:
    <*errors.errorString | 0xc420350bf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:317
Expected error:
    <*errors.errorString | 0xc42269a040>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:300

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422e30010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc422da4960>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:315

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2506/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:21:18.905: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422ef0a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:14:51.528: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215d5400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:00:19.899: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218d0000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:19:15.710: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f56a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:08:43.990: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422567400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34212

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:01:51.638: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42218ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:10:34.583: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225aca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32644

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:17:17.232: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219cf400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:46:32.046: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225cf400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 22:19:29.222: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225ad400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 20:33:39.742: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421618a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:56:13.063: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422818a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:03:33.325: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422edb400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:20:31.255: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ca1400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:55:14.486: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422f2a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:14:03.572: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212ea000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422526670>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nkube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nl7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    kube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    l7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d0c9e0>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nkube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nl7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    kube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    l7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:16:02.467: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422f79400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 20:18:55.197: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219aca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203d2f50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 22:22:55.557: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422184a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35601

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Expected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "Resource usage on node \"gke-bootstrap-e2e-default-pool-4d23717d-pg3d\" is not ready yet",
        },
    ]
    Resource usage on node "gke-bootstrap-e2e-default-pool-4d23717d-pg3d" is not ready yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:105

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209 #43334

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:23:50.839: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42218ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38516

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 22:42:09.186: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422427400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:27:43.815: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a77400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:58:29.799: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422124000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 20:29:02.636: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214b9400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31158 #34303

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:48:45.205: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d0e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 22:16:08.946: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421948000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 20:25:34.927: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421260000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37515

Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:24:32.303: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d95400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29461

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421afa160>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nkube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nl7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    kube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    l7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:52:00.164: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421982000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:08:28.432: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d0f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:12:17.257: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212eb400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:11:40.185: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213b1400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:06:44.618: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d0e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 23:27:04.343: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420379400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 22:12:39.434: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224e2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 29 00:05:26.451: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422f2f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:18:02.855: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219eea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 21:49:43.641: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc423086a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29710

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4208edba0>: {
        s: "6 / 15 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nkube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]\nkubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\nl7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]\n",
    }
    6 / 15 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-4d23717d-pg3d gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kube-dns-2185667875-c6f6p                                          gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:20 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    kube-dns-autoscaler-395097547-b5qzf                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:08 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:53 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-4d23717d-pg3d            gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:27:07 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:33 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:41:30 -0700 PDT  }]
    kubernetes-dashboard-3543765157-pbscj                              gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:09 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    l7-default-backend-2234341178-45th3                                gke-bootstrap-e2e-default-pool-4d23717d-pg3d Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-28 18:42:07 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-28 16:28:52 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 28 22:35:29.484: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422524000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc422f6ee40>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-4d23717d-pg3d boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-4d23717d-pg3d boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Apr 28 18:53:08.906: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2510/
Multiple broken tests:

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:33:25.592: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f4ac78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:24:59.067: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422b53678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 01:36:01.007: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217ef678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31075 #36286 #38041

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
    <*errors.errorString | 0xc4203efe70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 01:45:54.385: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d21678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37071

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 04:30:20.384: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42278b678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42027f360>: {
        s: "5 / 14 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-08680ca1-f256 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:41:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-46zhm                                 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-08680ca1-f256            gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]\nkubernetes-dashboard-3543765157-rjfmk                              gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\nl7-default-backend-2234341178-vrr1t                                gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\n",
    }
    5 / 14 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-08680ca1-f256 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:41:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-46zhm                                 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-08680ca1-f256            gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]
    kubernetes-dashboard-3543765157-rjfmk                              gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    l7-default-backend-2234341178-vrr1t                                gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 04:20:08.758: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d88278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:12:01.057: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421569678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 01:32:43.165: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421222278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:11:05.426: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422292278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:37:50.263: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421692278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 01:42:41.210: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42241e278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 01:55:32.951: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d42c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:04:40.877: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422e6a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32371

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:00:05.125: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219da278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 04:33:31.758: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229ee278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 01:39:19.655: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bd2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31889 #36293

Failed: [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:48:17.463: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4224a4c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29752 #36837

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:43:02.456: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e3cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38254

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:58:10.862: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d20c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:28:12.470: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421569678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:18:32.468: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221ab678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 02:15:18.888: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421d14278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:24:30.930: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4229ee278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:21:13.815: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226fc278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421836ba0>: {
        s: "5 / 14 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-08680ca1-f256 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:41:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-46zhm                                 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-08680ca1-f256            gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]\nkubernetes-dashboard-3543765157-rjfmk                              gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\nl7-default-backend-2234341178-vrr1t                                gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\n",
    }
    5 / 14 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-08680ca1-f256 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:41:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-46zhm                                 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-08680ca1-f256            gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]
    kubernetes-dashboard-3543765157-rjfmk                              gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    l7-default-backend-2234341178-vrr1t                                gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42133e570>: {
        s: "5 / 14 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-08680ca1-f256 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:41:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]\nheapster-v1.2.0.1-1382115970-46zhm                                 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-08680ca1-f256            gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]\nkubernetes-dashboard-3543765157-rjfmk                              gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\nl7-default-backend-2234341178-vrr1t                                gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]\n",
    }
    5 / 14 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-08680ca1-f256 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:41:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]
    heapster-v1.2.0.1-1382115970-46zhm                                 gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-08680ca1-f256            gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:47 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 20:40:50 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-30 00:55:08 -0700 PDT  }]
    kubernetes-dashboard-3543765157-rjfmk                              gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    l7-default-backend-2234341178-vrr1t                                gke-bootstrap-e2e-default-pool-08680ca1-f256 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:31 -0700 PDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-04-29 21:35:28 -0700 PDT  }]
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
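
Both scheduler-predicate failures above trip on the same precondition rather than on scheduling itself: before the test proper, every kube-system pod gets 5 minutes to become Running and Ready, and several system pods on one node never turn Ready. A minimal sketch of that Running-and-Ready check, again against the current k8s.io/api types rather than the framework helper:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isRunningAndReady mirrors the "RUNNING and READY" condition in the
    // failure text above: the pod phase must be Running and its Ready
    // condition must be True. Illustrative sketch only.
    func isRunningAndReady(pod *corev1.Pod) bool {
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Running but not Ready, like the kube-system pods listed above.
        pod := &corev1.Pod{
            Status: corev1.PodStatus{
                Phase: corev1.PodRunning,
                Conditions: []corev1.PodCondition{
                    {Type: corev1.PodReady, Status: corev1.ConditionFalse},
                },
            },
        }
        fmt.Println("running and ready:", isRunningAndReady(pod)) // false
    }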

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:17:58.809: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217fd678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28346

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 04:26:39.253: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e1cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:30:50.846: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421cf6278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37274

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:54:40.326: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421492c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 04:23:22.232: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219db678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:01:29.505: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217fcc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 30 03:34:34.167: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4221ab678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2511/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1253
Apr 30 09:03:08.519: Failed getting pod e2e-test-nginx-pod: Timeout while waiting for pods with labels "run=e2e-test-nginx-pod" to be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1231

Issues about this test specifically: #29834 #35757

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc42148c500>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:501
Apr 30 12:17:48.685: Pod test-pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:456

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Apr 30 08:02:46.519: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203cee00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Apr 30 11:35:24.405: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234
starting pod liveness-http in namespace e2e-tests-container-probe-jn3ww
Expected error:
    <*errors.errorString | 0xc4203cee00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc422598580>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:20, Replicas:8, UpdatedReplicas:4, AvailableReplicas:6, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63629171816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63629171816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63629171877, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63629171877, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-947580331\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:20, Replicas:8, UpdatedReplicas:4, AvailableReplicas:6, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63629171816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63629171816, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63629171877, nsec:0, loc:(*time.Location)(0x3f60f80)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63629171877, nsec:0, loc:(*time.Location)(0x3f60f80)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-947580331\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628
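
The status dump above shows the telltale pair for a stuck rollout: Available is True while Progressing is False with reason ProgressDeadlineExceeded. A hedged sketch of reading that condition off a Deployment object (the logs use the older extensions group; current clusters expose the same condition under apps/v1):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
    )

    // progressDeadlineExceeded reports whether the Progressing condition
    // has flipped to False with reason ProgressDeadlineExceeded, the state
    // shown in the failure text above. Illustrative sketch only.
    func progressDeadlineExceeded(d *appsv1.Deployment) bool {
        for _, cond := range d.Status.Conditions {
            if cond.Type == appsv1.DeploymentProgressing {
                return cond.Status == corev1.ConditionFalse &&
                    cond.Reason == "ProgressDeadlineExceeded"
            }
        }
        return false
    }

    func main() {
        d := &appsv1.Deployment{
            Status: appsv1.DeploymentStatus{
                Conditions: []appsv1.DeploymentCondition{
                    {
                        Type:   appsv1.DeploymentProgressing,
                        Status: corev1.ConditionFalse,
                        Reason: "ProgressDeadlineExceeded",
                    },
                },
            },
        }
        fmt.Println(progressDeadlineExceeded(d)) // true
    }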

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:176
Apr 30 10:01:11.116: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38254

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2518/
Multiple broken tests:

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
    <*errors.errorString | 0xc421ad2690>: {
        s: "error waiting for deployment \"test-rolling-update-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rolling-update-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:333

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc421c20180>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:914
May  2 16:24:20.922: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
May  2 16:55:02.713: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1587

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
May  2 12:11:03.319: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-staging/2523/
Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc42266f7d0>: {
        s: "expected pod \"pod-8f0e64ef-30c3-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-8f0e64ef-30c3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-8f0e64ef-30c3-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-8f0e64ef-30c3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #31400
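
This failure, and most of the others below from the same run, share one symptom: a short-lived test pod never reaches the Succeeded or Failed phase within the 5-minute timeout. A minimal sketch of that kind of wait loop, built on apimachinery's wait helper with the pod lookup injected so it stays independent of any particular client-go Get signature (illustrative only, not the e2e framework's code):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForPodCompletion polls until the pod reported by get reaches the
    // Succeeded or Failed phase, or the timeout expires. The lookup is a
    // plain function so the sketch avoids version-specific client-go calls.
    func waitForPodCompletion(get func() (*corev1.Pod, error), timeout time.Duration) (corev1.PodPhase, error) {
        var phase corev1.PodPhase
        err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := get()
            if err != nil {
                return false, err
            }
            phase = pod.Status.Phase
            return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
        })
        return phase, err
    }

    func main() {
        // Fake lookup that always reports a Pending pod, so the wait times
        // out just like the "gave up waiting ... after 5m0s" failures here
        // (a short timeout keeps the example quick).
        get := func() (*corev1.Pod, error) {
            return &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}, nil
        }
        phase, err := waitForPodCompletion(get, 5*time.Second)
        fmt.Println(phase, err) // Pending timed out waiting for the condition
    }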

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc42269ef90>: {
        s: "expected pod \"pod-secrets-287631d4-30d0-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-secrets-287631d4-30d0-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-287631d4-30d0-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-secrets-287631d4-30d0-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421c44010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:162
Expected error:
    <*errors.errorString | 0xc42266fb00>: {
        s: "expected pod \"downwardapi-volume-f1d63807-30d4-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'downwardapi-volume-f1d63807-30d4-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-f1d63807-30d4-11e7-9c89-0242ac110003" success: gave up waiting for pod 'downwardapi-volume-f1d63807-30d4-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36694

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc42188f010>: {
        s: "expected pod \"pod-333320d7-30a6-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-333320d7-30a6-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-333320d7-30a6-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-333320d7-30a6-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37500

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
May  4 00:59:25.443: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:141
May  4 05:58:35.559: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc421c45f10>: {
        s: "expected pod \"pod-configmaps-7b8e0802-30b1-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-configmaps-7b8e0802-30b1-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-7b8e0802-30b1-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-configmaps-7b8e0802-30b1-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #32949

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc421babf30>: {
        s: "expected pod \"downwardapi-volume-93ba42e6-30ae-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'downwardapi-volume-93ba42e6-30ae-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-93ba42e6-30ae-11e7-9c89-0242ac110003" success: gave up waiting for pod 'downwardapi-volume-93ba42e6-30ae-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37531

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Expected error:
    <*errors.errorString | 0xc421baaba0>: {
        s: "expected pod \"downwardapi-volume-94295aa7-30b0-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'downwardapi-volume-94295aa7-30b0-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-94295aa7-30b0-11e7-9c89-0242ac110003" success: gave up waiting for pod 'downwardapi-volume-94295aa7-30b0-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36300

Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:69
Expected error:
    <*errors.errorString | 0xc421c34050>: {
        s: "expected pod \"pod-8e451fd1-30b6-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-8e451fd1-30b6-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-8e451fd1-30b6-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-8e451fd1-30b6-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #36183

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc42264a0e0>: {
        s: "expected pod \"pod-6cd9d1e6-30bd-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-6cd9d1e6-30bd-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-6cd9d1e6-30bd-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-6cd9d1e6-30bd-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc42268dd10>: {
        s: "expected pod \"pod-cbaea055-30d5-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-cbaea055-30d5-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-cbaea055-30d5-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-cbaea055-30d5-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:501
May  4 07:07:19.135: Pod test-pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:456

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
    <*errors.errorString | 0xc42268c220>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc42269f7a0>: {
        s: "expected pod \"pod-d896b52c-30d3-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-d896b52c-30d3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-d896b52c-30d3-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-d896b52c-30d3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #33987

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:196
Expected error:
    <*errors.errorString | 0xc42266bcf0>: {
        s: "expected pod \"downwardapi-volume-c6eb9bb9-30cd-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'downwardapi-volume-c6eb9bb9-30cd-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-c6eb9bb9-30cd-11e7-9c89-0242ac110003" success: gave up waiting for pod 'downwardapi-volume-c6eb9bb9-30cd-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc4217046a0>: {
        s: "expected pod \"pod-configmaps-a83fbe7e-30b5-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-configmaps-a83fbe7e-30b5-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-a83fbe7e-30b5-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-configmaps-a83fbe7e-30b5-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #27245

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:180
Expected error:
    <*errors.errorString | 0xc4210cc2a0>: {
        s: "expected pod \"downwardapi-volume-42dc413b-30a5-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'downwardapi-volume-42dc413b-30a5-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-42dc413b-30a5-11e7-9c89-0242ac110003" success: gave up waiting for pod 'downwardapi-volume-42dc413b-30a5-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
May  4 04:55:52.917: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc4226a4140>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc42282c150>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc42269f4a0>: {
        s: "expected pod \"pod-0c95cca9-30d3-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-0c95cca9-30d3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-0c95cca9-30d3-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-0c95cca9-30d3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #37439

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc422698300>: {
        s: "expected pod \"pod-secrets-f1958f13-30d0-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'pod-secrets-f1958f13-30d0-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-f1958f13-30d0-11e7-9c89-0242ac110003" success: gave up waiting for pod 'pod-secrets-f1958f13-30d0-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

Issues about this test specifically: #35256

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc421c35590>: {
        s: "expected pod \"downwardapi-volume-aaaa5645-30b3-11e7-9c89-0242ac110003\" success: gave up waiting for pod 'downwardapi-volume-aaaa5645-30b3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-aaaa5645-30b3-11e7-9c89-0242ac110003" success: gave up waiting for pod 'downwardapi-volume-aaaa5645-30b3-11e7-9c89-0242ac110003' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2177

@k8s-github-robot added the `needs-sig` label on May 31, 2017
@k8s-github-robot

This Issue hasn't been active in 93 days. Closing this Issue. Please reopen if you would like to work towards merging this change, if/when the Issue is ready for the next round of review.

cc @k8s-merge-robot @rmmh

You can add 'keep-open' label to prevent this from happening again, or add a comment to keep it open another 90 days
