
ci-kubernetes-e2e-gci-gce: broken test run #43072

Closed
k8s-github-robot opened this issue Mar 14, 2017 · 22 comments
Labels: kind/flake (Categorizes issue or PR as related to a flaky test.), sig/testing (Categorizes an issue or PR as relevant to SIG Testing.)

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5307/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Mar 14 04:57:50.407: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
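
This failure is the generic symptom of the HPA never driving the ReplicationController to the expected replica count within the 15-minute wait. As a hedged debugging sketch (not part of the test; the namespace and HPA name are placeholders, not values from this run), the autoscaler and its target can be inspected directly:

    # Hedged debugging sketch; <ns> and <hpa-name> are placeholders.
    # Shows current vs. desired replicas and the HPA's observed CPU usage.
    kubectl get hpa,rc,pods -n <ns>
    kubectl describe hpa <hpa-name> -n <ns>
    # If cluster metrics (heapster in 1.6-era clusters) are available:
    kubectl top pods -n <ns>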

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:43
Mar 14 04:43:32.056: dig result did not match: []string{";; connection timed out; no servers could be reached"} after 30s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_common.go:94

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420a31000>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-14 04:48:11 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-14 04:48:43 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-14 04:48:11 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.6 PodIP:10.180.2.196 StartTime:2017-03-14 04:48:11 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-03-14 04:48:42 -0700 PDT,ContainerID:docker://7d235c6a12882501b2a3a7771c599427dc8be60787b26431557bdfc6eaa9d271,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://7d235c6a12882501b2a3a7771c599427dc8be60787b26431557bdfc6eaa9d271}] QOSClass:BestEffort}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-14 04:48:11 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-14 04:48:43 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-14 04:48:11 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.6 PodIP:10.180.2.196 StartTime:2017-03-14 04:48:11 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-03-14 04:48:42 -0700 PDT,ContainerID:docker://7d235c6a12882501b2a3a7771c599427dc8be60787b26431557bdfc6eaa9d271,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://7d235c6a12882501b2a3a7771c599427dc8be60787b26431557bdfc6eaa9d271}] QOSClass:BestEffort}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188
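
The connectivity test fails when a busybox pod's wget to an external address exits non-zero (exit code 1 in the status dump above). A hedged manual equivalent using the same image named in the log; the exact URL and flags used by the test may differ:

    # Hedged manual check, not the test's exact command: run a one-off busybox
    # pod and fetch an external host from inside the cluster.
    kubectl run wget-check --image=gcr.io/google_containers/busybox:1.24 \
        --restart=Never --rm -i --command -- \
        sh -c 'wget -q -O- http://google.com >/dev/null && echo reachable'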

Previous issues for this suite: #36933 #37062 #42159

@k8s-github-robot added the kind/flake and priority/P2 labels on Mar 14, 2017
@calebamiles added this to the v1.6 milestone on Mar 14, 2017
@ethernetdan

Some flakes, but otherwise the suite seems stable; moving to 1.7.

@ethernetdan modified the milestones: v1.7, v1.6 on Mar 14, 2017
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5354/
Multiple broken tests:

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc4211fd4e0>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63625200942, nsec:383809024, loc:(*time.Location)(0x497fae0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625200942, nsec:383809183, loc:(*time.Location)(0x497fae0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63625200942, nsec:383809024, loc:(*time.Location)(0x497fae0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625200942, nsec:383809183, loc:(*time.Location)(0x497fae0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:322

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
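
The skip expression in this invocation means the job runs only the default test set, excluding anything tagged [Slow], [Serial], [Disruptive], [Flaky], or any [Feature:...]. A minimal sketch of reproducing the same selection locally, assuming a built Kubernetes checkout and credentials for a test cluster (environment setup is omitted):

    # Sketch only: run the same non-Slow/Serial/Disruptive/Flaky/Feature subset
    # this job uses, from a built kubernetes source tree.
    cd "$GOPATH/src/k8s.io/kubernetes"
    ./hack/ginkgo-e2e.sh \
        '--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'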

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:373
Mar 15 11:57:22.393: Cannot added new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1681

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.errorString | 0xc4213fe6f0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:396

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5385/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:492
Mar 16 05:58:20.310: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:385

Issues about this test specifically: #40977

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:513
Mar 16 05:52:38.909: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:385

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:481
Mar 16 06:00:42.649: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:315

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5513/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:43
Mar 19 22:12:21.991: dig result did not match: []string{";; connection timed out; no servers could be reached"} after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_common.go:94

Issues about this test specifically: #43100

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:373
Expected error:
    <*errors.errorString | 0xc4216f62b0>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1673

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:710
Timed out after 300.000s.
Expected
    <string>: Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    
to contain substring
    <string>: value-3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:708

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:509
Timed out after 300.000s.
Expected
    <string>: content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    content of file "/etc/projected-configmap-volume/data-1": value-1
    
to contain substring
    <string>: value-2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:508
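
Both projected-volume failures above reduce to the pod never observing the updated ConfigMap key on disk within the 300s poll. A hedged manual check against a live test pod (the namespace and pod name are placeholders):

    # Hedged manual check; <ns> and <pod> are placeholders. Compares what the
    # kubelet has projected on disk with the ConfigMap contents in the API.
    kubectl exec -n <ns> <pod> -- cat /etc/projected-configmap-volume/data-1
    kubectl get configmap -n <ns> -o yaml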

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5642/
Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc420eafa20>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-23 09:51:29 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-23 09:52:01 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-23 09:51:29 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.5 PodIP:10.180.3.57 StartTime:2017-03-23 09:51:29 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-03-23 09:52:01 -0700 PDT,ContainerID:docker://e607053be0f5b2195155195aaa6a914282e231500c4ef839fab52807b334296f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://e607053be0f5b2195155195aaa6a914282e231500c4ef839fab52807b334296f}] QOSClass:BestEffort}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-23 09:51:29 -0700 PDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-23 09:52:01 -0700 PDT Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-03-23 09:51:29 -0700 PDT Reason: Message:}] Message: Reason: HostIP:10.240.0.5 PodIP:10.180.3.57 StartTime:2017-03-23 09:51:29 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:2017-03-23 09:52:01 -0700 PDT,ContainerID:docker://e607053be0f5b2195155195aaa6a914282e231500c4ef839fab52807b334296f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://e607053be0f5b2195155195aaa6a914282e231500c4ef839fab52807b334296f}] QOSClass:BestEffort}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:195
Mar 23 09:50:38.059: dig result did not match: []string{} after 30s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_common.go:94

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:448
Expected error:
    <*errors.errorString | 0xc4203d2eb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230

Issues about this test specifically: #28337

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5783/
Multiple broken tests:

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-s5kng" to be ready
Expected error:
    <*errors.errorString | 0xc4204751a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:343
Mar 26 17:48:23.581: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:303

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:266
Expected error:
    <*errors.errorString | 0xc4203d03c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:664

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:66
Expected error:
    <*errors.StatusError | 0xc4204a2680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.240.0.2:40570->10.240.0.3:10250: read: connection reset by peer'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-70bn:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.240.0.2:40570->10.240.0.3:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-70bn:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.240.0.2:40570->10.240.0.3:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-70bn:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35601

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-vxpj8" to be ready
Expected error:
    <*errors.errorString | 0xc42040a7a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32753 #34676

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5795/
Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:318
Expected error:
    <*errors.errorString | 0xc4214f0b80>: {
        s: "Only 2 pods started out of 3",
    }
    Only 2 pods started out of 3
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:277

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*errors.errorString | 0xc420414d20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154

Issues about this test specifically: #32023

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Expected error:
    <*errors.errorString | 0xc421012f50>: {
        s: "Only 1 pods started out of 2",
    }
    Only 1 pods started out of 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:422

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:343
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.76.121 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-w09mz] []  0xc420bb0360 Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n error: timed out waiting for any update progress to be made\n [] <nil> 0xc4213ef800 exit status 1 <nil> <nil> true [0xc421590120 0xc421590148 0xc421590158] [0xc421590120 0xc421590148 0xc421590158] [0xc421590128 0xc421590140 0xc421590150] [0xc24d40 0xc24e40 0xc24e40] 0xc4211e2c60 <nil>}:\nCommand stdout:\nCreated update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\n\nstderr:\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.76.121 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-w09mz] []  0xc420bb0360 Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
     error: timed out waiting for any update progress to be made
     [] <nil> 0xc4213ef800 exit status 1 <nil> <nil> true [0xc421590120 0xc421590148 0xc421590158] [0xc421590120 0xc421590148 0xc421590158] [0xc421590128 0xc421590140 0xc421590150] [0xc24d40 0xc24e40 0xc24e40] 0xc4211e2c60 <nil>}:
    Command stdout:
    Created update-demo-kitten
    Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling update-demo-kitten up to 1
    
    stderr:
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2097

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:441
Expected error:
    <*errors.errorString | 0xc4203a0590>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #33985

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
    <*errors.errorString | 0xc42028e820>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:19, ReadyReplicas:18, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63626197456, nsec:0, loc:(*time.Location)(0x499b9e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626197456, nsec:0, loc:(*time.Location)(0x499b9e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:19, ReadyReplicas:18, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63626197456, nsec:0, loc:(*time.Location)(0x499b9e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626197456, nsec:0, loc:(*time.Location)(0x499b9e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1007

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5834/
Multiple broken tests:

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
failed to execute command in pod test-host-network-pod, container busybox-2: Internal error occurred: error executing command in container: Error response from daemon: No such exec instance 'f4617c2dcc128a466060e7f06fb8af2789c335f884421d1b5d3b15b64b2540fd' found in daemon
Expected error:
    <*errors.errorString | 0xc42102a8c0>: {
        s: "Internal error occurred: error executing command in container: Error response from daemon: No such exec instance 'f4617c2dcc128a466060e7f06fb8af2789c335f884421d1b5d3b15b64b2540fd' found in daemon",
    }
    Internal error occurred: error executing command in container: Error response from daemon: No such exec instance 'f4617c2dcc128a466060e7f06fb8af2789c335f884421d1b5d3b15b64b2540fd' found in daemon
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:107

Issues about this test specifically: #37502

Failed: [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:477
Mar 27 22:58:17.617: remaining rs post mortem: &v1beta1.ReplicaSetList{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"/apis/extensions/v1beta1/namespaces/e2e-tests-gc-ggbb6/replicasets", ResourceVersion:"11275"}, Items:[]v1beta1.ReplicaSet(nil)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/garbage_collector.go:457

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Timed out after 300.001s.
Expected
    <string>: Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:336

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5887/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:492
Mar 29 06:26:33.869: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:385

Issues about this test specifically: #40977

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects no client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:508
Mar 29 06:22:52.849: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:214

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects no client request should support a client that connects, sends data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:487
Mar 29 06:24:27.857: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:214

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:945
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:944

Issues about this test specifically: #28493 #29964

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5939/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Mar 30 12:57:50.248: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Mar 30 12:55:36.865: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc420e7eb10>: {
        s: "deployment \"test-recreate-deployment\" is running new pods alongside old pods: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:6, UpdatedReplicas:3, ReadyReplicas:3, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63626499631, nsec:0, loc:(*time.Location)(0x49cc260)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626499631, nsec:0, loc:(*time.Location)(0x49cc260)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    deployment "test-recreate-deployment" is running new pods alongside old pods: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:6, UpdatedReplicas:3, ReadyReplicas:3, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63626499631, nsec:0, loc:(*time.Location)(0x49cc260)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626499631, nsec:0, loc:(*time.Location)(0x49cc260)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:333

Issues about this test specifically: #29197 #36289 #36598 #38528

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5940/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:363
Expected error:
    <*errors.errorString | 0xc42042ce10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:247

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*errors.errorString | 0xc4203d22d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-8k44v" to be ready
Expected error:
    <*errors.errorString | 0xc4203d0620>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32644

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:194
Expected error:
    <*errors.errorString | 0xc420fda600>: {
        s: "Failed to execute a successful GET within 15m0s, Last response body for http://35.188.60.178/foo, host foo.bar.com:\n<html>\r\n<head><title>503 Service Temporarily Unavailable</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>503 Service Temporarily Unavailable</h1></center>\r\n<hr><center>nginx/1.11.9</center>\r\n</body>\r\n</html>\r\n\n\ntimed out waiting for the condition\n",
    }
    Failed to execute a successful GET within 15m0s, Last response body for http://35.188.60.178/foo, host foo.bar.com:
    <html>
    <head><title>503 Service Temporarily Unavailable</title></head>
    <body bgcolor="white">
    <center><h1>503 Service Temporarily Unavailable</h1></center>
    <hr><center>nginx/1.11.9</center>
    </body>
    </html>
    
    
    timed out waiting for the condition
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ingress_utils.go:925

Issues about this test specifically: #38556

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:75
Waiting for pods in namespace "e2e-tests-disruption-5zbf2" to be ready
Expected error:
    <*errors.errorString | 0xc4203d2a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:448
Expected error:
    <*errors.errorString | 0xc4204502d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:247

Issues about this test specifically: #28337

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-hf9j2" to be ready
Expected error:
    <*errors.errorString | 0xc42043eb70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257

Issues about this test specifically: #32753 #34676

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5960/
Multiple broken tests:

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc4210e4a00>: {
        s: "deployment \"test-recreate-deployment\" is running new pods alongside old pods: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:6, UpdatedReplicas:3, ReadyReplicas:3, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63626546121, nsec:0, loc:(*time.Location)(0x49d23e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626546121, nsec:0, loc:(*time.Location)(0x49d23e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    deployment "test-recreate-deployment" is running new pods alongside old pods: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:6, UpdatedReplicas:3, ReadyReplicas:3, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63626546121, nsec:0, loc:(*time.Location)(0x49d23e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626546121, nsec:0, loc:(*time.Location)(0x49d23e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:333

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/disks/bootstrap-e2e-b66200e6-15ed-11e7-9d3b-0242ac110009' is already being used by 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/instances/bootstrap-e2e-minion-group-wfrv', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/disks/bootstrap-e2e-b66200e6-15ed-11e7-9d3b-0242ac110009' is already being used by 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/instances/bootstrap-e2e-minion-group-wfrv', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:628

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-b66200e6-15ed-11e7-9d3b-0242ac110009              us-central1-f  10       pd-ssd       READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153
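
DiffResources flags the PD left behind by the failed mount test above. A hedged cleanup sketch using the project, zone, and disk name from this log (the filter syntax is illustrative):

    # Hedged cleanup sketch using values from the log above. Verify the disk is
    # detached before deleting it.
    gcloud compute disks list --project k8s-jkns-e2e-gce-gci \
        --filter="name~'^bootstrap-e2e-' AND zone:us-central1-f"
    gcloud compute disks delete bootstrap-e2e-b66200e6-15ed-11e7-9d3b-0242ac110009 \
        --project k8s-jkns-e2e-gce-gci --zone us-central1-f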

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/5986/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-bdad1bbc-166b-11e7-aa06-0242ac110008              us-central1-f  10       pd-ssd       READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
Expected error:
    <*errors.errorString | 0xc420dcf9f0>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: 1:M 31 Mar 23:38:57.339 * The server is now ready to accept connections on port 6379\n    \nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: 1:M 31 Mar 23:38:57.339 * The server is now ready to accept connections on port 6379
        
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2184

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/disks/bootstrap-e2e-bdad1bbc-166b-11e7-aa06-0242ac110008' is already being used by 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/instances/bootstrap-e2e-minion-group-cfpp', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/disks/bootstrap-e2e-bdad1bbc-166b-11e7-aa06-0242ac110008' is already being used by 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/instances/bootstrap-e2e-minion-group-cfpp', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:540

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/6164/
Multiple broken tests:

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:497
failed: finding the contents of the mounted file.
Expected error:
    <*errors.errorString | 0xc420a03980>: {
        s: "Failed to find \"Hello from GlusterFS!\", last result: \"\"",
    }
    Failed to find "Hello from GlusterFS!", last result: ""
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:255
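
Triage note: when the GlusterFS check reads back an empty string like this, the first question is whether the server and client pods ever became ready and what events the mount produced. The namespace and pod names below are placeholders, since the e2e framework generates them per run:

    kubectl get pods --namespace=<e2e-namespace> -o wide
    kubectl describe pod <gluster-client-pod> --namespace=<e2e-namespace>
    kubectl get events --namespace=<e2e-namespace>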

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/disks/bootstrap-e2e-a83878e0-195c-11e7-bf27-0242ac110009' is already being used by 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/instances/bootstrap-e2e-minion-group-mp4q', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/disks/bootstrap-e2e-a83878e0-195c-11e7-bf27-0242ac110009' is already being used by 'projects/k8s-jkns-e2e-gce-gci/zones/us-central1-f/instances/bootstrap-e2e-minion-group-mp4q', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:540

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-a83878e0-195c-11e7-bf27-0242ac110009  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/6314/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc420477810>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:527

Issues about this test specifically: #32375

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:20:49.897: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341
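
Triage note: nearly every failure in this run is the same post-test check tripping over bootstrap-e2e-minion-group-d4h5 being NotReady, so the individual test names matter less than the state of that node. A quick way to confirm against the live cluster (node events land in the default namespace):

    kubectl get nodes
    kubectl describe node bootstrap-e2e-minion-group-d4h5
    kubectl get events --namespace=default | grep bootstrap-e2e-minion-group-d4h5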

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:18:16.242: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:27:35.638: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28084

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:30:28.460: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #35297

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:11:54.845: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:44:52.325: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:24:00.136: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #35422

Failed: [k8s.io] ReplicaSet should adopt matching pods on creation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:30:36.207: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:14:12.250: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:27:00.863: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:38:26.485: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:18:34.109: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:16:34.620: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:43:17.914: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28584 #32045 #34833 #35429 #35442 #35461 #36969

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:28.606: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #30981

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:14:37.882: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:26:46.740: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28503

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:47:08.897: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:31:04.332: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:35:21.964: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:11:50.925: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:56.132: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:30:33.788: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:27:18.219: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:28:30.486: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #29831

Failed: [k8s.io] HostPath should support r/w [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:31:17.030: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:34.969: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #33008

Failed: [k8s.io] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:43:47.612: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:22:01.561: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241 #43425

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Apr  7 14:12:38.751: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'dial tcp 10.240.0.5:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-d4h5:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371
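
Triage note: this is the most direct clue in the run; the kubelet on bootstrap-e2e-minion-group-d4h5 stopped answering on 10250, which would also explain the NotReady condition behind all the other failures. If the VM is still up, a hedged check from the node itself, assuming SSH access and taking the zone and project from the other runs in this issue:

    gcloud compute ssh bootstrap-e2e-minion-group-d4h5 \
        --zone=us-central1-f --project=k8s-jkns-e2e-gce-gci \
        --command='sudo systemctl status kubelet; sudo journalctl -u kubelet --no-pager | tail -n 100'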

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:39:33.929: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:14:49.937: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:11:46.155: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:05.692: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #38516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:24:12.615: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Firewall rule should have correct firewall rules for e2e cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:34:07.246: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:09.300: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:24:53.248: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:36.806: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:03.736: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.errorString | 0xc4208089e0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:422

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
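
Triage note: here the autoscaling test never got its single resource-consumer pod running, which on this run is more consistent with the NotReady node than with the HPA logic itself. For a run where the HPA really is suspect, the usual inspection looks like the sketch below; the namespace and consumer name are placeholders, since the test generates them:

    kubectl get hpa,rc,pods --namespace=<e2e-namespace> -o wide
    kubectl describe hpa <consumer-name> --namespace=<e2e-namespace>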

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:43:26.102: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #37914

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:36:12.319: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:12:10.312: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #30632

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:14:40.144: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #36288 #36913

Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:22:51.780: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:12.706: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:11:49.021: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #35601

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:28:40.839: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #34520

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:12:34.549: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:25:15.138: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #30264

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:30:01.006: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #31938

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:24:54.531: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26191

Failed: [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:20:08.520: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #32639

Failed: [k8s.io] Projected should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:40.231: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:03.875: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] InitContainer should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:10.620: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #31873

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:26:58.914: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:45:58.416: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:11:29.169: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:23:59.409: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:18:54.111: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:31:45.588: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:19:39.471: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #35579

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:26:06.795: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28003

Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:207
Expected error:
    <*errors.errorString | 0xc421158ed0>: {
        s: "err waiting for DNS replicas to satisfy 3, got 4: timed out waiting for the condition",
    }
    err waiting for DNS replicas to satisfy 3, got 4: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/dns_autoscaling.go:206

Issues about this test specifically: #36569 #38446
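
Triage note: the autoscaler held kube-dns at 4 replicas when the test expected it to settle at 3; one possibility is that the NotReady node skewed the node count the autoscaler bases its target on. The parameters and current replica count can be read back as below; the ConfigMap name and pod label are the addon defaults and should be treated as assumptions:

    kubectl get configmap kube-dns-autoscaler --namespace=kube-system -o yaml
    kubectl get deployment kube-dns --namespace=kube-system
    kubectl get pods --namespace=kube-system -l k8s-app=kube-dns-autoscaler -o wide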

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:43:44.086: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203aa5e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:527

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:46:38.481: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:57.140: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26126 #30653 #36408

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:20:07.777: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:12:37.708: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:40:05.716: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:18:53.862: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:19:52.580: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:32.558: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:20:07.313: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] ReplicationController should adopt matching pods on creation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:23:54.155: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:27:17.118: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #30263

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:35:14.201: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:85
Expected error:
    <*errors.errorString | 0xc42043cc70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:527

Issues about this test specifically: #32436 #37267

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:240
Apr  7 14:47:23.682: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:22:18.342: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:28:47.199: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:32:36.861: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:08.697: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:32:53.231: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #27079

Failed: [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:40:31.728: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc420347360>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32830

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:33:44.119: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #29710

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:15.457: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:31:07.398: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #36649

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:375
Expected error:
    <*errors.errorString | 0xc4213a0490>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1718

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
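
Triage note: the guestbook failure is the same pattern, frontend pods never reached Running. Since the label selector is quoted in the error, checking where those pods landed is straightforward; only the namespace is a placeholder here, as e2e namespaces are generated per test:

    kubectl get pods -l app=guestbook,tier=frontend --namespace=<e2e-namespace> -o wide
    kubectl describe pods -l app=guestbook,tier=frontend --namespace=<e2e-namespace>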

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects no client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:11:58.394: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:29:40.932: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:59.590: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:17:52.355: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #36706

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:42:34.943: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #29050

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:17:50.749: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:20:34.698: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:46:16.444: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #29513

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:14:53.305: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:30:56.916: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Projected should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:888
Timed out after 120.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"unknown\") has prevented the request from succeeding (get pods labelsupdate3b7128b5-1bd6-11e7-8010-0242ac110008)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc421252140), Code:500}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:887

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:17:24.513: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:22:24.439: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:24:02.737: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:37:19.458: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:46:54.889: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:35:05.416: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:22:06.505: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:21:01.944: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #36948

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:28:03.585: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #27195

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:12:06.639: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:27:45.099: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:12:08.190: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:18:13.877: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:24:00.242: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:19:36.208: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:27:15.018: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:31:34.162: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:154
Timed out after 120.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"unknown\") has prevented the request from succeeding (get pods annotationupdate21418a16-1bd6-11e7-8096-0242ac110008)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420fcc500), Code:500}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:43:06.059: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:15:32.018: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:24:50.512: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:36:38.399: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] CronJob should remove from active list jobs that have been deleted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:30:17.066: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:25:14.088: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:16:03.492: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Apr  7 14:25:25.962: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-d4h5"

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/6444/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:375
Expected error:
    <*errors.errorString | 0xc4213e6050>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1718

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
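
The exit status 1 here is just the aggregate result of the whole suite; the skip expression above is a standard Ginkgo filter passed through by ./hack/ginkgo-e2e.sh. A minimal local-repro sketch, assuming the e2e binaries are already built and kubeconfig points at a comparable GCE cluster (the focus regex below is illustrative, not taken from this run):

    # Same wrapper script the CI job invokes; --ginkgo.focus and --ginkgo.skip
    # are pass-through Ginkgo flags. Narrow the run to one failing spec:
    export KUBERNETES_PROVIDER=gce
    ./hack/ginkgo-e2e.sh \
      '--ginkgo.focus=Guestbook application' \
      '--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]'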

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Apr 10 07:17:25.072: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.180.1.80:8080/dial?request=hostName&protocol=http&host=10.180.2.77&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:217

Issues about this test specifically: #32375
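
For manual triage, the /dial probe shown above can be replayed by hand. A rough sketch, assuming the networking framework's host-test-container-pod is still running in the test namespace (the namespace placeholder and pod name are assumptions; the IPs and query string come straight from the log):

    # Re-issue the same dial request the test made; an empty map means
    # netserver-1 on 10.180.2.77:8080 never answered the HTTP probe.
    kubectl exec -n <test-namespace> host-test-container-pod -- \
      curl -q -s 'http://10.180.1.80:8080/dial?request=hostName&protocol=http&host=10.180.2.77&port=8080&tries=1'

A healthy reply names the responding pod (netserver-1), which is what the expected map[netserver-1:{}] is derived from.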

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Timed out after 300.001s.
Expected
    <string>: Error reading file /etc/configmap-volumes/create/data-1: open /etc/configmap-volumes/create/data-1: no such file or directory, retrying
    [the same "Error reading file ... no such file or directory, retrying" line repeated 24 times in total]
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:336

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/6464/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203c14a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:85
Expected error:
    <*errors.errorString | 0xc4203e7b70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:492
Apr 10 17:56:20.867: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:385

Issues about this test specifically: #40977

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc4203c67c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203e5fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc42044fe50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:551

Issues about this test specifically: #33631 #33995 #34970

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/6672/
Multiple broken tests:

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:65
Expected error:
    <*errors.errorString | 0xc42107a200>: {
        s: "expected \"[/ep-2 override arguments]\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: [/ep-2 override arguments]",
    }
    expected "[/ep-2 override arguments]" in container output: Expected
        <string>: 
    to contain substring
        <string>: [/ep-2 override arguments]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218

Issues about this test specifically: #29467

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:710
Timed out after 300.001s.
Expected
    <string>: Error reading file /etc/projected-configmap-volumes/update/data-3: open /etc/projected-configmap-volumes/update/data-3: no such file or directory, retrying
    [the same "Error reading file ... no such file or directory, retrying" line repeated 35 times in total]
    
to contain substring
    <string>: value-3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:708

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:242
Expected error:
    <*errors.errorString | 0xc4211ee6a0>: {
        s: "gave up waiting for pod 'pvc-tester-rvsfc' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pvc-tester-rvsfc' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:395

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:422
Expected error:
    <*errors.errorString | 0xc4203a1dd0>: {
        s: "watch closed before Until timeout",
    }
    watch closed before Until timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:421

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/6867/
Multiple broken tests:

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:270
Expected error:
    <*errors.errorString | 0xc4213405d0>: {
        s: "pod \"pvc-tester-nr4fd\" did not exit with Success: pod \"pvc-tester-nr4fd\" failed to reach Success: gave up waiting for pod 'pvc-tester-nr4fd' to be 'success or failure' after 5m0s",
    }
    pod "pvc-tester-nr4fd" did not exit with Success: pod "pvc-tester-nr4fd" failed to reach Success: gave up waiting for pod 'pvc-tester-nr4fd' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:269

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:57
Expected error:
    <*errors.StatusError | 0xc421037a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.128.0.2:54312->10.128.0.4:10250: read: connection reset by peer'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-2cpg:10250/metrics'\") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-2cpg:10250)",
            Reason: "InternalError",
            Details: {
                Name: "bootstrap-e2e-minion-group-2cpg:10250",
                Group: "",
                Kind: "nodes",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.128.0.2:54312->10.128.0.4:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-2cpg:10250/metrics'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.128.0.2:54312->10.128.0.4:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-2cpg:10250/metrics'") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-2cpg:10250)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:55

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:154
Timed out after 120.000s.
Expected
    <string>: 
to contain substring
    <string>: builder="foo"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Apr 19 07:50:29.148: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.100.3.94 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #35283 #36867

@k8s-github-robot

This issue hasn't been active in 51 days. It will be closed in 38 days (Jun 12, 2017).

cc @k8s-merge-robot @spxtr

You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days.

@spiffxp

spiffxp commented May 31, 2017

/sig testing
/assign

I'm going to close this given how inactive it's been

@k8s-ci-robot added the sig/testing label May 31, 2017
@spiffxp

spiffxp commented May 31, 2017

/close
