
ci-kubernetes-e2e-gci-gce-proto: broken test run #43454

Closed
k8s-github-robot opened this issue Mar 21, 2017 · 28 comments

Labels
area/api Indicates an issue on api area. kind/flake Categorizes issue or PR as related to a flaky test. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5679/
Multiple broken tests:

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:57
Expected error:
    <*errors.StatusError | 0xc4211ae100>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.240.0.2:33746->10.240.0.4:10250: read: connection reset by peer'\\nTrying to reach: 'https://bootstrap-e2e-minion-group-4qs9:10250/metrics'\") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-4qs9:10250)",
            Reason: "InternalError",
            Details: {
                Name: "bootstrap-e2e-minion-group-4qs9:10250",
                Group: "",
                Kind: "nodes",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.240.0.2:33746->10.240.0.4:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-4qs9:10250/metrics'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.240.0.2:33746->10.240.0.4:10250: read: connection reset by peer'\nTrying to reach: 'https://bootstrap-e2e-minion-group-4qs9:10250/metrics'") has prevented the request from succeeding (get nodes bootstrap-e2e-minion-group-4qs9:10250)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/metrics_grabber_test.go:55

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543
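
For context, the MetricsGrabber fetches Kubelet metrics through the apiserver's node proxy, which is what the "get nodes bootstrap-e2e-minion-group-4qs9:10250" resource in the message above refers to. A minimal way to issue the same request by hand, assuming kubectl access to a comparable cluster and a kubectl new enough to support get --raw, would be:

    # Proxy through the apiserver to the Kubelet metrics endpoint on port 10250.
    # A "connection reset by peer" here points at the Kubelet on that node rather than the apiserver.
    kubectl get --raw "/api/v1/nodes/bootstrap-e2e-minion-group-4qs9:10250/proxy/metrics"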

Failed: [k8s.io] DNS configMap federations should be able to change federation configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:43
Expected error:
    <*errors.errorString | 0xc4203d3ce0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_common.go:268

Issues about this test specifically: #43100

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025
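
For reference, the e2e.go failure above is just the wrapper reporting a non-zero exit from hack/ginkgo-e2e.sh; the --kube-api-content-type=application/vnd.kubernetes.protobuf flag is what distinguishes this -proto job from the plain gci-gce job. An approximately equivalent local invocation, assuming a built Kubernetes tree and a running test cluster, would be:

    # Same skip filter and content type as the CI job; quoting added so the shell
    # does not interpret the | characters in the skip regex.
    ./hack/ginkgo-e2e.sh \
      '--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]' \
      --kube-api-content-type=application/vnd.kubernetes.protobuf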

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:122
Mar 21 05:56:17.454: pod e2e-tests-container-probe-qg742/liveness-exec - expected number of restarts: 1, found restarts: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:404

Issues about this test specifically: #30264

Previous issues for this suite: #36946 #37034 #40447 #42100 #43345

k8s-github-robot added kind/flake and priority/P2 labels Mar 21, 2017
calebamiles modified the milestone: v1.6 Mar 21, 2017
@ethernetdan
Contributor

bad run

ethernetdan modified the milestones: v1.7, v1.6 Mar 21, 2017
fejta added area/api and sig/api-machinery and removed team/test-infra labels Mar 21, 2017
@fejta
Contributor

fejta commented Mar 21, 2017

https://k8s-testgrid.appspot.com/google-gce#gci-gce-proto&width=20&sort-by-failures=

/assign @lavalamp

Daniel, do you know who cares about the gci-gce-proto job? I'm inclined to delete this job entirely unless I can find an owner for it. It looks like it flakes a bunch of DNS and other tests.

fejta removed their assignment Mar 21, 2017
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5735/
Multiple broken tests:

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:478
Mar 22 19:07:38.510: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:270

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc420f6c6d0>: {
        s: "error waiting for deployment \"test-recreate-deployment\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{sec:63625831470, nsec:575301179, loc:(*time.Location)(0x4994c40)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625831470, nsec:575301329, loc:(*time.Location)(0x4994c40)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}}",
    }
    error waiting for deployment "test-recreate-deployment" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:3, UpdatedReplicas:3, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{sec:63625831470, nsec:575301179, loc:(*time.Location)(0x4994c40)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63625831470, nsec:575301329, loc:(*time.Location)(0x4994c40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:322

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:492
Mar 22 19:12:09.736: Pod did not start running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:385

Issues about this test specifically: #40977

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5856/
Multiple broken tests:

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Expected error:
    <*errors.StatusError | 0xc420e6b580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.240.0.2:48338->10.240.0.5:4194: read: connection reset by peer'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-m7fb:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.240.0.2:48338->10.240.0.5:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-m7fb:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.240.0.2:48338->10.240.0.5:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-m7fb:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1035
expected recent(1) to be less than older(1)
recent lines:

older lines:


Expected
    <int>: 1
to be <
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1033

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Mar 25 17:07:17.587: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.180.2.157:8080/hostName
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:269

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:129
Expected error:
    <*errors.errorString | 0xc420f89730>: {
        s: "failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.75.235 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-gpgwb ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  error: unable to upgrade connection: container not found (\"nginx\")\n [] <nil> 0xc420d73260 exit status 1 <nil> <nil> true [0xc4200360f8 0xc420036110 0xc420036128] [0xc4200360f8 0xc420036110 0xc420036128] [0xc420036108 0xc420036120] [0xc3e090 0xc3e090] 0xc4212f27e0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: container not found (\"nginx\")\n\nerror:\nexit status 1\n",
    }
    failed to execute ls -idlh /data, error: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.75.235 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-gpgwb ss-1 -- /bin/sh -c ls -idlh /data] []  <nil>  error: unable to upgrade connection: container not found ("nginx")
     [] <nil> 0xc420d73260 exit status 1 <nil> <nil> true [0xc4200360f8 0xc420036110 0xc420036128] [0xc4200360f8 0xc420036110 0xc420036128] [0xc420036108 0xc420036120] [0xc3e090 0xc3e090] 0xc4212f27e0 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: container not found ("nginx")
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:107

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/5954/
Multiple broken tests:

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:55:36.836: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:53:57.941: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:58:44.735: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:58:07.937: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #36109

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:57:22.040: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27524 #32057

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:50:14.489: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:54:57.600: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26168 #27450 #43094

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Mar 28 02:02:17.283: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'dial tcp 10.240.0.5:10250: getsockopt: connection refused'\nTrying to reach: 'https://bootstrap-e2e-minion-group-mf68:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371
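
The "dial tcp 10.240.0.5:10250: getsockopt: connection refused" above, together with the repeated "Not ready nodes: bootstrap-e2e-minion-group-mf68" failures throughout this run, suggests the Kubelet on that node stopped serving. A first triage step, assuming the cluster were still up, GCE SSH access, the zone set in the active gcloud config, and the Kubelet running as a systemd unit on the GCI image, might be:

    # Hypothetical triage: check whether the Kubelet is running on the unready node and read its recent logs.
    gcloud compute ssh bootstrap-e2e-minion-group-mf68 -- \
      'sudo systemctl status kubelet --no-pager; sudo journalctl -u kubelet -n 100 --no-pager'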

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:57:24.559: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:53:33.899: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:23:55.602: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:21:33.546: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27507 #28275 #38583

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:21.682: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28003

Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:85
Expected error:
    <*errors.errorString | 0xc42044ec90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:526

Issues about this test specifically: #32436 #37267

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:07:38.779: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #43335

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:11:56.759: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:53:34.096: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #29828

Failed: [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:54:45.234: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:14:40.479: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #33008

Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:06:33.784: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:00.243: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:57:10.162: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:01:05.392: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:03:40.922: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:07:11.817: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:02:21.712: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:08:49.080: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:46:51.549: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:08:35.863: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:10:59.475: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #32644

Failed: [k8s.io] AppArmor should enforce an AppArmor profile {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:47:54.784: Couldn't delete ns: "e2e-tests-apparmor-mt1nz": namespace e2e-tests-apparmor-mt1nz was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-apparmor-mt1nz was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270
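
When a namespace gets stuck like this with one pod remaining, the usual follow-up, assuming the cluster were still reachable, is to see which pod is blocking deletion and why it has not terminated:

    # Hypothetical follow-up: list and describe the pod holding up namespace deletion.
    kubectl get pods -n e2e-tests-apparmor-mt1nz -o wide
    kubectl describe pods -n e2e-tests-apparmor-mt1nz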

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:48:19.088: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:15:03.050: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Service endpoints latency should not be very high [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:47:20.056: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #30632

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:46:47.971: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:26.962: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #35297

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:09:37.672: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #32584

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:53:21.527: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #37435

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:55:35.171: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:22.377: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:02:56.395: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27195

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:11:20.328: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:54:07.618: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:50:23.605: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] CronJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:48:10.370: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:18:02.656: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Garbage collector should delete RS created by deployment when not orphaning {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:53:35.305: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:01:20.300: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:47:01.625: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:08:22.520: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:55:32.933: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28348 #36703

Failed: AfterSuite {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:237
Mar 28 02:24:45.753: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #29933 #34111 #38765 #43286

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:50:22.128: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:22:48.458: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #31938

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:47:32.125: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #37914

Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:18:21.634: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28106 #35197 #37482

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:01:16.659: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Projected should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:08:17.699: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:46:58.416: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #31408

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:02:01.620: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:10:34.128: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] Projected should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:01:13.797: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:05:05.374: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:06:26.826: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28067 #28378 #32692 #33256 #34654

Failed: [k8s.io] ConfigMap should be consumable via the environment [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:58:25.010: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] CronJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:06:01.445: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:59:46.067: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:23:00.085: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:11:29.318: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28346

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:47:00.849: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:13:10.062: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #38556

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:08:23.398: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:48:06.218: Couldn't delete ns: "e2e-tests-kubectl-4mt4w": namespace e2e-tests-kubectl-4mt4w was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-4mt4w was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:270

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:20:45.173: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:15:42.003: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:49:01.927: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:03.353: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:58:49.406: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #31502 #32947 #38646

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:08:02.172: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:16:06.975: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28084

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:48.305: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #37027

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:14:27.127: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:22:47.279: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #34623 #34713 #36890 #37012 #37241 #43425

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:03:21.562: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #38516

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects no client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:53.006: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:26.426: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:07:32.217: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:48:05.884: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:53:28.099: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] Projected should provide container's memory limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:57:30.819: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:14:41.601: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:452
Expected
    <*errors.errorString | 0xc420415ec0>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:436

Issues about this test specifically: #32054 #36010

Failed: [k8s.io] DisruptionController should update PodDisruptionBudget status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:04:34.884: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #34119 #37176

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:51:35.035: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:11:50.699: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:05:00.282: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #30981

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:46:57.373: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:55:12.659: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #32023

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:51:18.204: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:59:09.445: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #36265 #36353 #36628

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:54:11.335: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #34520

Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:59:22.186: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] PodPreset should not modify the pod on conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:18:07.761: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:09:46.152: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:11:50.254: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #30264

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:05:36.851: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:58:28.845: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:47:08.046: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #32087

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:11:48.104: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #27079

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:15:22.249: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:54:03.354: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #38511

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:50:11.133: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #36242

Failed: [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:51:00.038: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:46:56.638: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:07:47.933: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:03:59.532: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #36554

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:12:20.760: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:12:01.296: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #36706

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:57:44.003: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:05:09.841: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #35601

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:50:19.569: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:00:30.064: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #34372

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:16:47.421: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends data, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:01:44.477: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] PrivilegedPod should enable privileged commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:56:46.696: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:02:10.450: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Sysctls should reject invalid sysctls {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:50:42.292: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] CronJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:11:02.681: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 01:50:55.320: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #32936

Failed: [k8s.io] Projected should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:14:33.999: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:18:37.577: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test specifically: #29467

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:120
Mar 28 02:01:01.213: All nodes should be ready after test, Not ready nodes: ", bootstrap-e2e-minion-group-mf68"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Issues about this test

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6005/
Multiple broken tests:

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:283
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:341
Timed out after 240.000s.
Expected
    <string>: 
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:338

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:422
Expected error:
    <*errors.errorString | 0xc4204146a0>: {
        s: "watch closed before Until timeout",
    }
    watch closed before Until timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:421

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

@lavalamp lavalamp assigned wojtek-t and unassigned lavalamp Mar 30, 2017
@lavalamp (Member)

@fejta I have no idea what this is, maybe @wojtek-t knows?

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6061/
Multiple broken tests:

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-20a52bc8-15a6-11e7-afc3-0242ac110003  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-20a52bc8-15a6-11e7-afc3-0242ac110003' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-4d59', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-20a52bc8-15a6-11e7-afc3-0242ac110003' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-4d59', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:628
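
If the leaked pd-ssd ever needs manual cleanup, a minimal sketch using the disk, instance, zone and project names quoted in the errors above (plain gcloud commands; verify the names are still current before running):

```
# Detach the leaked PD from the minion it is still attached to, then delete it.
# Names are copied from the error text above.
gcloud compute instances detach-disk bootstrap-e2e-minion-group-4d59 \
  --disk bootstrap-e2e-20a52bc8-15a6-11e7-afc3-0242ac110003 \
  --zone us-central1-f --project k8s-jkns-gci-gce-protobuf

gcloud compute disks delete bootstrap-e2e-20a52bc8-15a6-11e7-afc3-0242ac110003 \
  --zone us-central1-f --project k8s-jkns-gci-gce-protobuf
```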

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:68
Expected error:
    <*errors.StatusError | 0xc420ffd580>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'dial tcp: lookup bootstrap-e2e-minion-group-48pm on 169.254.169.254:53: no such host'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-48pm:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'dial tcp: lookup bootstrap-e2e-minion-group-48pm on 169.254.169.254:53: no such host'\nTrying to reach: 'http://bootstrap-e2e-minion-group-48pm:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'dial tcp: lookup bootstrap-e2e-minion-group-48pm on 169.254.169.254:53: no such host'\nTrying to reach: 'http://bootstrap-e2e-minion-group-48pm:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #37435

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@wojtek-t (Member)

Yeah - I set this up in the past. Given that protobufs are already the default, we should probably remove this suite. But sure - I will take a look at it later.
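
For context, the suite only changes the negotiated wire format: the e2e client talks to the apiserver with --kube-api-content-type=application/vnd.kubernetes.protobuf instead of JSON. A minimal sketch of requesting the same content type by hand (APISERVER, client.crt and client.key are placeholders; assumes client-certificate auth):

```
# Ask the apiserver for a Pod list serialized as protobuf rather than JSON.
curl -sk --cert client.crt --key client.key \
  -H 'Accept: application/vnd.kubernetes.protobuf' \
  "https://APISERVER/api/v1/namespaces/default/pods" -o pods.pb
```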

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6079/
Multiple broken tests:

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-02331792-15fa-11e7-a8c2-0242ac110009' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-rbjg', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-02331792-15fa-11e7-a8c2-0242ac110009' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-rbjg', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:628

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc42145cc40>: {
        s: "deployment \"test-recreate-deployment\" is running new pods alongside old pods: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:6, UpdatedReplicas:3, ReadyReplicas:3, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63626551883, nsec:749757972, loc:(*time.Location)(0x49d23e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626551883, nsec:749758111, loc:(*time.Location)(0x49d23e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    deployment "test-recreate-deployment" is running new pods alongside old pods: v1beta1.DeploymentStatus{ObservedGeneration:2, Replicas:6, UpdatedReplicas:3, ReadyReplicas:3, AvailableReplicas:3, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63626551883, nsec:749757972, loc:(*time.Location)(0x49d23e0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63626551883, nsec:749758111, loc:(*time.Location)(0x49d23e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:333

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-02331792-15fa-11e7-a8c2-0242ac110009  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6083/
Multiple broken tests:

Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    content of file "/etc/labels": key1="value1"
    key2="value2"
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124

Issues about this test specifically: #43335
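
The failing check can be reproduced by hand: the test adds a third label to the pod and waits for the downward API volume to refresh, which only happens on the kubelet's periodic sync. A rough sketch ("labelsupdate-pod" is a placeholder pod name; use the pod from the failing namespace):

```
# Add the label the test waits for, then re-read the downward API file.
kubectl label pod labelsupdate-pod key3=value3 --overwrite
kubectl exec labelsupdate-pod -- cat /etc/labels
```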

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-95d4a682-160f-11e7-b461-0242ac110006' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-bsm7', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-95d4a682-160f-11e7-b461-0242ac110006' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-bsm7', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:628

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-95d4a682-160f-11e7-b461-0242ac110006  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6136/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Apr  1 11:26:38.873: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.180.2.72:8080/dial?request=hostName&protocol=http&host=10.180.1.61&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:216

Issues about this test specifically: #32375
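
The probe that returned map[] can be re-run by hand from the host-network test pod, using the exact URL from the log ("host-test-container-pod" and NAMESPACE are placeholders):

```
# Re-run the dial probe against the netserver pod IP quoted above.
kubectl exec host-test-container-pod -n NAMESPACE -- \
  curl -q -s 'http://10.180.2.72:8080/dial?request=hostName&protocol=http&host=10.180.1.61&port=8080&tries=1'
```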

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:67
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:479

Failed: [k8s.io] Projected should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:812
Expected error:
    <*errors.errorString | 0xc420d26210>: {
        s: "expected \"downwardapi-volume-041325e5-1708-11e7-88bb-0242ac110008\\n\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: downwardapi-volume-041325e5-1708-11e7-88bb-0242ac110008\n    ",
    }
    expected "downwardapi-volume-041325e5-1708-11e7-88bb-0242ac110008\n" in container output: Expected
        <string>: 
    to contain substring
        <string>: downwardapi-volume-041325e5-1708-11e7-88bb-0242ac110008
        
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2184

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6190/
Multiple broken tests:

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:235
Apr  2 15:48:11.630: pod e2e-tests-container-probe-kd4lf/liveness-http - expected number of restarts: 0, found restarts: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:404

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:422
Expected error:
    <*errors.errorString | 0xc42040f270>: {
        s: "watch closed before Until timeout",
    }
    watch closed before Until timeout
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:421

Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:206
Expected error:
    <*strconv.NumError | 0xc4218808d0>: {
        Func: "ParseInt",
        Num: "",
        Err: {s: "invalid syntax"},
    }
    strconv.ParseInt: parsing "": invalid syntax
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:193

Issues about this test specifically: #36288 #36913

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6216/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1102
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.201.237 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-f5qkm] []  <nil>  Unable to connect to the server: dial tcp 35.184.201.237:443: i/o timeout\n [] <nil> 0xc4211ff140 exit status 1 <nil> <nil> true [0xc420c9ae98 0xc420c9aeb0 0xc420c9aec8] [0xc420c9ae98 0xc420c9aeb0 0xc420c9aec8] [0xc420c9aea8 0xc420c9aec0] [0x8dae50 0x8dae50] 0xc4211f0900 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 35.184.201.237:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.201.237 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-f5qkm] []  <nil>  Unable to connect to the server: dial tcp 35.184.201.237:443: i/o timeout
     [] <nil> 0xc4211ff140 exit status 1 <nil> <nil> true [0xc420c9ae98 0xc420c9aeb0 0xc420c9aec8] [0xc420c9ae98 0xc420c9aeb0 0xc420c9aec8] [0xc420c9aea8 0xc420c9aec0] [0x8dae50 0x8dae50] 0xc4211f0900 <nil>}:
    Command stdout:
    
    stderr:
    Unable to connect to the server: dial tcp 35.184.201.237:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2113

Issues about this test specifically: #27014 #27834
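
Both kubectl failures in this run are plain connection timeouts to the master. A quick manual reachability check against the address from the error (any HTTP status, even 401, proves TCP/TLS works; a timeout reproduces the failure):

```
# Probe the apiserver endpoint that the failing kubectl calls could not reach.
curl -sk -o /dev/null -w '%{http_code}\n' --max-time 10 https://35.184.201.237/healthz
kubectl --server=https://35.184.201.237 --kubeconfig=/workspace/.kube/config version
```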

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-6b1ee78a-186c-11e7-9ddb-0242ac110003  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:948
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.201.237 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v40t0] []  0xc420dcfce0  warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Get https://35.184.201.237/api/v1/namespaces/e2e-tests-kubectl-v40t0/pods/pause: dial tcp 35.184.201.237:443: i/o timeout\n [] <nil> 0xc4215afb00 exit status 1 <nil> <nil> true [0xc4200366b8 0xc4200366e0 0xc4200366f0] [0xc4200366b8 0xc4200366e0 0xc4200366f0] [0xc4200366c0 0xc4200366d8 0xc4200366e8] [0x8dad50 0x8dae50 0x8dae50] 0xc4215b06c0 <nil>}:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Get https://35.184.201.237/api/v1/namespaces/e2e-tests-kubectl-v40t0/pods/pause: dial tcp 35.184.201.237:443: i/o timeout\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.201.237 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v40t0] []  0xc420dcfce0  warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    error: error when stopping "STDIN": Get https://35.184.201.237/api/v1/namespaces/e2e-tests-kubectl-v40t0/pods/pause: dial tcp 35.184.201.237:443: i/o timeout
     [] <nil> 0xc4215afb00 exit status 1 <nil> <nil> true [0xc4200366b8 0xc4200366e0 0xc4200366f0] [0xc4200366b8 0xc4200366e0 0xc4200366f0] [0xc4200366c0 0xc4200366d8 0xc4200366e8] [0x8dad50 0x8dae50 0x8dae50] 0xc4215b06c0 <nil>}:
    Command stdout:
    
    stderr:
    warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    error: error when stopping "STDIN": Get https://35.184.201.237/api/v1/namespaces/e2e-tests-kubectl-v40t0/pods/pause: dial tcp 35.184.201.237:443: i/o timeout
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2113

Issues about this test specifically: #28493 #29964

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6223/
Multiple broken tests:

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-53591020-188c-11e7-bdfc-0242ac110007' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-q56n', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-53591020-188c-11e7-bdfc-0242ac110007' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-q56n', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:540

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:129
Expected error:
    <*errors.errorString | 0xc4210c87c0>: {
        s: "unexpected hostname () and stateful pod name (ss-2) not equal",
    }
    unexpected hostname () and stateful pod name (ss-2) not equal
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:110
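
The identity check compares a stateful pod's hostname with its pod name; the empty hostname in the error suggests the exec probe got no output. It can be checked by hand once the pod is running (NAMESPACE is a placeholder):

```
# The test expects the hostname inside ss-2 to equal the pod name "ss-2".
kubectl exec ss-2 -n NAMESPACE -- hostname
```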

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-53591020-188c-11e7-bdfc-0242ac110007  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6256/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-ab79d532-1923-11e7-bc81-0242ac110002' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-crwf', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-ab79d532-1923-11e7-bc81-0242ac110002' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-crwf', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:540

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:276
wait for pod "kube-proxy-mode-detector" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc4204509b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:148

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:53
Expected error:
    <*errors.errorString | 0xc4207f14f0>: {
        s: "expected \"content of file \\\"/etc/configmap-volume/data-1\\\": value-1\" in container output: Expected\n    <string>: mode of file \"/etc/configmap-volume/data-1\": -rw-r--r--\n    kubernetes.io/config.seen=\"2017-04-04T10:35:30.629382689Z\"\n    \nto contain substring\n    <string>: content of file \"/etc/configmap-volume/data-1\": value-1",
    }
    expected "content of file \"/etc/configmap-volume/data-1\": value-1" in container output: Expected
        <string>: mode of file "/etc/configmap-volume/data-1": -rw-r--r--
        kubernetes.io/config.seen="2017-04-04T10:35:30.629382689Z"
        
    to contain substring
        <string>: content of file "/etc/configmap-volume/data-1": value-1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-ab79d532-1923-11e7-bc81-0242ac110002  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6284/
Multiple broken tests:

Failed: [k8s.io] Projected should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:917
Timed out after 120.002s.
Expected
    <string>: content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-05T01:38:44.600890714Z"
    kubernetes.io/config.source="api"
    content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-05T01:38:44.600890714Z"
    kubernetes.io/config.source="api"
    content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-05T01:38:44.600890714Z"
    kubernetes.io/config.source="api"
    content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-05T01:38:44.600890714Z"
    kubernetes.io/config.source="api"
    content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-05T01:38:44.600890714Z"
    kubernetes.io/config.source="api"
    content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-05T01:38:44.600890714Z"
    kubernetes.io/config.source="api"
    [… the same three annotation lines repeat verbatim for every remaining poll attempt in the captured output]
    
to contain substring
    <string>: builder="foo"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:916
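
The failure above is the projected-volume e2e test timing out while polling for an updated downward-API annotation: the container keeps printing the stale `builder="bar"` value while the suite waits for `builder="foo"` to show up. Below is a minimal sketch of that wait-for-substring pattern, written independently of the e2e framework; `readOutput` is a hypothetical stand-in for however the container output is actually fetched.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// waitForSubstring polls read() until its result contains want, or the
// timeout elapses. This mirrors the shape of the assertion that failed
// above ("Timed out ... to contain substring builder=\"foo\"").
func waitForSubstring(read func() string, want string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if strings.Contains(read(), want) {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %q in output", want)
}

func main() {
	// Hypothetical fetch: always returns the stale annotation value.
	readOutput := func() string { return `content of file "/etc/annotations": builder="bar"` }
	err := waitForSubstring(readOutput, `builder="foo"`, 2*time.Second, 200*time.Millisecond)
	fmt.Println(err)
}
```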

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+bootstrap-e2e-e7f2e8be-199f-11e7-93d9-0242ac110008  us-central1-f  10       pd-ssd  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454 #42510 #43153

Failed: [k8s.io] Volumes [Volume] [k8s.io] PD should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volumes.go:851
Error deleting PD
Expected error:
    <volume.deletedVolumeInUseError>: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-e7f2e8be-199f-11e7-93d9-0242ac110008' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-f2g9', resourceInUseByAnotherResource
    googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/disks/bootstrap-e2e-e7f2e8be-199f-11e7-93d9-0242ac110008' is already being used by 'projects/k8s-jkns-gci-gce-protobuf/zones/us-central1-f/instances/bootstrap-e2e-minion-group-f2g9', resourceInUseByAnotherResource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:540
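
The DiffResources leak and the deletedVolumeInUseError above are two views of the same problem: the PD was still attached to a node when the test tried to delete it, so GCE rejected the delete with resourceInUseByAnotherResource and the disk was left behind. A minimal retry sketch follows, assuming a hypothetical `deleteDisk` call in place of the real cloud-provider API; the idea is that this error is transient while the detach is still in flight.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

// deleteDisk is a hypothetical stand-in for the GCE disk-delete call; here it
// pretends the disk stays attached for the first few attempts.
func deleteDisk(name string, attempt int) error {
	if attempt < 3 {
		return errors.New("googleapi: Error 400: ... resourceInUseByAnotherResource")
	}
	return nil
}

// deleteDiskWithRetry keeps retrying while the error is the transient
// "still attached" case, and gives up after the timeout.
func deleteDiskWithRetry(name string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		err := deleteDisk(name, attempt)
		if err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "resourceInUseByAnotherResource") {
			return err
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("gave up deleting disk %q after %v", name, timeout)
}

func main() {
	fmt.Println(deleteDiskWithRetry("bootstrap-e2e-example-disk", 10*time.Second, time.Second))
}
```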

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6518/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
Expected error:
    <*errors.errorString | 0xc4203ac2a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:302

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
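
For context on the HPA failure above: the autoscaler's core CPU rule is roughly desired = ceil(current × currentUtilization / targetUtilization), so one pod running well above a 50% target should be scaled to two, and the timeout means that never happened within the wait. A rough sketch of the rule (the real controller adds tolerances and rate limits not shown here):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the basic CPU-utilization scaling rule.
func desiredReplicas(current int, currentUtilization, targetUtilization float64) int {
	if targetUtilization <= 0 {
		return current
	}
	return int(math.Ceil(float64(current) * currentUtilization / targetUtilization))
}

func main() {
	// 1 replica at ~100% CPU against a 50% target should scale to 2.
	fmt.Println(desiredReplicas(1, 100, 50))
}
```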

Failed: [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:822
Expected error:
    <*errors.errorString | 0xc420f202b0>: {
        s: "expected \"mode of file \\\"/etc/podname\\\": -r--------\" in container output: Expected\n    <string>: failed to get container status {\"docker\" \"745cb784505fce7a8dea33f69e6dd1757ed6e9ab1fc82f760f5d2f84906aeb79\"}: rpc error: code = 2 desc = Error: No such container: 745cb784505fce7a8dea33f69e6dd1757ed6e9ab1fc82f760f5d2f84906aeb79\nto contain substring\n    <string>: mode of file \"/etc/podname\": -r--------",
    }
    expected "mode of file \"/etc/podname\": -r--------" in container output: Expected
        <string>: failed to get container status {"docker" "745cb784505fce7a8dea33f69e6dd1757ed6e9ab1fc82f760f5d2f84906aeb79"}: rpc error: code = 2 desc = Error: No such container: 745cb784505fce7a8dea33f69e6dd1757ed6e9ab1fc82f760f5d2f84906aeb79
    to contain substring
        <string>: mode of file "/etc/podname": -r--------
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:710
Timed out after 300.001s.
Expected
    <string>: Error reading file /etc/projected-configmap-volumes/create/data-1: open /etc/projected-configmap-volumes/create/data-1: no such file or directory, retrying
    [… the same "Error reading file /etc/projected-configmap-volumes/create/data-1 …, retrying" line repeats for the remainder of the 300s poll]
    
to contain substring
    <string>: value-1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:707

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected
    <time.Duration>: 63162209701
to be <
    <time.Duration>: 30000000000
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:329

Issues about this test specifically: #36242
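
The two numbers in that proxy-logs assertion are time.Duration values in nanoseconds: the proxied request took about 63.2s against a 30s ceiling. A quick way to read them:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	got := time.Duration(63162209701)   // observed latency from the failure
	limit := time.Duration(30000000000) // the test's upper bound
	fmt.Printf("got %v, limit %v, within limit: %v\n", got, limit, got < limit)
}
```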

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6633/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:69
Expected error:
    <*errors.errorString | 0xc420fd7cf0>: {
        s: "failed to get logs from downwardapi-volume-be4afe5c-1fb1-11e7-b1b6-0242ac110009 for client-container: unknown (get pods downwardapi-volume-be4afe5c-1fb1-11e7-b1b6-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-be4afe5c-1fb1-11e7-b1b6-0242ac110009 for client-container: unknown (get pods downwardapi-volume-be4afe5c-1fb1-11e7-b1b6-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:429
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-fvfp1 nginx echo running in container] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n [] <nil> 0xc420aceae0 exit status 1 <nil> <nil> true [0xc420cf21a8 0xc420cf21c0 0xc420cf21d8] [0xc420cf21a8 0xc420cf21c0 0xc420cf21d8] [0xc420cf21b8 0xc420cf21d0] [0x12bffa0 0x12bffa0] 0xc420794000 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-fvfp1 nginx echo running in container] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
     [] <nil> 0xc420aceae0 exit status 1 <nil> <nil> true [0xc420cf21a8 0xc420cf21c0 0xc420cf21d8] [0xc420cf21a8 0xc420cf21c0 0xc420cf21d8] [0xc420cf21b8 0xc420cf21d0] [0x12bffa0 0x12bffa0] 0xc420794000 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2113

Issues about this test specifically: #28426 #32168 #33756 #34797
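
The exec failures in this run all show the same Forbidden error: the identity the apiserver presents when upgrading the connection (user=kube-apiserver) is not authorized on the nodes/proxy subresource. As a rough illustration only, and not the fix that was applied for this run, a ClusterRole covering that subresource could be expressed with the current k8s.io/api types like this (the role name is hypothetical):

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative only: grant the verbs mentioned in the error on nodes/proxy.
	role := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "example-node-proxy"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"nodes/proxy"},
			Verbs:     []string{"get", "create"},
		}},
	}
	fmt.Printf("%+v\n", role)
}
```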

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:101
Expected error:
    <*errors.errorString | 0xc420a34cd0>: {
        s: "failed to get logs from var-expansion-bdd1bf77-1fb1-11e7-8bb9-0242ac110009 for dapi-container: unknown (get pods var-expansion-bdd1bf77-1fb1-11e7-8bb9-0242ac110009)",
    }
    failed to get logs from var-expansion-bdd1bf77-1fb1-11e7-8bb9-0242ac110009 for dapi-container: unknown (get pods var-expansion-bdd1bf77-1fb1-11e7-8bb9-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:409
Expected error:
    <*errors.errorString | 0xc4209c9990>: {
        s: "failed to get logs from pod-projected-configmaps-be74f299-1fb1-11e7-b392-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-be74f299-1fb1-11e7-b392-0242ac110009)",
    }
    failed to get logs from pod-projected-configmaps-be74f299-1fb1-11e7-b392-0242ac110009 for projected-configmap-volume-test: unknown (get pods pod-projected-configmaps-be74f299-1fb1-11e7-b392-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:510
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-qfkd2 exec nginx -- /bin/sh -c exit 0] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n [] <nil> 0xc420d0cd80 exit status 1 <nil> <nil> true [0xc4212f2808 0xc4212f2820 0xc4212f2838] [0xc4212f2808 0xc4212f2820 0xc4212f2838] [0xc4212f2818 0xc4212f2830] [0x12bffa0 0x12bffa0] 0xc420fecc00 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-qfkd2 exec nginx -- /bin/sh -c exit 0] []  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
     [] <nil> 0xc420d0cd80 exit status 1 <nil> <nil> true [0xc4212f2808 0xc4212f2820 0xc4212f2838] [0xc4212f2808 0xc4212f2820 0xc4212f2838] [0xc4212f2818 0xc4212f2830] [0x12bffa0 0x12bffa0] 0xc420fecc00 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:474

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:204
Expected error:
    <*errors.errorString | 0xc42115b980>: {
        s: "failed to get logs from downwardapi-volume-be3a703b-1fb1-11e7-8ebf-0242ac110009 for client-container: unknown (get pods downwardapi-volume-be3a703b-1fb1-11e7-8ebf-0242ac110009)",
    }
    failed to get logs from downwardapi-volume-be3a703b-1fb1-11e7-8ebf-0242ac110009 for client-container: unknown (get pods downwardapi-volume-be3a703b-1fb1-11e7-8ebf-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:60
Expected error:
    <*errors.errorString | 0xc4213bf1e0>: {
        s: "failed to get logs from pod-secrets-be8db3ee-1fb1-11e7-9ef2-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-be8db3ee-1fb1-11e7-9ef2-0242ac110009)",
    }
    failed to get logs from pod-secrets-be8db3ee-1fb1-11e7-9ef2-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-be8db3ee-1fb1-11e7-9ef2-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:55
Expected error:
    <*errors.errorString | 0xc42126d790>: {
        s: "failed to get logs from pod-secrets-c5f818df-1fb1-11e7-8609-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-c5f818df-1fb1-11e7-8609-0242ac110009)",
    }
    failed to get logs from pod-secrets-c5f818df-1fb1-11e7-8609-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-c5f818df-1fb1-11e7-8609-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc4211e43f0>: {
        s: "failed to get logs from pod-secrets-c8825412-1fb1-11e7-9ef2-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-c8825412-1fb1-11e7-9ef2-0242ac110009)",
    }
    failed to get logs from pod-secrets-c8825412-1fb1-11e7-9ef2-0242ac110009 for secret-volume-test: unknown (get pods pod-secrets-c8825412-1fb1-11e7-9ef2-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc420ae5680>: {
        s: "failed to get logs from pod-c27cf974-1fb1-11e7-bf54-0242ac110009 for test-container: unknown (get pods pod-c27cf974-1fb1-11e7-bf54-0242ac110009)",
    }
    failed to get logs from pod-c27cf974-1fb1-11e7-bf54-0242ac110009 for test-container: unknown (get pods pod-c27cf974-1fb1-11e7-bf54-0242ac110009)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2213

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:466
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-480hd exec nginx echo running in container] [KUBE_NODE_OS_DISTRIBUTION=gci CLOUDSDK_CORE_PRINT_UNHANDLED_TRACEBACKS=1 BUILD_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-gci-gce-proto/6633/ JENKINS_AWS_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/715f7fa3-bd32-48e4-ac43-7ec9714aeabd/kube_aws_rsa.pub.txt HOSTNAME=3dcfcb1a92e5 GOLANG_VERSION=1.6.3 ROOT_BUILD_CAUSE_TIMERTRIGGER=true CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true KUBERNETES_RELEASE=v1.7.0-alpha.1.357+a4354569937bcd KUBEKINS_TIMEOUT=50m HUDSON_SERVER_COOKIE=02143c9ae5889f5c SHELL=/bin/bash TERM=xterm PROJECT=k8s-jkns-gci-gce-protobuf SSH_CLIENT=10.240.0.37 57411 22 JENKINS_AWS_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/1fad1dc7-2eee-468a-b84f-4f443aad8472/kube_aws_rsa.txt KUBE_GCE_INSTANCE_PREFIX=bootstrap-e2e KUBE_CONFIG_FILE=config-test.sh GOOGLE_APPLICATION_CREDENTIALS=/service-account.json BUILD_TAG=jenkins-ci-kubernetes-e2e-gci-gce-proto-6633 E2E_UP=true CLUSTER_NAME=bootstrap-e2e STORAGE_MEDIA_TYPE=application/vnd.kubernetes.protobuf ROOT_BUILD_CAUSE=TIMERTRIGGER,SCMTRIGGER CLOUDSDK_EXPERIMENTAL_FAST_COMPONENT_UPDATE=false WORKSPACE=/workspace JOB_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-gci-gce-proto/ KUBERNETES_RELEASE_URL=https://storage.googleapis.com/kubernetes-release-dev/ci JENKINS_AWS_CREDENTIALS_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/ebb67ae2-9489-4973-a07b-90d9defd9636/KubernetesPostsubmitTests.txt USER=jenkins CLOUDSDK_CONFIG=/workspace/.config/gcloud KUBE_GCE_NETWORK=bootstrap-e2e BUILD_CAUSE_UPSTREAMTRIGGER=true NUM_MIGS=1 KUBERNETES_SKIP_CONFIRM=y E2E_REPORT_DIR=/workspace/_artifacts JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/2570f308-24d8-426f-95fd-962cef0a0c17/google_compute_engine.txt INSTANCE_PREFIX=bootstrap-e2e GINKGO_TEST_ARGS=--ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\] --kube-api-content-type=application/vnd.kubernetes.protobuf KUBE_RUNTIME_CONFIG=batch/v2alpha1=true JENKINS_HOME=/var/lib/jenkins NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat CLOUDSDK_CORE_DISABLE_PROMPTS=1 KUBERNETES_DOWNLOAD_TESTS=y PATH=/workspace/kubernetes/platforms/linux/amd64:/google-cloud-sdk/bin:/workspace:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ROOT_BUILD_CAUSE_SCMTRIGGER=true MAIL=/var/mail/jenkins PWD=/workspace/kubernetes HUDSON_URL=http://goto.google.com/k8s-test/ LANG=en_US.UTF-8 JOB_NAME=ci-kubernetes-e2e-gci-gce-proto E2E_TEST=true KUBECTL=./cluster/kubectl.sh --match-server-version XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt BUILD_DISPLAY_NAME=#6633 KUBERNETES_PROVIDER=gce BUILD_CAUSE=TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,UPSTREAMTRIGGER JENKINS_URL=http://goto.google.com/k8s-test/ BUILD_ID=6633 GOLANG_DOWNLOAD_SHA256=cdde5e08530c0579255d6153b08fdb3b8e47caabbe717bc7bcd7561275a87aeb KUBE_GKE_NETWORK=bootstrap-e2e JOB_BASE_NAME=ci-kubernetes-e2e-gci-gce-proto E2E_MIN_STARTUP_PODS=8 HOME=/workspace SHLVL=4 PS4=+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): } CLUSTER_API_VERSION=1.7.0-alpha.1.357+a4354569937bcd BOOTSTRAP_MIGRATION=yes 
no_proxy=127.0.0.1,localhost E2E_NAME=bootstrap-e2e EXECUTOR_NUMBER=4 JENKINS_SERVER_COOKIE=02143c9ae5889f5c KUBE_GCE_ZONE=us-central1-f JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/f70002ad-a095-4389-85e6-b0790b370b1c/google_compute_engine.pub.txt NODE_LABELS=beta.kubernetes.io/fluentd-ds-ready=true GINKGO_PARALLEL=y LOGNAME=jenkins HUDSON_HOME=/var/lib/jenkins SSH_CONNECTION=10.240.0.37 57411 10.240.0.21 22 BUILD_CAUSE_TIMERTRIGGER=true NODE_NAME=agent-light-16 GOPATH=/go BUILD_NUMBER=6633 KUBE_AWS_INSTANCE_PREFIX=bootstrap-e2e E2E_PUBLISH_PATH= KUBERNETES_SKIP_CREATE_CLUSTER=y HUDSON_COOKIE=3f6e5389-292f-42df-8381-666bcec4800f FAIL_ON_GCP_RESOURCE_LEAK=true E2E_DOWN=true GOLANG_DOWNLOAD_URL=https://golang.org/dl/go1.6.3.linux-amd64.tar.gz _=/workspace/kubernetes/platforms/linux/amd64/ginkgo https_proxy=http://127.0.0.1:54985]  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy)\n [] <nil> 0xc420aa2b70 exit status 1 <nil> <nil> true [0xc420c14268 0xc420c14280 0xc420c14298] [0xc420c14268 0xc420c14280 0xc420c14298] [0xc420c14278 0xc420c14290] [0x12bffa0 0x12bffa0] 0xc420edf9e0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-480hd exec nginx echo running in container] [KUBE_NODE_OS_DISTRIBUTION=gci CLOUDSDK_CORE_PRINT_UNHANDLED_TRACEBACKS=1 BUILD_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-gci-gce-proto/6633/ JENKINS_AWS_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/715f7fa3-bd32-48e4-ac43-7ec9714aeabd/kube_aws_rsa.pub.txt HOSTNAME=3dcfcb1a92e5 GOLANG_VERSION=1.6.3 ROOT_BUILD_CAUSE_TIMERTRIGGER=true CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true KUBERNETES_RELEASE=v1.7.0-alpha.1.357+a4354569937bcd KUBEKINS_TIMEOUT=50m HUDSON_SERVER_COOKIE=02143c9ae5889f5c SHELL=/bin/bash TERM=xterm PROJECT=k8s-jkns-gci-gce-protobuf SSH_CLIENT=10.240.0.37 57411 22 JENKINS_AWS_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/1fad1dc7-2eee-468a-b84f-4f443aad8472/kube_aws_rsa.txt KUBE_GCE_INSTANCE_PREFIX=bootstrap-e2e KUBE_CONFIG_FILE=config-test.sh GOOGLE_APPLICATION_CREDENTIALS=/service-account.json BUILD_TAG=jenkins-ci-kubernetes-e2e-gci-gce-proto-6633 E2E_UP=true CLUSTER_NAME=bootstrap-e2e STORAGE_MEDIA_TYPE=application/vnd.kubernetes.protobuf ROOT_BUILD_CAUSE=TIMERTRIGGER,SCMTRIGGER CLOUDSDK_EXPERIMENTAL_FAST_COMPONENT_UPDATE=false WORKSPACE=/workspace JOB_URL=http://goto.google.com/k8s-test/job/ci-kubernetes-e2e-gci-gce-proto/ KUBERNETES_RELEASE_URL=https://storage.googleapis.com/kubernetes-release-dev/ci JENKINS_AWS_CREDENTIALS_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/ebb67ae2-9489-4973-a07b-90d9defd9636/KubernetesPostsubmitTests.txt USER=jenkins CLOUDSDK_CONFIG=/workspace/.config/gcloud KUBE_GCE_NETWORK=bootstrap-e2e BUILD_CAUSE_UPSTREAMTRIGGER=true NUM_MIGS=1 KUBERNETES_SKIP_CONFIRM=y E2E_REPORT_DIR=/workspace/_artifacts JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/2570f308-24d8-426f-95fd-962cef0a0c17/google_compute_engine.txt INSTANCE_PREFIX=bootstrap-e2e GINKGO_TEST_ARGS=--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf KUBE_RUNTIME_CONFIG=batch/v2alpha1=true JENKINS_HOME=/var/lib/jenkins NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat CLOUDSDK_CORE_DISABLE_PROMPTS=1 KUBERNETES_DOWNLOAD_TESTS=y PATH=/workspace/kubernetes/platforms/linux/amd64:/google-cloud-sdk/bin:/workspace:/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ROOT_BUILD_CAUSE_SCMTRIGGER=true MAIL=/var/mail/jenkins PWD=/workspace/kubernetes HUDSON_URL=http://goto.google.com/k8s-test/ LANG=en_US.UTF-8 JOB_NAME=ci-kubernetes-e2e-gci-gce-proto E2E_TEST=true KUBECTL=./cluster/kubectl.sh --match-server-version XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt BUILD_DISPLAY_NAME=#6633 KUBERNETES_PROVIDER=gce BUILD_CAUSE=TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,TIMERTRIGGER,UPSTREAMTRIGGER JENKINS_URL=http://goto.google.com/k8s-test/ BUILD_ID=6633 GOLANG_DOWNLOAD_SHA256=cdde5e08530c0579255d6153b08fdb3b8e47caabbe717bc7bcd7561275a87aeb KUBE_GKE_NETWORK=bootstrap-e2e JOB_BASE_NAME=ci-kubernetes-e2e-gci-gce-proto E2E_MIN_STARTUP_PODS=8 HOME=/workspace SHLVL=4 PS4=+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): } CLUSTER_API_VERSION=1.7.0-alpha.1.357+a4354569937bcd BOOTSTRAP_MIGRATION=yes 
no_proxy=127.0.0.1,localhost E2E_NAME=bootstrap-e2e EXECUTOR_NUMBER=4 JENKINS_SERVER_COOKIE=02143c9ae5889f5c KUBE_GCE_ZONE=us-central1-f JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/var/lib/jenkins/workspace/ci-kubernetes-e2e-gci-gce-proto@tmp/secretFiles/f70002ad-a095-4389-85e6-b0790b370b1c/google_compute_engine.pub.txt NODE_LABELS=beta.kubernetes.io/fluentd-ds-ready=true GINKGO_PARALLEL=y LOGNAME=jenkins HUDSON_HOME=/var/lib/jenkins SSH_CONNECTION=10.240.0.37 57411 10.240.0.21 22 BUILD_CAUSE_TIMERTRIGGER=true NODE_NAME=agent-light-16 GOPATH=/go BUILD_NUMBER=6633 KUBE_AWS_INSTANCE_PREFIX=bootstrap-e2e E2E_PUBLISH_PATH= KUBERNETES_SKIP_CREATE_CLUSTER=y HUDSON_COOKIE=3f6e5389-292f-42df-8381-666bcec4800f FAIL_ON_GCP_RESOURCE_LEAK=true E2E_DOWN=true GOLANG_DOWNLOAD_URL=https://golang.org/dl/go1.6.3.linux-amd64.tar.gz _=/workspace/kubernetes/platforms/linux/amd64/ginkgo https_proxy=http://127.0.0.1:54985]  <nil>  error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy)
     [] <nil> 0xc420aa2b70 exit status 1 <nil> <nil> true [0xc420c14268 0xc420c14280 0xc420c14298] [0xc420c14268 0xc420c14280 0xc420c14298] [0xc420c14278 0xc420c14290] [0x12bffa0 0x12bffa0] 0xc420edf9e0 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy)
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2113

Issues about this test specifically: #27156 #28979 #30489 #33649

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6728/
Multiple broken tests:

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:501
Apr 14 15:46:38.183: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:129
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-422k6 ss-0 -- /bin/sh -c echo $(hostname) > /data/hostname; sync;] []  <nil>  error: unable to upgrade connection: container not found (\"nginx\")\n [] <nil> 0xc421497980 exit status 1 <nil> <nil> true [0xc420e961b0 0xc420e961c8 0xc420e961e0] [0xc420e961b0 0xc420e961c8 0xc420e961e0] [0xc420e961c0 0xc420e961d8] [0x12769a0 0x12769a0] 0xc4212aeba0 <nil>}:\nCommand stdout:\n\nstderr:\nerror: unable to upgrade connection: container not found (\"nginx\")\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.64.158 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-statefulset-422k6 ss-0 -- /bin/sh -c echo $(hostname) > /data/hostname; sync;] []  <nil>  error: unable to upgrade connection: container not found ("nginx")
     [] <nil> 0xc421497980 exit status 1 <nil> <nil> true [0xc420e961b0 0xc420e961c8 0xc420e961e0] [0xc420e961b0 0xc420e961c8 0xc420e961e0] [0xc420e961c0 0xc420e961d8] [0x12769a0 0x12769a0] 0xc4212aeba0 <nil>}:
    Command stdout:
    
    stderr:
    error: unable to upgrade connection: container not found ("nginx")
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:117

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:510
Apr 14 15:46:38.182: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:189
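
Both port-forward failures above report EOF while reading kubectl's stdout: the test drives `kubectl port-forward` as a subprocess and expects its "Forwarding from ..." banner, and EOF means the subprocess exited or closed stdout before producing it. A stripped-down sketch of that pattern, with a placeholder pod name and port:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder pod/port; the e2e test builds this command against its own pod.
	cmd := exec.Command("kubectl", "port-forward", "pfpod", ":80")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		// This is the failure mode in the log: EOF before any banner appears.
		fmt.Println("Failed to read from kubectl port-forward stdout:", err)
		return
	}
	fmt.Println("banner:", line)
}
```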

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6801/
Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Apr 16 05:42:38.326: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.180.3.107:8080/dial?request=hostName&protocol=http&host=10.180.2.80&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:217

Issues about this test specifically: #32375
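
The intra-pod networking check above curls one netserver pod's /dial endpoint, which asks that pod to contact the peer and report the hostnames that answered; getting back an empty map[] means the peer never responded. A small sketch of how an equivalent dial URL is composed (query-parameter order may differ from the log):

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds a request like the one in the failure: ask the pod at
// proxyHost to dial targetHost:port once over HTTP and report hostnames.
func dialURL(proxyHost, targetHost string, port, tries int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", targetHost)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", fmt.Sprint(tries))
	return fmt.Sprintf("http://%s/dial?%s", proxyHost, q.Encode())
}

func main() {
	fmt.Println(dialURL("10.180.3.107:8080", "10.180.2.80", 8080, 1))
}
```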

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:191
Expected error:
    <*errors.errorString | 0xc421185c80>: {
        s: "gave up waiting for pod 'pvc-tester-tn8pn' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pvc-tester-tn8pn' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:395

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:64
Expected error:
    <*errors.StatusError | 0xc42176c800>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'read tcp 10.128.0.2:58732->10.128.0.3:4194: read: connection reset by peer'\\nTrying to reach: 'http://bootstrap-e2e-minion-group-q1z2:4194/containers/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'read tcp 10.128.0.2:58732->10.128.0.3:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-q1z2:4194/containers/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'read tcp 10.128.0.2:58732->10.128.0.3:4194: read: connection reset by peer'\nTrying to reach: 'http://bootstrap-e2e-minion-group-q1z2:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:327

Issues about this test specifically: #35297

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:242
Expected error:
    <*errors.errorString | 0xc420b26920>: {
        s: "gave up waiting for pod 'pvc-tester-fq01z' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pvc-tester-fq01z' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:395

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:375
Expected error:
    <*errors.errorString | 0xc421006040>: {
        s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
    }
    Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1719

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/6831/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:641
Expected
    <string>: 
to contain substring
    <string>: No resources found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:635

Failed: [k8s.io] Projected should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:917
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-04-17T03:56:58.817105091Z"
    kubernetes.io/config.source="api"
    [… the same three annotation lines repeat verbatim for every remaining poll attempt in the captured output]
    
to contain substring
    <string>: builder="foo"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:916

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:242
Expected error:
    <*errors.errorString | 0xc420bbcae0>: {
        s: "gave up waiting for pod 'pvc-tester-0q7bn' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'pvc-tester-0q7bn' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv_util.go:395

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/7070/
Multiple broken tests:

Failed: [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:1023
Expected error:
    <*errors.errorString | 0xc420800660>: {
        s: "expected \"projected-volume-42a8a93e-2728-11e7-97d0-0242ac110005\" in container output: Expected\n    <string>: \nto contain substring\n    <string>: projected-volume-42a8a93e-2728-11e7-97d0-0242ac110005",
    }
    expected "projected-volume-42a8a93e-2728-11e7-97d0-0242ac110005" in container output: Expected
        <string>: 
    to contain substring
        <string>: projected-volume-42a8a93e-2728-11e7-97d0-0242ac110005
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2219

Failed: [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:199
Expected error:
    <*errors.errorString | 0xc420ffe650>: {
        s: "pod \"pvc-tester-k2h8h\" did not exit with Success: pod \"pvc-tester-k2h8h\" failed to reach Success: gave up waiting for pod 'pvc-tester-k2h8h' to be 'success or failure' after 5m0s",
    }
    pod "pvc-tester-k2h8h" did not exit with Success: pod "pvc-tester-k2h8h" failed to reach Success: gave up waiting for pod 'pvc-tester-k2h8h' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:44

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
Apr 22 00:06:54.755: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/7246/
Multiple broken tests:

Failed: [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:339
Expected error:
    <*errors.errorString | 0xc420410020>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:66
Expected error:
    <*errors.errorString | 0xc420bec000>: {
        s: "expected pod \"pod-configmaps-9249cefa-29f8-11e7-af2d-0242ac11000a\" success: gave up waiting for pod 'pod-configmaps-9249cefa-29f8-11e7-af2d-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-9249cefa-29f8-11e7-af2d-0242ac11000a" success: gave up waiting for pod 'pod-configmaps-9249cefa-29f8-11e7-af2d-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] Projected should provide container's memory limit [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:935
Expected error:
    <*errors.errorString | 0xc4213c3ce0>: {
        s: "expected pod \"downwardapi-volume-54809161-29f9-11e7-af2d-0242ac11000a\" success: gave up waiting for pod 'downwardapi-volume-54809161-29f9-11e7-af2d-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-54809161-29f9-11e7-af2d-0242ac11000a" success: gave up waiting for pod 'downwardapi-volume-54809161-29f9-11e7-af2d-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc420e75340>: {
        s: "expected pod \"pod-8c3cb33e-29f9-11e7-85e5-0242ac11000a\" success: gave up waiting for pod 'pod-8c3cb33e-29f9-11e7-85e5-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-8c3cb33e-29f9-11e7-85e5-0242ac11000a" success: gave up waiting for pod 'pod-8c3cb33e-29f9-11e7-85e5-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:243
Expected error:
    <*errors.errorString | 0xc420ea6d60>: {
        s: "expected pod \"pod-service-account-9eef1bb7-29f8-11e7-833b-0242ac11000a-ptphs\" success: gave up waiting for pod 'pod-service-account-9eef1bb7-29f8-11e7-833b-0242ac11000a-ptphs' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-service-account-9eef1bb7-29f8-11e7-833b-0242ac11000a-ptphs" success: gave up waiting for pod 'pod-service-account-9eef1bb7-29f8-11e7-833b-0242ac11000a-ptphs' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Issues about this test specifically: #37526

Failed: [k8s.io] Services should serve a basic endpoint from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:131
Apr 25 13:53:52.863: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-8gtdc to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #26678 #29318

Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:166
Expected error:
    <*errors.errorString | 0xc4210840d0>: {
        s: "Timeout while waiting for pods with labels \"k8s-app=nginx-ingress-lb\" to be running",
    }
    Timeout while waiting for pods with labels "k8s-app=nginx-ingress-lb" to be running
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ingress_utils.go:1059

Issues about this test specifically: #38556

Failed: [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:52
Expected error:
    <*errors.errorString | 0xc420dfea20>: {
        s: "expected pod \"pod-projected-secrets-a9d8f0fa-29f8-11e7-ab47-0242ac11000a\" success: gave up waiting for pod 'pod-projected-secrets-a9d8f0fa-29f8-11e7-ab47-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-projected-secrets-a9d8f0fa-29f8-11e7-ab47-0242ac11000a" success: gave up waiting for pod 'pod-projected-secrets-a9d8f0fa-29f8-11e7-ab47-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:341
Expected error:
    <*errors.errorString | 0xc4204745c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:240

Issues about this test specifically: #26194 #26338 #30345 #34571 #43101

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:181
Expected error:
    <*errors.errorString | 0xc4210c5ac0>: {
        s: "expected pod \"downwardapi-volume-9b78048b-29f9-11e7-aae4-0242ac11000a\" success: gave up waiting for pod 'downwardapi-volume-9b78048b-29f9-11e7-aae4-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-9b78048b-29f9-11e7-aae4-0242ac11000a" success: gave up waiting for pod 'downwardapi-volume-9b78048b-29f9-11e7-aae4-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:49
Expected error:
    <*errors.errorString | 0xc421301f80>: {
        s: "expected pod \"downwardapi-volume-9226ddd4-29f8-11e7-b57e-0242ac11000a\" success: gave up waiting for pod 'downwardapi-volume-9226ddd4-29f8-11e7-b57e-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-9226ddd4-29f8-11e7-b57e-0242ac11000a" success: gave up waiting for pod 'downwardapi-volume-9226ddd4-29f8-11e7-b57e-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] Projected should set mode on item file [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:832
Expected error:
    <*errors.errorString | 0xc42116dce0>: {
        s: "expected pod \"downwardapi-volume-a8b6b154-29f8-11e7-98d7-0242ac11000a\" success: gave up waiting for pod 'downwardapi-volume-a8b6b154-29f8-11e7-98d7-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-a8b6b154-29f8-11e7-98d7-0242ac11000a" success: gave up waiting for pod 'downwardapi-volume-a8b6b154-29f8-11e7-98d7-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:704
Expected
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.10.77 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-n2xv5 nginx -- /bin/sh -c /tmp/kubectl create -f /tmp/invalid-configmap-with-namespace.yaml --v=7 2>&1] []  <nil> I0425 20:55:57.591026      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.591989      52 round_trippers.go:395] GET https://10.0.0.1:443/version\nI0425 20:55:57.592031      52 round_trippers.go:402] Request Headers:\nI0425 20:55:57.592046      52 round_trippers.go:405]     Accept: application/json, */*\nI0425 20:55:57.592056      52 round_trippers.go:405]     User-Agent: kubectl/v1.7.0 (linux/amd64) kubernetes/896d2af\nI0425 20:55:57.592066      52 round_trippers.go:405]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMta3ViZWN0bC1uMnh2NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLTd4dzk5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4OGQ1ZWZhZi0yOWY5LTExZTctOGU2ZC00MjAxMGE4MDAwMDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZTJlLXRlc3RzLWt1YmVjdGwtbjJ4djU6ZGVmYXVsdCJ9.m-n1wL90MWBvyMTw4WXejp5nUBpuXOTluIAvYrBa3w1nEQLIiWCK0ow606iYXGDCycpydlUeyLb70jDfjSqfg_QXEGJPfFcFkSd0xQrK_Rd2X7Emf7CshVpyIMxMelZsik5-HsODrMlLvBrHsCKwk2LyQOvmGVzBlR1ZKnDVqeTJ4PQPqDgywTZJlQYmdFG5IwPVVlFtdjA6vtAIZuJ-mQ17oqfoxD_Jl-fJN0iX-ODOg1qBAQlqStvoDJ0rtPAEuypu1uq-7VexVgsIYXEc-N80glB2W6MPEoEuBbb2r2rQGL571f89BjWmWNHdsFFciWm6ZWKdv2wrHE28DsLCSQ\nI0425 20:55:57.630122      52 round_trippers.go:420] Response Status: 200 OK in 38 milliseconds\nI0425 20:55:57.630688      52 merged_client_builder.go:160] Using in-cluster namespace\nI0425 20:55:57.631436      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.632764      52 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/servergroups.json\nI0425 20:55:57.633728      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.634108      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json\nI0425 20:55:57.634680      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.635042      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json\nI0425 20:55:57.635377      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.635757      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json\nI0425 20:55:57.636281      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json\nI0425 20:55:57.637022      52 cached_discovery.go:71] returning cached discovery info from 
/root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json\nI0425 20:55:57.637336      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json\nI0425 20:55:57.637664      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.637988      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.638331      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.638711      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.639066      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.639567      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json\nI0425 20:55:57.639884      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json\nI0425 20:55:57.640413      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json\nI0425 20:55:57.642557      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json\nI0425 20:55:57.642957      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.643432      52 decoder.go:224] decoding stream as YAML\nI0425 20:55:57.643796      52 round_trippers.go:395] GET https://10.0.0.1:443/swaggerapi/api/v1\nI0425 20:55:57.643816      52 round_trippers.go:402] Request Headers:\nI0425 20:55:57.643827      52 round_trippers.go:405]     Accept: application/json, */*\nI0425 20:55:57.643838      52 round_trippers.go:405]     User-Agent: kubectl/v1.7.0 (linux/amd64) kubernetes/896d2af\nI0425 20:55:57.643846      52 round_trippers.go:405]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMta3ViZWN0bC1uMnh2NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLTd4dzk5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4OGQ1ZWZhZi0yOWY5LTExZTctOGU2ZC00MjAxMGE4MDAwMDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZTJlLXRlc3RzLWt1YmVjdGwtbjJ4djU6ZGVmYXVsdCJ9.m-n1wL90MWBvyMTw4WXejp5nUBpuXOTluIAvYrBa3w1nEQLIiWCK0ow606iYXGDCycpydlUeyLb70jDfjSqfg_QXEGJPfFcFkSd0xQrK_Rd2X7Emf7CshVpyIMxMelZsik5-HsODrMlLvBrHsCKwk2LyQOvmGVzBlR1ZKnDVqeTJ4PQPqDgywTZJlQYmdFG5IwPVVlFtdjA6vtAIZuJ-mQ17oqfoxD_Jl-fJN0iX-ODOg1qBAQlqStvoDJ0rtPAEuypu1uq-7VexVgsIYXEc-N80glB2W6MPEoEuBbb2r2rQGL571f89BjWmWNHdsFFciWm6ZWKdv2wrHE28DsLCSQ\nI0425 20:55:57.677600      52 round_trippers.go:420] Response Status: 200 OK in 33 milliseconds\nI0425 20:55:57.777945      52 cached_discovery.go:118] returning cached discovery info from 
/root/.kube/cache/discovery/10.0.0.1_443/servergroups.json\nI0425 20:55:57.778868      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.779291      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json\nI0425 20:55:57.779648      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.780134      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json\nI0425 20:55:57.780594      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.781157      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json\nI0425 20:55:57.781527      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json\nI0425 20:55:57.781933      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json\nI0425 20:55:57.782293      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json\nI0425 20:55:57.782720      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.783132      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.783534      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.783849      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.784229      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.785364      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json\nI0425 20:55:57.785580      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json\nI0425 20:55:57.785836      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json\nI0425 20:55:57.786692      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json\nI0425 20:55:57.789159      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.789758      52 round_trippers.go:395] POST https://10.0.0.1:443/api/v1/namespaces/configmap-namespace/configmaps\nI0425 20:55:57.789781      52 round_trippers.go:402] Request Headers:\nI0425 20:55:57.789793      52 
round_trippers.go:405]     Content-Type: application/json\nI0425 20:55:57.789802      52 round_trippers.go:405]     Accept: application/json\nI0425 20:55:57.789811      52 round_trippers.go:405]     User-Agent: kubectl/v1.7.0 (linux/amd64) kubernetes/896d2af\nI0425 20:55:57.789818      52 round_trippers.go:405]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMta3ViZWN0bC1uMnh2NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLTd4dzk5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4OGQ1ZWZhZi0yOWY5LTExZTctOGU2ZC00MjAxMGE4MDAwMDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZTJlLXRlc3RzLWt1YmVjdGwtbjJ4djU6ZGVmYXVsdCJ9.m-n1wL90MWBvyMTw4WXejp5nUBpuXOTluIAvYrBa3w1nEQLIiWCK0ow606iYXGDCycpydlUeyLb70jDfjSqfg_QXEGJPfFcFkSd0xQrK_Rd2X7Emf7CshVpyIMxMelZsik5-HsODrMlLvBrHsCKwk2LyQOvmGVzBlR1ZKnDVqeTJ4PQPqDgywTZJlQYmdFG5IwPVVlFtdjA6vtAIZuJ-mQ17oqfoxD_Jl-fJN0iX-ODOg1qBAQlqStvoDJ0rtPAEuypu1uq-7VexVgsIYXEc-N80glB2W6MPEoEuBbb2r2rQGL571f89BjWmWNHdsFFciWm6ZWKdv2wrHE28DsLCSQ\nI0425 20:55:57.793827      52 round_trippers.go:420] Response Status: 403 Forbidden in 3 milliseconds\nI0425 20:55:57.794623      52 helpers.go:207] server response object: [{\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"error when creating \\\"/tmp/invalid-configmap-with-namespace.yaml\\\": User \\\"system:serviceaccount:e2e-tests-kubectl-n2xv5:default\\\" cannot create configmaps in the namespace \\\"configmap-namespace\\\". (post configmaps)\",\n  \"reason\": \"Forbidden\",\n  \"details\": {\n    \"kind\": \"configmaps\",\n    \"causes\": [\n      {\n        \"reason\": \"UnexpectedServerResponse\",\n        \"message\": \"User \\\"system:serviceaccount:e2e-tests-kubectl-n2xv5:default\\\" cannot create configmaps in the namespace \\\"configmap-namespace\\\".\"\n      }\n    ]\n  },\n  \"code\": 403\n}]\nF0425 20:55:57.794846      52 helpers.go:120] Error from server (Forbidden): error when creating \"/tmp/invalid-configmap-with-namespace.yaml\": User \"system:serviceaccount:e2e-tests-kubectl-n2xv5:default\" cannot create configmaps in the namespace \"configmap-namespace\". 
(post configmaps)\n  [] <nil> 0xc4213a6c00 exit status 255 <nil> <nil> true [0xc420090a70 0xc420090af0 0xc420090b58] [0xc420090a70 0xc420090af0 0xc420090b58] [0xc420090ab0 0xc420090b38] [0x12b1ee0 0x12b1ee0] 0xc4207aee40 <nil>}:\nCommand stdout:\nI0425 20:55:57.591026      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.591989      52 round_trippers.go:395] GET https://10.0.0.1:443/version\nI0425 20:55:57.592031      52 round_trippers.go:402] Request Headers:\nI0425 20:55:57.592046      52 round_trippers.go:405]     Accept: application/json, */*\nI0425 20:55:57.592056      52 round_trippers.go:405]     User-Agent: kubectl/v1.7.0 (linux/amd64) kubernetes/896d2af\nI0425 20:55:57.592066      52 round_trippers.go:405]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMta3ViZWN0bC1uMnh2NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLTd4dzk5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4OGQ1ZWZhZi0yOWY5LTExZTctOGU2ZC00MjAxMGE4MDAwMDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZTJlLXRlc3RzLWt1YmVjdGwtbjJ4djU6ZGVmYXVsdCJ9.m-n1wL90MWBvyMTw4WXejp5nUBpuXOTluIAvYrBa3w1nEQLIiWCK0ow606iYXGDCycpydlUeyLb70jDfjSqfg_QXEGJPfFcFkSd0xQrK_Rd2X7Emf7CshVpyIMxMelZsik5-HsODrMlLvBrHsCKwk2LyQOvmGVzBlR1ZKnDVqeTJ4PQPqDgywTZJlQYmdFG5IwPVVlFtdjA6vtAIZuJ-mQ17oqfoxD_Jl-fJN0iX-ODOg1qBAQlqStvoDJ0rtPAEuypu1uq-7VexVgsIYXEc-N80glB2W6MPEoEuBbb2r2rQGL571f89BjWmWNHdsFFciWm6ZWKdv2wrHE28DsLCSQ\nI0425 20:55:57.630122      52 round_trippers.go:420] Response Status: 200 OK in 38 milliseconds\nI0425 20:55:57.630688      52 merged_client_builder.go:160] Using in-cluster namespace\nI0425 20:55:57.631436      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.632764      52 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/servergroups.json\nI0425 20:55:57.633728      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.634108      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json\nI0425 20:55:57.634680      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.635042      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json\nI0425 20:55:57.635377      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.635757      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json\nI0425 20:55:57.636281      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json\nI0425 20:55:57.637022      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json\nI0425 20:55:57.637336      52 
cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json\nI0425 20:55:57.637664      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.637988      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.638331      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.638711      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.639066      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.639567      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json\nI0425 20:55:57.639884      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json\nI0425 20:55:57.640413      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json\nI0425 20:55:57.642557      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json\nI0425 20:55:57.642957      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.643432      52 decoder.go:224] decoding stream as YAML\nI0425 20:55:57.643796      52 round_trippers.go:395] GET https://10.0.0.1:443/swaggerapi/api/v1\nI0425 20:55:57.643816      52 round_trippers.go:402] Request Headers:\nI0425 20:55:57.643827      52 round_trippers.go:405]     Accept: application/json, */*\nI0425 20:55:57.643838      52 round_trippers.go:405]     User-Agent: kubectl/v1.7.0 (linux/amd64) kubernetes/896d2af\nI0425 20:55:57.643846      52 round_trippers.go:405]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMta3ViZWN0bC1uMnh2NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLTd4dzk5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4OGQ1ZWZhZi0yOWY5LTExZTctOGU2ZC00MjAxMGE4MDAwMDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZTJlLXRlc3RzLWt1YmVjdGwtbjJ4djU6ZGVmYXVsdCJ9.m-n1wL90MWBvyMTw4WXejp5nUBpuXOTluIAvYrBa3w1nEQLIiWCK0ow606iYXGDCycpydlUeyLb70jDfjSqfg_QXEGJPfFcFkSd0xQrK_Rd2X7Emf7CshVpyIMxMelZsik5-HsODrMlLvBrHsCKwk2LyQOvmGVzBlR1ZKnDVqeTJ4PQPqDgywTZJlQYmdFG5IwPVVlFtdjA6vtAIZuJ-mQ17oqfoxD_Jl-fJN0iX-ODOg1qBAQlqStvoDJ0rtPAEuypu1uq-7VexVgsIYXEc-N80glB2W6MPEoEuBbb2r2rQGL571f89BjWmWNHdsFFciWm6ZWKdv2wrHE28DsLCSQ\nI0425 20:55:57.677600      52 round_trippers.go:420] Response Status: 200 OK in 33 milliseconds\nI0425 20:55:57.777945      52 cached_discovery.go:118] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/servergroups.json\nI0425 20:55:57.778868      52 cached_discovery.go:71] returning cached 
discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apiregistration.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.779291      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1/serverresources.json\nI0425 20:55:57.779648      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authentication.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.780134      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1/serverresources.json\nI0425 20:55:57.780594      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.781157      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/autoscaling/v1/serverresources.json\nI0425 20:55:57.781527      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v1/serverresources.json\nI0425 20:55:57.781933      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/batch/v2alpha1/serverresources.json\nI0425 20:55:57.782293      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1/serverresources.json\nI0425 20:55:57.782720      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/storage.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.783132      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.783534      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/rbac.authorization.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.783849      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/settings.k8s.io/v1alpha1/serverresources.json\nI0425 20:55:57.784229      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/certificates.k8s.io/v1beta1/serverresources.json\nI0425 20:55:57.785364      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/extensions/v1beta1/serverresources.json\nI0425 20:55:57.785580      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/policy/v1beta1/serverresources.json\nI0425 20:55:57.785836      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/apps/v1beta1/serverresources.json\nI0425 20:55:57.786692      52 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.0.0.1_443/v1/serverresources.json\nI0425 20:55:57.789159      52 merged_client_builder.go:123] Using in-cluster configuration\nI0425 20:55:57.789758      52 round_trippers.go:395] POST https://10.0.0.1:443/api/v1/namespaces/configmap-namespace/configmaps\nI0425 20:55:57.789781      52 round_trippers.go:402] Request Headers:\nI0425 20:55:57.789793      52 round_trippers.go:405]     Content-Type: application/json\nI0425 20:55:57.789802      52 round_trippers.go:405]     Accept: 
application/json\nI0425 20:55:57.789811      52 round_trippers.go:405]     User-Agent: kubectl/v1.7.0 (linux/amd64) kubernetes/896d2af\nI0425 20:55:57.789818      52 round_trippers.go:405]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJlMmUtdGVzdHMta3ViZWN0bC1uMnh2NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLTd4dzk5Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4OGQ1ZWZhZi0yOWY5LTExZTctOGU2ZC00MjAxMGE4MDAwMDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZTJlLXRlc3RzLWt1YmVjdGwtbjJ4djU6ZGVmYXVsdCJ9.m-n1wL90MWBvyMTw4WXejp5nUBpuXOTluIAvYrBa3w1nEQLIiWCK0ow606iYXGDCycpydlUeyLb70jDfjSqfg_QXEGJPfFcFkSd0xQrK_Rd2X7Emf7CshVpyIMxMelZsik5-HsODrMlLvBrHsCKwk2LyQOvmGVzBlR1ZKnDVqeTJ4PQPqDgywTZJlQYmdFG5IwPVVlFtdjA6vtAIZuJ-mQ17oqfoxD_Jl-fJN0iX-ODOg1qBAQlqStvoDJ0rtPAEuypu1uq-7VexVgsIYXEc-N80glB2W6MPEoEuBbb2r2rQGL571f89BjWmWNHdsFFciWm6ZWKdv2wrHE28DsLCSQ\nI0425 20:55:57.793827      52 round_trippers.go:420] Response Status: 403 Forbidden in 3 milliseconds\nI0425 20:55:57.794623      52 helpers.go:207] server response object: [{\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"error when creating \\\"/tmp/invalid-configmap-with-namespace.yaml\\\": User \\\"system:serviceaccount:e2e-tests-kubectl-n2xv5:default\\\" cannot create configmaps in the namespace \\\"configmap-namespace\\\". (post configmaps)\",\n  \"reason\": \"Forbidden\",\n  \"details\": {\n    \"kind\": \"configmaps\",\n    \"causes\": [\n      {\n        \"reason\": \"UnexpectedServerResponse\",\n        \"message\": \"User \\\"system:serviceaccount:e2e-tests-kubectl-n2xv5:default\\\" cannot create configmaps in the namespace \\\"configmap-namespace\\\".\"\n      }\n    ]\n  },\n  \"code\": 403\n}]\nF0425 20:55:57.794846      52 helpers.go:120] Error from server (Forbidden): error when creating \"/tmp/invalid-configmap-with-namespace.yaml\": User \"system:serviceaccount:e2e-tests-kubectl-n2xv5:default\" cannot create configmaps in the namespace \"configmap-namespace\". (post configmaps)\n\nstderr:\n\nerror:\nexit status 255\n",
        },
        Code: 255,
    }
to contain substring
    <string>: POST https://10.0.0.1:/api/v1/namespaces/configmap-namespace/configmaps
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:668

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:156
Expected error:
    <*errors.errorString | 0xc420429770>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:488
Expected error:
    <*errors.errorString | 0xc4203fce80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:269

Issues about this test specifically: #32584

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
Expected error:
    <*errors.errorString | 0xc420aff270>: {
        s: "expected pod \"downwardapi-volume-d304aad0-29f8-11e7-8232-0242ac11000a\" success: gave up waiting for pod 'downwardapi-volume-d304aad0-29f8-11e7-8232-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-d304aad0-29f8-11e7-8232-0242ac11000a" success: gave up waiting for pod 'downwardapi-volume-d304aad0-29f8-11e7-8232-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/7429/
Multiple broken tests:

Failed: [k8s.io] Projected should update labels on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:888
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    [... the same two lines repeat, unchanged, for the rest of the 120 s polling window ...]
    
to contain substring
    <string>: key3="value3"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:887
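
The "Timed out after 120.001s ... to contain substring" shape above is a Gomega Eventually assertion polling the contents of the downward-API file until the newly added label shows up. A standalone sketch of that assertion pattern, under the assumption that a helper reads /etc/labels out of the pod (readLabelsFile below is a fake stand-in, and the fail handler is only needed because this runs outside Ginkgo):

package main

import (
	"fmt"
	"time"

	"github.com/onsi/gomega"
)

func main() {
	// Outside Ginkgo a fail handler must be registered before asserting.
	gomega.RegisterFailHandler(func(message string, _ ...int) {
		fmt.Println("assertion failed:", message)
	})

	start := time.Now()
	// Fake stand-in for the framework helper that execs a read of /etc/labels
	// in the test pod; here the updated label "appears" after ~3s so the
	// example terminates quickly instead of polling for 120s.
	readLabelsFile := func() string {
		if time.Since(start) > 3*time.Second {
			return "key1=\"value1\"\nkey2=\"value2\"\nkey3=\"value3\"\n"
		}
		return "key1=\"value1\"\nkey2=\"value2\"\n"
	}

	// The real test polls for up to 120s, the timeout reported above.
	gomega.Eventually(readLabelsFile, 120*time.Second, 2*time.Second).
		Should(gomega.ContainSubstring(`key3="value3"`))
	fmt.Println("observed updated label in /etc/labels")
}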

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:515
Apr 29 12:47:39.137: Missing "^Received expected client data$" from log: Accepted client connection

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:527

Failed: [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume. [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:516
Test Panicked
/usr/local/go/src/runtime/asm_amd64.s:514

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/7471/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:704
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.47.132 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-xmpns nginx -- /bin/sh -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1] []  <nil> I0430 16:35:27.578628     111 loader.go:357] Config loaded from file /tmp/icc-override.kubeconfig\nI0430 16:35:57.580669     111 round_trippers.go:405] GET https://kubernetes.default.svc:443/api  in 30000 milliseconds\nI0430 16:35:57.580819     111 cached_discovery.go:126] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout\nI0430 16:35:57.581764     111 helpers.go:225] Connection error: Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout\nF0430 16:35:57.582113     111 helpers.go:120] Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout\n  [] <nil> 0xc4211190b0 exit status 255 <nil> <nil> true [0xc4215b47a0 0xc4215b47b8 0xc4215b47e0] [0xc4215b47a0 0xc4215b47b8 0xc4215b47e0] [0xc4215b47b0 0xc4215b47d8] [0x182d750 0x182d750] 0xc420fd6720 <nil>}:\nCommand stdout:\nI0430 16:35:27.578628     111 loader.go:357] Config loaded from file /tmp/icc-override.kubeconfig\nI0430 16:35:57.580669     111 round_trippers.go:405] GET https://kubernetes.default.svc:443/api  in 30000 milliseconds\nI0430 16:35:57.580819     111 cached_discovery.go:126] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout\nI0430 16:35:57.581764     111 helpers.go:225] Connection error: Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout\nF0430 16:35:57.582113     111 helpers.go:120] Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout\n\nstderr:\n\nerror:\nexit status 255\n",
        },
        Code: 255,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.47.132 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-xmpns nginx -- /bin/sh -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1] []  <nil> I0430 16:35:27.578628     111 loader.go:357] Config loaded from file /tmp/icc-override.kubeconfig
    I0430 16:35:57.580669     111 round_trippers.go:405] GET https://kubernetes.default.svc:443/api  in 30000 milliseconds
    I0430 16:35:57.580819     111 cached_discovery.go:126] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout
    I0430 16:35:57.581764     111 helpers.go:225] Connection error: Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout
    F0430 16:35:57.582113     111 helpers.go:120] Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout
      [] <nil> 0xc4211190b0 exit status 255 <nil> <nil> true [0xc4215b47a0 0xc4215b47b8 0xc4215b47e0] [0xc4215b47a0 0xc4215b47b8 0xc4215b47e0] [0xc4215b47b0 0xc4215b47d8] [0x182d750 0x182d750] 0xc420fd6720 <nil>}:
    Command stdout:
    I0430 16:35:27.578628     111 loader.go:357] Config loaded from file /tmp/icc-override.kubeconfig
    I0430 16:35:57.580669     111 round_trippers.go:405] GET https://kubernetes.default.svc:443/api  in 30000 milliseconds
    I0430 16:35:57.580819     111 cached_discovery.go:126] skipped caching discovery info due to Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout
    I0430 16:35:57.581764     111 helpers.go:225] Connection error: Get https://kubernetes.default.svc:443/api: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout
    F0430 16:35:57.582113     111 helpers.go:120] Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 10.0.0.10:53: dial udp 10.0.0.10:53: i/o timeout
    
    stderr:
    
    error:
    exit status 255
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3895
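
The underlying symptom in this run is not kubectl itself but cluster DNS: the exec'd kubectl cannot resolve kubernetes.default.svc through the cluster resolver at 10.0.0.10:53 (i/o timeout). A quick way to reproduce just the DNS step from a debug pod is sketched below; the resolver address and the fully qualified service name are taken from the log above, and this is not part of the e2e suite:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Pin the resolver to the cluster DNS service that timed out in the log.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "10.0.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed (matches the i/o timeout in the failure):", err)
		return
	}
	fmt.Println("kubernetes.default.svc resolves to:", addrs)
}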

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1164
Apr 30 09:48:07.294: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1102

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:488
Expected error:
    <*errors.errorString | 0xc420316950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:223

Issues about this test specifically: #32584

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:45
Apr 30 09:45:13.341: monitoring using heapster and influxdb test failed
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:327

Issues about this test specifically: #29647 #35627 #38293

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
Apr 30 09:53:54.328: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:344

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:375
Apr 30 09:45:11.070: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1810

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812
Apr 30 09:34:10.981: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:809

Issues about this test specifically: #28420 #36122

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-proto/7492/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:704
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.47.132 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-vfchv nginx -- /bin/sh -c /tmp/kubectl get pods --v=7 2>&1] []  <nil> /bin/sh: 1: /tmp/kubectl: not found\n  [] <nil> 0xc421376570 exit status 127 <nil> <nil> true [0xc420aea000 0xc420aea018 0xc420aea038] [0xc420aea000 0xc420aea018 0xc420aea038] [0xc420aea010 0xc420aea030] [0x182d750 0x182d750] 0xc420e481e0 <nil>}:\nCommand stdout:\n/bin/sh: 1: /tmp/kubectl: not found\n\nstderr:\n\nerror:\nexit status 127\n",
        },
        Code: 127,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.188.47.132 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-kubectl-vfchv nginx -- /bin/sh -c /tmp/kubectl get pods --v=7 2>&1] []  <nil> /bin/sh: 1: /tmp/kubectl: not found
      [] <nil> 0xc421376570 exit status 127 <nil> <nil> true [0xc420aea000 0xc420aea018 0xc420aea038] [0xc420aea000 0xc420aea018 0xc420aea038] [0xc420aea010 0xc420aea030] [0x182d750 0x182d750] 0xc420e481e0 <nil>}:
    Command stdout:
    /bin/sh: 1: /tmp/kubectl: not found
    
    stderr:
    
    error:
    exit status 127
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3895
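
Unlike the DNS timeout in the previous run, this one fails earlier: the kubectl binary the test stages into the nginx pod at /tmp/kubectl is simply not there when the exec runs. A hedged sketch of a staging-and-verify step is below; this is not the test's actual staging code, and only the source path and namespace are copied from the command line in the failure above:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the host kubectl and echoes the combined output.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	ns := "e2e-tests-kubectl-vfchv" // namespace from the failure above

	// Stage the test's kubectl binary into the pod.
	if err := run("cp", "/workspace/kubernetes/platforms/linux/amd64/kubectl", ns+"/nginx:/tmp/kubectl"); err != nil {
		fmt.Println("staging kubectl into the pod failed:", err)
		return
	}

	// Verify the binary actually landed before the test tries to use it.
	if err := run("exec", "-n", ns, "nginx", "--", "/bin/sh", "-c",
		"test -x /tmp/kubectl && /tmp/kubectl version --client"); err != nil {
		fmt.Println("staged kubectl missing or not executable:", err)
	}
}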

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:154
Timed out after 120.002s.
Expected
    <string>: content of file "/etc/annotations": builder="bar"
    kubernetes.io/config.seen="2017-05-01T03:06:56.940639083Z"
    kubernetes.io/config.source="api"
    ... (the same three lines repeat for every poll of the file over the 120s timeout; builder never changed from "bar")
    
to contain substring
    <string>: builder="foo"
    
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
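
For anyone triaging this by hand: the test mounts the pod's metadata.annotations at /etc/annotations through a downward API volume, flips the builder annotation, and polls the file until it reflects the new value. A minimal manual check of the same path looks roughly like this (the pod name below is illustrative, not the one the test generates):

    # Assumes a pod named "annotation-demo" that mounts metadata.annotations
    # at /etc/annotations via a downwardAPI volume, as the e2e pod does.
    kubectl annotate pod annotation-demo builder=foo --overwrite
    # The kubelet refreshes downward API volumes on its sync period, so the file
    # should pick up the new value within a minute or so; in this run it never did.
    kubectl exec annotation-demo -- cat /etc/annotations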

Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:215
Apr 30 20:08:52.232: Timed out waiting for service multi-endpoint-test in namespace e2e-tests-services-602r8 to expose endpoints map[pod1:[100]] (1m0s elapsed)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service_util.go:1028

Issues about this test specifically: #29831
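
A useful first check when this flakes is whether the endpoints controller ever populated the service at all; a rough manual look, using the namespace from the log above:

    # The test expects pod1 to show up behind port 100 of the service.
    kubectl --namespace e2e-tests-services-602r8 get endpoints multi-endpoint-test -o yaml
    # Unready pods are excluded from endpoints, so also confirm the backing pod went Ready.
    kubectl --namespace e2e-tests-services-602r8 get pods -o wide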

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Apr 30 20:11:34.238: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.100.2.140 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:270

Issues about this test specifically: #35283 #36867
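
The probe that failed is the plain BusyBox nc command shown in the log, executed from a test pod on the node. A hedged manual repro against the same endpoint (the exec'd pod name is illustrative, and the 10.100.2.140 address is specific to this run):

    # netserver-1 listens on UDP 8081 and should echo its hostname back.
    kubectl exec test-container-pod -- sh -c "echo 'hostName' | timeout -t 2 nc -w 1 -u 10.100.2.140 8081"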

Failed: Test {e2e.go}

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --kube-api-content-type=application/vnd.kubernetes.protobuf: exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 #43025 #44541

Failed: TearDown {e2e.go}

error during ./hack/e2e-internal/e2e-down.sh: signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:423
Apr 30 20:19:21.495: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:382
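
This one is the scale-down half of the basic StatefulSet test, waiting for status.replicas to reach 0. A quick way to see where it got stuck (the statefulset name "ss" matches what the e2e suite usually creates, but treat it as an assumption):

    # The controller's view of the scale-down; this should converge to 0.
    kubectl get statefulset ss -o jsonpath='{.status.replicas}'
    # Scale-down walks ordinals in reverse and halts on an unhealthy pod,
    # so check whether a remaining replica is stuck NotReady.
    kubectl get pods -o wide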

@k8s-github-robot
Author

This Issue hasn't been active in 35 days. It will be closed in 54 days (Jun 29, 2017).

cc @k8s-merge-robot @wojtek-t

You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days.
